Fourtytwo42:
Do you mean that you never need to have a built-in delay because the PIC does it “automagically”? (I like that word.)
No.
When you are not converting, the sampling capacitor is connected to the analog input selected by [...] (I don't remember which register it is on your particular PIC that selects the current analog input).
When you set the "GO" bit, the sampling capacitor is disconnected from the selected analog input and connected to the rest of the ADC circuitry for the number of TADs it takes to complete the conversion.
The number of TADs it takes to convert a value is fixed, while the value of TAD is not. TAD can be derived from the system clock (and must stay within the limits given in the datasheet; on many PIC16 parts the minimum TAD is around 1 µs) or from a dedicated RC oscillator.
The sampling capacitor is a capacitor, so it needs to be charged to the input voltage. The best way to do so is by connecting a low impedance source to the analog pin (for example an opamp used as buffer with a small resistor in series).
The datasheet specifies a maximum recommended source impedance. If you exceed it, two things may/will happen: either the time to charge the capacitor gets too long, so you have to spend a lot of time sampling the input, or the parasitic (leakage) currents at the ADC input develop enough voltage across the high source impedance to shift the reading. This is what he means by degrading the performance of the ADC.
Knowing the source impedance, you can use the ADC input model to estimate how much time you have to spend sampling an input. And that's why you need the sampling delay: to charge the sampling capacitor.
On more recent and more evolved PICs you have the possibility to perform auto-sampling/auto-converting (using a general-purpose timer to initiate the conversion, or a dedicated timer inside the ADC). In that case the sampling time is STILL expressed as a number of TADs, so Tsample + Tconvert = Tconversion, and from that you get the sampling frequency of your ADC.
On more recent and more evolved PICs you can even have the possibility to SCAN the analog inputs. That is: when channels are enabled for scan, the ADC acquires channel A, converts it, and puts the result in the first location of a buffer. Then it samples/converts channel B and puts the result in the second location of the buffer. Then channel C, and so on, then it restarts from channel A. The only thing you need to do is take the data out of the buffer before it gets overwritten.