Author Topic: Will I have to buffer my ADC inputs?  (Read 1901 times)


Offline eecookTopic starter

  • Regular Contributor
  • *
  • Posts: 118
  • Country: ar
Will I have to buffer my ADC inputs?
« on: November 21, 2017, 08:45:38 pm »
Hi Folks,

I am working on a design using a Cortex-M4 from NXP (PN: MK22FN128VLL10). When I get to the ADC, I see the following simplified schematic for the front-end:

[simplified ADC front-end schematic from the datasheet; image not shown]

What's weird to me is the condition on the time constant Ras × Cas < 1 ns. Does it basically mean I have to buffer every ADC input? What's worse, I am simulating the circuit with Ras = 250, Cas = 250n, a sampling frequency of 1 MHz and a sampling time of 1 ns, and the error is marginal...

Any ideas?
Nullius in verba
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: Will I have to buffer my ADC inputs?
« Reply #1 on: November 21, 2017, 09:20:32 pm »
Yes, ADCs almost never seem to integrate the required buffer.

A buffer is needed because internally, a SAR ADC works by sampling the input onto a sample-and-hold capacitor. This sampling typically happens in a very short time, so the input impedance (resistance or inductance) limits how fast this sampling capacitor charges.

Some ADCs allow you to configure a longer sampling time, so that higher impedance can be accepted - but there are limits due to leakage currents as well.

A unity-gain opamp is the "standard" solution.

A small MLCC directly at the input pin is a very simple solution if you don't mind the RC filtering effect it causes. Good for measuring slowly changing signals (think battery voltage, for example). For this to be effective, the external capacitor needs to be orders of magnitude larger than the sampling capacitor: for example, if the S&H cap is 20 pF, a 20 nF external cap would drop only 0.1% of its charge while sampling, causing negligible error to the measured voltage in cheap 10- or 12-bit ADCs. But with largish input Z, any C large enough seriously limits bandwidth, so sometimes the opamp is needed.
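The 0.1% figure can be checked with a quick charge-conservation sketch (values taken from the post above; the worst case assumes the S&H cap starts fully discharged):

```python
# Charge sharing between an external MLCC at the ADC pin and the
# internal sample-and-hold capacitor. Values from the post above.
C_sh = 20e-12   # internal S&H capacitor, 20 pF
C_ext = 20e-9   # external MLCC, 20 nF

# Worst case: the S&H cap starts empty and steals its charge from
# the external cap when the sampling switch closes.
droop = C_sh / (C_ext + C_sh)
print(f"relative droop per sample: {droop:.4%}")  # ~0.1%
```

At 12 bits (1 LSB ≈ 0.024% of full scale), that droop is a few LSB at worst and far less for a slowly changing signal, which is why the cap-only trick works for battery monitoring and the like.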
 

Offline MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: Will I have to buffer my ADC inputs?
« Reply #2 on: November 21, 2017, 09:57:43 pm »
What's weird to me is the condition on the time constant Ras × Cas < 1 ns. Does it basically mean I have to buffer every ADC input? What's worse, I am simulating the circuit with Ras = 250, Cas = 250n, a sampling frequency of 1 MHz and a sampling time of 1 ns, and the error is marginal...

Any ideas?
Something isn't right: RC = 250 × 250n = 62.5 µs. Probably a typo in the datasheet; an RC time constant is always R × C, not R / C.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8654
  • Country: gb
Re: Will I have to buffer my ADC inputs?
« Reply #3 on: November 21, 2017, 10:21:59 pm »
Most MCU ADCs don't have a buffer, because it would be hard to make one with the 0 V-to-reference kind of input swing people mostly want in MCU applications. It's also hard to make a buffer amp with very low offset.

Most MCU ADCs allow the sampling gate timing to be controlled. If you don't need to run the ADC very fast, the signal isn't changing fast, and you have a high-impedance signal source, you can probably just set a long sample time and not use a buffer. That is what a vast range of MCU applications do. If you look at the ADC pin with a scope, you'll see the pin voltage dip as the sampling gate opens and rise again as the sampling cap charges. You can experiment with the sampling gate time and see how far the pin voltage recovers before the gate closes. With the sampling gate left open long enough, you'll get a good conversion.
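As a rough sketch of that trade-off, here is a first-order model of the sampling phase. The source resistance and sampling-cap values are illustrative assumptions, not figures from any particular datasheet:

```python
import math

# First-order model: during the sample window the S&H cap charges
# toward the pin voltage through the source resistance, so the
# residual error decays as exp(-t / (R * C)).
R_src = 10e3    # assumed high-impedance source, 10 kOhm
C_sh  = 10e-12  # assumed internal sampling cap, 10 pF
bits  = 12

tau = R_src * C_sh
# Sample time needed for the residual to settle below 1/2 LSB,
# i.e. exp(-t/tau) < 2**-(bits + 1):
t_sample = tau * math.log(2 ** (bits + 1))
print(f"tau = {tau*1e9:.0f} ns, sample time needed ~ {t_sample*1e9:.0f} ns")
```

With these assumed numbers the ADC needs roughly a microsecond of sample time, which many MCU ADCs can be configured for; the cost is a lower maximum conversion rate.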
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4532
  • Country: au
    • send complaints here
Re: Will I have to buffer my ADC inputs?
« Reply #4 on: November 21, 2017, 10:26:40 pm »
Probably a typo in the datasheet; an RC time constant is always R × C, not R / C.
It's just wording rather than a typo; it's not an inequality as in R/C < 1 ns.
Instead it's a sentence and needs to be parsed as such:
The [listed components] (Rx / Cx) time constant should be kept to [less than] (<) 1 ns.
Mixing maths and words together loosely was a poor choice.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: Will I have to buffer my ADC inputs?
« Reply #5 on: November 22, 2017, 03:33:47 pm »
Anyway, stupid datasheet "rules of thumb".

"For best results", the RC time constant should be kept under 1 ns. I understand what it technically means, but why? I don't understand at all.

If I sample a signal at, say, 1 kHz, I'd probably want to RC filter the shit out of it, for best results, you know, for antialiasing and reducing noise. So in that case, I probably don't want GHz content there.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7392
  • Country: nl
  • Current job: ATEX product design
Re: Will I have to buffer my ADC inputs?
« Reply #6 on: November 22, 2017, 03:44:28 pm »
Anyway, stupid datasheet "rules of thumb".

"For best results", the RC time constant should be kept under 1 ns. I understand what it technically means, but why? I don't understand at all.

If I sample a signal at, say, 1 kHz, I'd probably want to RC filter the shit out of it, for best results, you know, for antialiasing and reducing noise. So in that case, I probably don't want GHz content there.
No. No. No.
You need a high-bandwidth buffer (therefore a small RC) on an ADC input. Not because of your input signal, but because of the ADC. The ADC is a SAR, switched-capacitor converter. It will dynamically load your input with a capacitor. If you place a large RC constant there, all you get is INL errors.
Do you see it? It is right on the schematic. Cadin is "randomly" charged.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: Will I have to buffer my ADC inputs?
« Reply #7 on: November 22, 2017, 04:01:55 pm »
You need a high-bandwidth buffer (therefore a small RC) on an ADC input. Not because of your input signal, but because of the ADC. The ADC is a SAR, switched-capacitor converter. It will dynamically load your input with a capacitor. If you place a large RC constant there, all you get is INL errors.
Do you see it? It is right on the schematic. Cadin is "randomly" charged.

Did you read what you replied to at all?

Yes, all of what you say is true, basic stuff, I described this earlier in my first reply.

But this is totally irrelevant to what you replied to.

Either a small R, or a high C (compared to the sampling cap), is actually required, so that the fast input switching to the sampling cap doesn't cause fluctuations in the voltage seen on the pin.

This can be an opamp, or a large cap. Depending on the situation, I use either of these two methods all the time, and have done so for almost two decades on many different MCU families as well as dedicated ADCs; never had a problem.

But the datasheet claims that a small RC time constant is required, i.e., both small R and small C are required, which is not true at all. For example, with the given R = 8 Ω, they require that C be at most 120 pF (0.96 ns) for "best results". So tell me: if I keep the small 8 Ω input resistance but put a 100 nF cap there instead of the maximum 120 pF, why do the results degrade? I don't know, I have never seen it happen, but I might really have missed something, since I haven't worked with high-resolution/accuracy metrology stuff that much.

A big C (such as 100n) acts as a stable voltage source, for which any fast switching to the small sampling C is irrelevant, i.e., only a minuscule percentage of the charge in the 100n cap is consumed during sampling. With low ESR and ESL right next to the input pin, what's the problem?
« Last Edit: November 22, 2017, 04:25:16 pm by Siwastaja »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7392
  • Country: nl
  • Current job: ATEX product design
Re: Will I have to buffer my ADC inputs?
« Reply #8 on: November 22, 2017, 05:54:58 pm »
Your 100 nF capacitor is 10000 times bigger than the internal capacitor. Which is fine if you are designing with the usual 10-12-bit, 100 kSPS ADC. This is 10 times the speed, and 100 times the resolution. If you only have a 100 nF capacitor with whatever resistor, the internal 10 pF will cause errors. About 6 LSB.
I don't know, I have never seen it happen, but I might really have missed something since I haven't worked with high resolution/accuracy metrology stuff that much.
Clearly.
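For what it's worth, the "about 6 LSB" figure follows from the same charge-sharing arithmetic as earlier in the thread (the 10 pF internal cap is the assumed value from the post above):

```python
# Worst-case charge-sharing droop of a 100 nF external cap feeding an
# assumed 10 pF internal sampling cap, expressed in 16-bit LSBs.
C_ext = 100e-9
C_sh  = 10e-12
bits  = 16

droop = C_sh / (C_ext + C_sh)          # worst-case relative error
error_lsb = droop * 2 ** bits
print(f"worst-case error: {error_lsb:.1f} LSB at {bits} bits")  # ~6.6 LSB
```

The same droop is well under 1 LSB at 12 bits, which is the crux of the disagreement here: whether that error matters depends entirely on the resolution you are actually trying to achieve.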
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: Will I have to buffer my ADC inputs?
« Reply #9 on: November 22, 2017, 06:39:37 pm »
Again, basic math which I know very well, still totally unrelated to what I wrote about  :palm:

This is really stupid behavior, again |O.

I basically supplied all this information already in my first post: use an opamp as a buffer, or, if high speed is not needed, a simple cap may be acceptable; I gave a simplified example calculation of the 0.1% error caused by said cap discharging while sampling. Now your mission seems to be to "prove" I don't know exactly those basics I already supplied the OP with! Why, I don't know. How this could work, I don't know; everyone can read my earlier post.

I understand very well why the R should be minimized, and I also understand very well how a fast feedback loop in a high-bandwidth opamp can help compensate for its non-zero output R.

You still didn't answer how increasing the C is a problem, which the datasheet implies by providing a maximum for the RC constant. I was criticizing this strange notion that two things are required:
1) Minimize R <--- this is of course true
2) Keep R*C < 1 ns, which means minimizing the C as well <--- I found this requirement strange. They basically instruct you to decrease C if you need to increase R for any reason, which would make the error even worse if you followed that advice. That's why I said this particular design hint makes no sense. Point #1 should be enough in itself.
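A quick numeric sketch of point 2 (all component values here are illustrative assumptions, not datasheet figures): with a fixed large source resistance, growing the external cap reduces the worst-case sampling error, the opposite of what "keep R*C < 1 ns" would suggest.

```python
import math

R_src = 10e3     # assumed high-impedance source; the <1 ns rule would
                 # then demand an absurdly small external C
C_sh  = 10e-12   # assumed internal sampling cap
t_s   = 100e-9   # assumed sampling window

def worst_case_error(C_ext):
    # Instant charge share when the empty S&H cap connects, then the
    # merged node recharges toward the source through R_src.
    share = C_sh / (C_ext + C_sh)
    return share * math.exp(-t_s / (R_src * (C_ext + C_sh)))

for C_ext in (120e-12, 10e-9, 100e-9):
    print(f"C_ext = {C_ext*1e12:8.0f} pF -> "
          f"worst-case error = {worst_case_error(C_ext):.1e}")
```

Under these assumptions the 120 pF case (which satisfies the 1 ns rule for R = 8 Ω but not for a 10 kΩ source) leaves a several-percent error, while 100 nF caps the error near the charge-sharing floor of about 1e-4.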

It's extremely clear to me that you cannot explain why they wrote that in the datasheet; you can only dodge the question and belittle others, as always. But I think I figured it out anyway: the datasheet probably assumes that 1) you use an implicit opamp (not drawn), and 2) you compensate for the opamp's output-stage R with its fast feedback loop; that feedback loop cannot work if the opamp is capacitively loaded, so the C would then have to be damped with some series R (such as tens of ohms) to prevent oscillation, and this R kind of defeats the whole purpose unless the C is huge for high resolution. And of course, if we assume you do have the opamp, there is absolutely no point in trying the "100n cap at the input" approach, since, as I described in the very first post you didn't read, it's an el-cheapo compromise alternative to the standard opamp solution.

This is what is wrong with this datasheet style of pulling numbers out of thin air and creating poorly documented rules of thumb: instead of actual design math that can be verified and simulated, these "rules of thumb" are always full of assumptions, and those assumptions are seldom spelled out at all. So here, the notion of a maximum RC time constant is probably there for the implicit opamp they assumed, for keeping Z below the resistance of the opamp's output-stage transistors. Matters are made worse by a poor choice of symbols, such as using the mathematical-looking "R/C" notation when they actually mean "R*C". (Or do they? If they really mean "R/C", their unit (ns) is wrong.)

Instead, they could have just recommended that a specific opamp, maybe even with a recommended part number, should be connected there directly and not loaded capacitively.

I can imagine an inexperienced designer carefully crafting an RC network there to satisfy this mystical "1ns RC constant circuit", whereas they should concentrate on crafting a simple opamp buffer circuit that is stable and provides low impedance at the relevant sampling frequencies to satisfy the SAR ADC.

I think I have said everything I can to explain my earlier "stupid datasheet rules of thumb" comment. NANDBlog can go on misinterpreting the posts and dodging the "difficult" questions, while belittling others as much as he likes, as usual. The OP was answered already.

PS. Do you think that a 6 LSB error at 16 bits is significant when the datasheet specifies almost no performance numbers for 16-bit operation, just the 12-bit modes, and defines a minimum effective number of bits of 11.4, and even that is with digital averaging applied, so basically an effective resolution of about 10 bits? You don't use a cheap, poorly specified MCU-integrated ADC like this where a few LSBs at 16-bit resolution matter. (I have paid $50 each for 5 MSPS 14-bit ADCs in a CCD imaging system, and it did provide a very noiseless and linear image, indeed; I'd never do it with an MCU ADC like this, even when it has MOAR BITS.)
« Last Edit: November 22, 2017, 07:06:06 pm by Siwastaja »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7392
  • Country: nl
  • Current job: ATEX product design
Re: Will I have to buffer my ADC inputs?
« Reply #10 on: November 22, 2017, 07:31:40 pm »
Sorry, I don't like to write walls of text. Apparently you realized yourself that you are wrong, so I will just move ahead after I correct the rest of your post, which is wrong.
PS. Do you think that a 6LSB error at 16 bits is significant when the datasheet does not specify almost any performance numbers for 16-bit operation, just the 12-bit modes, and defines min. effective number of bits at 11.4 bits, and even that is with digital averaging applied, so basically, effective resolution of about 10 bits? You don't use a cheap, poorly specified MCU-integrated ADC like this for where a few LSBs at 16 bit resolution do matter. (I have paid $50 each for 5MSPS 14-bit ADCs in a CCD imaging system, and it did provide very noiseless and linear image, indeed; I'd never do it with an MCU ADC like this, even when it has MOAR BITS.)
Datasheet page 36 defines the 16-bit, 32× AVG ENOB as 14.5 bits. A 6 LSB error would reduce this to 13.5 bits.

But according to you, datasheet values are stupid, internal ADCs are stupid, and people who correct your wrong information are stupid. Someone has an attitude problem.
 

