EEVblog Electronics Community Forum

Electronics => Beginners => Topic started by: woodchips on March 16, 2015, 10:30:00 am

Title: Oversampling an ADC for more resolution
Post by: woodchips on March 16, 2015, 10:30:00 am
I have an existing uC with an 8 bit ADC and I would like to experiment with getting 10 bits out of it. I have heard of oversampling and did a search to bring my knowledge up to date. Not certain it has achieved that.

Firstly, there seem to be two quite different types of oversampling. The first is where an audio signal for a CD is sampled at a multiple of 44.1 kHz to simplify the filter requirements; this is really a sampling-rate trick on the audio signal, not what I want to do.

Secondly, there is the kind where an ADC is read multiple times and the stream of converted samples is added together and then divided down, to increase the number of bits in the final result. That is what I want to do.
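For concreteness, a minimal sketch of that scheme in C, assuming a hypothetical read_adc8() that returns one raw 8 bit conversion. The standard recipe is to sum 4^n samples and shift right by n to gain n bits (here n = 2, i.e. 16 samples for a 10 bit result), and it only buys real information if at least about 1 LSB of noise rides on the signal:

    #include <stdint.h>

    /* Hypothetical helper: returns one raw 8-bit conversion (0..255). */
    extern uint8_t read_adc8(void);

    /* Gain n extra bits by summing 4^n samples and shifting right by n.
       Here n = 2: sum 16 samples, then decimate to a 10-bit result. */
    uint16_t read_adc10(void)
    {
        uint16_t sum = 0;                 /* max 16 * 255 = 4080, fits */
        for (uint8_t i = 0; i < 16; i++)
            sum += read_adc8();
        return sum >> 2;                  /* 0..1020, on a 10-bit scale */
    }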

Problems arise because many of the app notes etc. you find on the web seem to be confused about the difference between resolution, accuracy, precision and repeatability.

All this method of oversampling can achieve is improved resolution, NOT accuracy. Does anyone agree? Or disagree?

If I am trying to measure a signal with my ADC then I want 10 bits because I need 10 bit accuracy; I fail to see any point in 10 bit resolution with only 8 bit accuracy.

So what is the point of oversampling?

From first principles, an 8 bit ADC will be specified to be 8 bit accurate, plus there is always the +/-1 LSB. The spec will also say it is monotonic, has no missing codes, etc., but at the very best all it can do is measure a signal to 8 bits, +/-1 LSB, over the range from reference+ to reference-. Adding four of these ADC values together will give a 10 bit value, but the bottom 2 bits are noise, not signal, and there is still the +/-1 (8 bit) LSB error added in four times.

So, to get 10 bit ACCURACY I do in fact need a 10 bit ADC? If you are not interested in accuracy then why bother with all the maths? You might just as well read the 8 bit value and tack a couple of random bits on; that achieves no less.

Have I missed something here? I just cannot see how adding any number of samples that are inaccurate at the 10 bit level can give a true 10 bit answer.

Title: Re: Oversampling an ADC for more resolution
Post by: JohnnyBerg on March 16, 2015, 10:40:02 am
There are some excellent application notes on this subject. I find this one very useful, with a nice explanation:

http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf
Title: Re: Oversampling an ADC for more resolution
Post by: Rerouter on March 16, 2015, 10:52:08 am
Without averaging samples, getting 10 bit accuracy from what is in most micros requires a 12 or 14 bit ADC (their specs are quite varied).

If you have a Gaussian noise distribution, I am fairly certain you gain effective accuracy. I have certainly seen it to be true using a similar method for timing: using many samples to measure the ppm drift of a 100 Hz source, those extra digits definitely were not just noise.
Title: Re: Oversampling an ADC for more resolution
Post by: T3sl4co1l on March 16, 2015, 12:34:01 pm
It's not at all useless; higher resolution, high speed ADCs tend to have really poor performance specs, relatively speaking.  Example: you might find a series of ADCs rated for 12, 14 and 16 bits in the 65, 80, 110, etc. MS/s range.  In a given speed class, the 12 bit one will have ~11 ENOB, the 14 bit will have ~12 and the 16 will have maybe 12 or 13.  That's taking ENOB as total error: noise, INL and DNL, all together.  What possible point is there?  These typically have low DNL, meaning they are good for resolving AC signals -- the THD+N of a sine wave, or THD+IMD of a multitone signal, is very low, 0.5-2 bits worth, independent of the number of bits.  Naturally, high sample rate ADCs are most often bought for ultrasound and SDR type applications, where AC signals are indeed the purpose.

It's all about signals.  Think to yourself: how can I better rearrange this measurement, so that it is performed by difference against another known or measured value?  How can I convert the signal to an alternating or varying signal and perform AC analysis on it (a chopper amplifier is a good analogy)?

The biggest problem with an 8 bit ADC will be establishing the baseline between those relatively lumpy bits, and dithering out their corresponding noise.  And note that ENOB goes up only logarithmically with averaging: SNR improves as the square root of the number of samples, i.e. one extra bit per factor of 4, so to go from 8 bits to 12 bits requires at least 256 averages, and from 12 to 16 requires 256 averages of those, or 65536 samples.
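As an aside, the rule of thumb behind those numbers (valid only when the noise is uncorrelated from sample to sample) is:

    $n_{\mathrm{extra}} = \tfrac{1}{2} \log_2 N \quad\Longrightarrow\quad N = 4^{\,n_{\mathrm{extra}}}$

so every additional effective bit costs a factor of 4 in samples.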

Subtractive dithering would be a good idea.  You'll likely not have a noisy enough signal to achieve dithering by itself (the baseline reading should vary up and down by a few counts peak to peak to do a reasonable job), so you'll have to add noise, and you might as well take advantage of knowing what you added.  You can add it from a DAC, and since you know what value is being added, you can subtract it from the reading (bit-aligned according to the gain ratio between ADC readings and DAC settings) and get much better noise reduction.  The DAC output should probably be something without a pattern, such as a pseudorandom number generator (a 16-32 bit LFSR would do a fine job, and be easy enough to implement in any modern processor, as fast as the ADC/DAC needs it).
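A bare-bones sketch of that scheme, assuming hypothetical read_adc8()/write_dac8() helpers, and assuming the DAC output is summed into the ADC input at exactly one ADC count per DAC count (in a real circuit you would measure that gain ratio and bit-align accordingly):

    #include <stdint.h>

    extern uint8_t read_adc8(void);        /* hypothetical: one raw conversion   */
    extern void    write_dac8(uint8_t v);  /* hypothetical: sets the dither DAC  */

    /* 16-bit maximal-length Galois LFSR (taps 0xB400) as the
       pseudorandom dither source. */
    static uint16_t lfsr = 0xACE1u;
    static uint16_t lfsr_next(void)
    {
        lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);
        return lfsr;
    }

    /* Subtractive dither: add a known pseudorandom offset of up to
       15 counts (~4 bits worth) before each conversion, subtract it
       again afterwards, then average.  Returns an 8.4 fixed-point
       result, as in the post above. */
    int32_t read_dithered(uint16_t n_samples)
    {
        int32_t acc = 0;
        for (uint16_t i = 0; i < n_samples; i++) {
            uint8_t dither = (uint8_t)(lfsr_next() & 0x0Fu);
            write_dac8(dither);                    /* add the known offset */
            acc += (int32_t)read_adc8() - dither;  /* and remove it again  */
        }
        return (acc * 16) / n_samples;             /* 4 fractional bits    */
    }

Because the subtracted value is known exactly, the dither cancels in the mean and only the quantization steps get smeared out, which is the point of the subtractive variant.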

If necessary or desirable, you can calibrate the system by setting a couple of input values (precision voltage reference?) and reading them.  You can also adjust the subtractive dithering process to null its gain: normally the additive amount is a dozen or more counts (so, ~4 bits worth), so the baseline "0" reading should only ever come out between -0.5 and +0.5 counts (taking the fraction as 1.0 count as read from the ADC, after subtracting the bit-aligned difference: say 4.4 bits, for a total of 8.4, i.e. 12, bits per sample).  You can adjust the gain and offset of the summer to achieve this.

...

So, this is good to know and all, but in the grand scheme of things, you definitely want to go out and buy an external ADC, if this is your situation and you don't need anything special.  The performance specs are always better than integrated ADCs (lower error, lower noise, higher sample rate), and the cost is lower than all the stuff talked about above (maybe not if you reduce the dithering process to a single op-amp and some support parts, but anything more than that?..).  Really, the only reason they put those ADCs in MCUs is convenience, for the simple applications where that's all you need.  They aren't generally intended for high bandwidth or precision work (dsPICs being a possible exception, having pretty good sample rates).

Tim
Title: Re: Oversampling an ADC for more resolution
Post by: woodchips on March 16, 2015, 04:45:22 pm
Thanks for the replies, and the Silab link.

BUT

It talks about RESOLUTION, not ACCURACY! They are quite definitely not the same.

Resolution on its own is an irrelevance: "I can read to 0.01 V but it is only accurate to 0.4 V". I fail to understand the point?

If I want to measure mechanically to 10 microns then I am looking at a micrometer with a basic accuracy of 2-3 microns, not one with 25 microns accuracy that I read 50 times?

My problem is that if an ADC or whatever has some basic accuracy or quantization value, then anything smaller than that means nothing? I simply don't see how accuracy can be created from nothing.

Title: Re: Oversampling an ADC for more resolution
Post by: JohnnyBerg on March 16, 2015, 04:52:30 pm
With oversampling you get resolution; how accurate that is depends on the quality of the reference and on the sum of the errors in the ADC.

So, when the reference is good and the ADC is good but the resolution is low, oversampling has its use.
Title: Re: Oversampling an ADC for more resolution
Post by: Marco on March 16, 2015, 05:09:54 pm
You get hung up on words too much... even in technical literature the meaning of words is fluid. When errors are unbiased and there is sufficient noise, oversampling increases both accuracy and resolution. The errors in a micrometer are not caused by quantization and noise; it's not the same thing.

A small thought experiment: let's posit a binary (1-bit) ADC with the centres of its quantization bins at 0 and 1, and no errors other than quantization error. The value of the digitized signal is 0.5, with some unbiased noise of amplitude larger than 0. What will the ADC output? A perfectly random stream of bits. What is the limit of the average of that stream? Exactly 0.5, so we have just determined the average value of the signal to arbitrary accuracy and resolution by oversampling.
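That limit is easy to check numerically. A self-contained sketch (my own illustration, using rand() as a crude stand-in for unbiased noise):

    #include <stdio.h>
    #include <stdlib.h>

    /* 1-bit ADC with bin centres at 0 and 1: decision threshold is 0.5. */
    static int adc1bit(double v) { return v > 0.5 ? 1 : 0; }

    int main(void)
    {
        const double signal = 0.5;   /* sits exactly on the threshold */
        const long n = 1000000;
        long ones = 0;
        srand(1);
        for (long i = 0; i < n; i++) {
            /* unbiased uniform noise in [-0.25, +0.25) */
            double noise = ((double)rand() / RAND_MAX - 0.5) * 0.5;
            ones += adc1bit(signal + noise);
        }
        /* prints ~0.5000: the signal recovered well below the 1-bit step */
        printf("average of %ld one-bit samples = %.4f\n", n, (double)ones / n);
        return 0;
    }

Push n higher and the average converges ever closer to 0.5, even though each individual sample carries only one bit.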
Title: Re: Oversampling an ADC for more resolution
Post by: cyr on March 16, 2015, 05:36:30 pm
Absolute accuracy is not needed for relative measurements, so there the increased resolution from oversampling is useful.
Title: Re: Oversampling an ADC for more resolution
Post by: cellularmitosis on March 16, 2015, 06:06:42 pm
I want to expand on the "what's the point of more resolution without accuracy?" question.

When you stop and think about it, you realize that absolute accuracy is very rarely what you are really after.  Most engineering can be done with relative measurements alone.

Consider engine tuners.  Naively you'd think a dyno which gave accurate absolute measurements was important.  But really, all they care about is "did my change make it better or worse, and by how much?"  They don't actually need to know they went from exactly 160 HP to exactly 176 HP; they'd be just as happy to know that it went up by 10%.

When you think about it from the other direction, you start down a thought path of "why do I care about the absolute accuracy of my ADC when I don't even have a NIST-calibrated voltage standard in my workshop?"

You'll run into this mental debate all the time when you think "wouldn't it be cool if I could build my own bit of test gear which measures X?", followed by "making a device which can give me a relative measurement is the easy part, but I don't have a good way to calibrate it...", followed by "well, in 90% of my use-cases, all I really need is relative measurements..."
Title: Re: Oversampling an ADC for more resolution
Post by: MatthewEveritt on March 16, 2015, 06:41:34 pm
It's also worth noting that if the signal you're trying to measure is noisy, then repeated measurement will let you find the mean far more effectively than a single measurement at greater bit depth. That's why injecting noise with known properties can improve the resolution of an ADC without really impacting the accuracy.
Title: Re: Oversampling an ADC for more resolution
Post by: rs20 on March 16, 2015, 10:28:46 pm
The same question can be asked of digital multimeters -- their resolution is often far greater than their absolute accuracy. Especially on the non-DC-volts ranges, where the accuracy is worse than 10 counts -- that makes the last digit completely useless, right? Well yeah, if you're doing an absolute measurement. But if you're looking at those last digits, you're more likely to be doing a relative measurement, comparing the voltage on two different nodes. And when you're doing that, you don't care about accuracy at all; resolution is all you need.

Similarly, for ADCs, the vast majority of applications need resolution far more than absolute accuracy. If your sound card had only 4 bits of resolution, you'd know! But if its accuracy was off by 10%, even 50%, your volume would just be a bit different.

Averaging also ameliorates Gaussian noise, which is lovely too. Try switching your oscilloscope between normal and Hi-res mode. You can't tell me resolution in the absence of matching accuracy is useless after seeing it with your own eyes :-)
Title: Re: Oversampling an ADC for more resolution
Post by: woodchips on March 17, 2015, 08:42:49 pm
Thanks for the comments, but I really wonder at the casual way resolution is treated as accuracy. Have you never had to design something that needed to measure to a required accuracy, weighing scales or similar? Whilst it is easy to fool the customer that adding 16 samples together increases the word length, if you happen to get one who knows maths then you are dead in the water.

Consider your successive approximation ADC busy converting an analogue voltage. It has just converted the second-from-bottom bit and decided it is a 0; it is now converting the LSB (least significant bit). That too is a 0.

Stop there. Is there any way of knowing whereabouts within the quantization voltage this last bit happened to lie? No. None whatsoever.

In fact it is worse than that: you don't even know whether it should really have converted the LSB to a 1.

This will take some explaining; read slowly. The comparator in the ADC compares the input voltage with the internal fed-back DAC voltage to decide whether the resulting bit is a 0 or a 1. It follows that this quantization voltage range equals the LSB voltage. For example, with eight bits measuring over the range 0 to 2.55 V, the LSB is 10 mV. This means that any voltage within the LSB, here 10 mV, will result in the comparator selecting a 0. You have no idea precisely where in that 10 mV the actual voltage was. All you know is what the comparator said.
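For reference, the successive approximation loop being described, as a bare-bones C sketch; input_above_dac() is a hypothetical stand-in for the comparator watching the internal DAC, using the 0 to 2.55 V / 10 mV-per-count scaling from the example:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical: true if the input voltage is above the internal DAC
       voltage for 'counts' (10 mV per count over 0 to 2.55 V). */
    extern bool input_above_dac(uint8_t counts);

    /* Classic successive approximation: try each bit from MSB to LSB;
       once the comparator has decided a bit, it is fixed for good. */
    uint8_t sar_convert(void)
    {
        uint8_t result = 0;
        for (uint8_t bit = 0x80u; bit != 0; bit >>= 1) {
            result |= bit;                 /* trial: set this bit    */
            if (!input_above_dac(result))
                result &= (uint8_t)~bit;   /* DAC too high: clear it */
        }
        return result;  /* input lies somewhere inside this 10 mV bin */
    }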

But what has been forgotten is that the comparator isn't perfect. A comparator is a very high gain differential-input amplifier, where the small difference between the inverting and non-inverting inputs is amplified to give the 0 or 1 output level. There will be hysteresis in this comparison. Assume the hysteresis is 1 mV: now the quantization voltage is actually 10 mV + 1 mV (above) + 1 mV (below), or 12 mV. So the quantization voltage range is larger than the voltage of the LSB.

This is why ALL conversions from analogue to digital are quoted with a +/-1 LSB error. There is no possible way of knowing whether the conversion really was within the quantization voltage range or fractionally outside it. There is also nothing you can do about it.

What must be quite clear in the mind is that the comparator makes its decisions, 0 or 1, and once made they cannot be altered; they are fixed. So in the case above the two LSBs were 00, but possibly should have been 11; the output produced is the one that stands. The possibility of it being 11 is covered by the +/-1 LSB.

If you come across an ADC that doesn't state the reading as being +/-1 LSB then be very suspicious, until you have worked out what change of terminology has been introduced to conceal it. I note that Analog Devices in their 574 ADC use +/-1/2 LSB; the attached gobbledygook does take some reading.

A successive approximation ADC relies totally on the comparator. If it lacks the ability to differentiate between a voltage within the quantization voltage range and one well outside it, then it is obvious that the ADC it is part of is not going to meet any +/-1 LSB accuracy requirement. On the other hand, if it were much better than the quantization voltage range then it could be part of a higher-resolution ADC. The point here is that the parts of the ADC have to be in balance. The DAC also has to settle well within a bit time, but that is an easier job than the comparator's.

So, where does that leave us? It is obvious that oversampling takes multiple ADC samples and, in so doing, hopes to spread the comparator decision point around within the quantization voltage. This means that if the input voltage was, say, at 75% of the quantization voltage then for three samples out of four the LSB would be a 1. Is this valid? Well, no: the comparator simply doesn't have the reduced aperture time necessary to make decisions on fractional parts of its quantization voltage. This seems to me to be the crucial point: oversampling is trying to use something that doesn't exist. As mentioned above, if the comparator did have the smaller aperture time to switch on a fraction of the quantization voltage, then the whole ADC would be sold as being faster. What is the point of deliberately selling yourself short?

I say aperture time; I'm not really sure what else to call it. It could be jitter time: think of a scope timebase triggering, where at some point it has to switch from no to yes to start the timebase. On a stable input waveform this results in jitter. Tunnel diodes were extremely good for this; their aperture time was picoseconds, and it took a long time for comparators to catch up.

This has gone through several revisions, is it clear? Is it correct?

Title: Re: Oversampling an ADC for more resolution
Post by: T3sl4co1l on March 17, 2015, 09:17:05 pm
What if the input is inherently noisy (or has noise added to it)?

You still may not know the exact offset voltage (it's probably better than 10 mV; as I recall, most 8 bit ADCs are a hair better than the nameplate 8 bits would suggest, on average), but statistics allows a more nuanced resolution, of fractional LSBs.

Errors like "1/2 LSB" and "better than the nameplate would suggest on average" are defined based on how the actual transfer function (count vs. voltage) compares to an ideal linear fit.  An evenly weighted ADC with 10mV/count steps will ideally transition at 5, 15, 25, etc. mV, not 10, 20, 30, etc., yielding an error (LSBs) of 0.5 peak and 0.288 RMS.  With dithering and averaging, this can be reduced arbitrarily.  You get the precision directly from the accuracy, if there are no systematic errors (like input offset voltage).

Sample rate, clock rate or aperture duration have nothing to do with quantization.  Those factors do tend to limit the smallest of bits, where it takes a long time for a comparator to "make up its mind" on the few mV residual down there.  But that's specific to design speed and operating clock rate, and not exclusively part of, or characteristic of, the DC transfer function.

Tim
Title: Re: Oversampling an ADC for more resolution
Post by: Marco on March 17, 2015, 09:50:25 pm
SAR ADCs generally have track and hold circuits.

Hysteresis doesn't really matter much when there is sufficient noise for oversampling; it's an unbiased error and as such still falls away in averaging.