Author Topic: Question about ADC oversampling concept (metrology may not be correct place?)  (Read 3662 times)


Offline wb0gaz (Topic starter)

  • Regular Contributor
  • *
  • Posts: 200
I'm not sure if this is the correct group within the eevblog forum, so please feel free to recommend another venue.

I am trying to understand (by example, preferably) what happens during ADC oversampling.

For example, if I have a 10-bit single-ended ADC with a 1.024V reference voltage, that tells me that the voltage associated with the LSB is 1 mV. Now suppose the ADC is fed 0.512V (right at the mid-point). If the ADC and input signal had zero noise, then the ADC would presumably read back its representation of 0.512V (a code of 1000000000). From practical experience, I've found that an ADC under these conditions produces a sample stream that reads a few counts + or - the mid-point value (assuming the input signal is actually at the mid-point.)

Now suppose I add to the input a +256 uV DC bias (basically 1/4 of the LSB value, and DC so it's just a fixed value, so sampling rate can be ignored.) If the environment (including the mid-point bias and the +256 uV bias) were truly noise-free, then I'd imagine the ADC would continue to read 1000000000. However, in the real world the ADC readings always seem to jump around +/- a few counts during sampling. Would the presence of the 256 uV DC bias (which of course has its own minuscule noise component) cause the distribution of samples to be slightly higher in value, even though they are apparently still random? If I now were to average, say, 16 samples (like I can apply averaging to readings from a VNA), would this make the presence or absence of the 256 uV DC bias more evident?
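(To make the question concrete, here's a minimal simulation sketch of what I have in mind. The ADC model and its +/-1 LSB of uniform noise are only placeholders, not a claim about the real part.)
Code: [Select]
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical 10-bit ADC model: ideal 1 mV/LSB quantizer plus +/-1 LSB of
   uniform input noise (a stand-in for whatever real noise is present).     */
static int adc_read(double volts_in)
{
    double noise = ((rand() / (double)RAND_MAX) - 0.5) * 0.002;  /* +/-1 mV */
    int code = (int)((volts_in + noise) / 0.001 + 0.5);
    if (code < 0) code = 0;
    if (code > 1023) code = 1023;
    return code;
}

int main(void)
{
    double sum_a = 0.0, sum_b = 0.0;
    for (int i = 0; i < 16; i++) {
        sum_a += adc_read(0.512);             /* mid-point only        */
        sum_b += adc_read(0.512 + 256e-6);    /* mid-point plus 256 uV */
    }
    printf("16-sample average, no bias:  %.2f counts\n", sum_a / 16.0);
    printf("16-sample average, +256 uV:  %.2f counts\n", sum_b / 16.0);
    return 0;
}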

My goal in the real world is to introduce oversampling when recovering a too-small-amplitude AC signal that's present around the mid-point bias voltage going into a 10-bit ADC (512 mV mid-point bias and 1.024V reference).

In effect, what I'd like to have happen is to make the ADC appear to have a higher resolution through oversampling than it does in hardware. The spectrum (range of frequencies) of the electrical input to the ADC (containing the tiny AC signal component) will be compatible with the ADC sampling rate (so there's not an obvious aliasing problem); however, the intended signal to be recovered is low within the overall spectrum. For example: sampling rate 8 kHz, input spectrum low-pass filtered (suppose an ideal filter) to 4 kHz, and the signal to be recovered is just a sinusoid at 500 Hz.

Sorry for the long description, but I'm trying to set the stage for my question because I don't want to attempt an implementation, then test it, without a clear concept of what I'm trying to accomplish!

Thanks for any advice (including pointers to reading material, preferably practically-oriented, or suggestions as to where I could take this sort of question.)
« Last Edit: January 29, 2022, 01:48:07 pm by wb0gaz »
 


Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: ua
Oversampling means that you capture data at a higher sample rate and then apply a low-pass filter to cut off the noise power above the cut-off frequency. Since part of the noise power in the full ADC bandwidth is removed by the low-pass filter, the remaining bandwidth is left with lower noise power. This approach improves the noise floor (increases dynamic range).

But it won't work if you have a constant output. You need some noise or a small signal at the input so that the ADC output switches between different values. When you apply the low-pass filter, this switching is filtered out and you get the low-frequency signal reconstructed with a higher dynamic range than the raw ADC resolution.

If you need to measure a constant value with no noise, you need to use dithering (add a specific noise signal at the ADC input so that the ADC output switches). But if your signal already carries some noise, dithering isn't needed, because the signal noise already does the job.

This dynamic range improvement is known as "process gain" and can be calculated as follows:

Process gain (dB) = 10 * log10( IN_BW / OUT_BW )

where
IN_BW - processing input bandwidth
OUT_BW - processing output bandwidth

The ADC's quantization-limited dynamic range is:

SNR = 6.02 * N + 1.76

and the ADC bandwidth is half the sample rate:

BW = SR / 2

where N is the ADC bit resolution. Combining the two gives the total SNR after low-pass filtering:

SNR = 6.02 * N + 1.76 + 10 * log10( SR / (2*OUT_BW) )

where
N - ADC bits
SR - ADC sample rate
OUT_BW - low pass filter output bandwidth (cut-off frequency)

Averaging is a kind of low-pass filter, though not an optimal one.

In your case, a 10-bit ADC working at an 8 kHz sample rate has this dynamic range:

SNR = 6.02 * 10 + 1.76 = 61.96 dB

and

ADC bandwidth = 8000 / 2 = 4000 Hz

When you apply a low-pass filter with an 800 Hz cut-off to the ADC output, you get this process gain:

Process gain = 10 * log10( 4000 / 800 ) = 6.99 dB

So, the total dynamic range will be:

SNR = 61.96 + 6.99 = 68.95 dB

Since 1 bit gives 6.02 dB, you can calculate that the ADC resolution, after decimation to a 1600 Hz sample rate, will be improved by:

6.99 / 6.02 = 1.16 bits
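Just to make the arithmetic above easy to reproduce, here is a small sketch of those formulas in C (the numbers are the ones from this example):
Code: [Select]
#include <math.h>
#include <stdio.h>

int main(void)
{
    double n_bits = 10.0, sr = 8000.0, out_bw = 800.0;
    double snr_adc = 6.02 * n_bits + 1.76;               /* quantization-limited SNR */
    double gain    = 10.0 * log10(sr / (2.0 * out_bw));  /* process gain             */
    printf("ADC SNR      = %.2f dB\n", snr_adc);
    printf("Process gain = %.2f dB (%.2f extra bits)\n", gain, gain / 6.02);
    printf("Total SNR    = %.2f dB\n", snr_adc + gain);
    return 0;
}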
« Last Edit: January 29, 2022, 02:41:39 pm by radiolistener »
 
The following users thanked this post: ch_scr

Offline wb0gaz (Topic starter)

  • Regular Contributor
  • *
  • Posts: 200
OK that helps, thank you both!

My initial description (of a small incremental DC bias superimposed on the ADC's mid-point bias) sounds like it did not help with my statement of the problem; it was really just an attempt to get a handle on the oversampling process in the time domain.

The AC signal I am concerned about is a signal that has arrived on a radio carrier (but has been demodulated back to baseband), so it will contain noise: both noise that arrived in the radio spectrum coincident with the signal frequency, and noise introduced by the circuitry that processes the signal+noise to bring it to baseband. At baseband it would appear (in my example) as a 500 Hz sinusoid of 250 uV amplitude, or in any event something somewhat less than the resolution of the ADC, along with noise of unknown amplitude, but less than that of the 500 Hz signal. The noise (and anything else unwanted) will arrive after low-pass filtering to meet the Nyquist requirement of the ADC sampling rate.

I'll likely need to work through my example using the calculations described by radiolistener so I can grasp the mechanics of what's happening.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Does the slope of the cutoff make a significant difference?  And does it matter how much lowpassing is done in the analog frontend vs. DSP?

Take the attached draft schematic for a 2-channel EEG reader, for example.  I'm thinking of running each channel into its own input of a single MUX'ed SAR ADC.  I haven't chosen an MCU yet, but I think it's safe to assume at least 10 bits and >10ksps for the ADC itself.  That would be >5ksps each for 2 channels, >2.5ksps for 4 channels, etc.  I could end up with something faster, higher resolution, and/or more channels, but I'd say that two 10-bit channels at 5ksps each would be a good minimum for our discussion here.  (The ADC itself runs at 10ksps, with the ISR keeping track and de-interleaving the results.)

If I always run the ADC at the same constant rate, does it make a difference (besides Nyquist) if I play with the analog filter's cutoff frequency and rolloff rate, while keeping it above the digital lowpass cutoff?  (a single R-C lowpass, for example, instead of a 2nd-order LP with an opamp)  What if I also play with the digital lowpass rate while keeping the same cutoff frequency?  (almost-braindead 1st-order LP / "exponential average", vs. 2 of them cascaded or a biquad LP, etc.)



As an example of how "braindead" a 1st-order LP can be:
Code: [Select]
#define SHIFT 2    //adjust experimentally: larger shift = lower cutoff
//output and input are 16-bit unsigned; output must persist between samples
output -= (output >> SHIFT);    //leak a fraction of the old output...
output += ( input >> SHIFT);    //...and add the same fraction of the new input
And that's it!  It's rearranged a bit from the classic form, but it's arithmetically identical and does the exact same thing.
The example here uses 16-bit unsigned integers; but you can modify it to use floats, rename SHIFT to ALPHA, and multiply by it instead of shifting; or you can have a different shift amount for each line to give a different fixed-point format and/or account for a right-justified ADC giving you a bunch of shifts for free; etc...
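For instance, a float version might look like this (just a sketch; the ALPHA value is only a placeholder to tune):
Code: [Select]
/* Floating-point version of the same exponential average.
   ALPHA plays the role of 1/(1<<SHIFT); pick it experimentally. */
#define ALPHA 0.25f

static float output;            /* filter state, persists between samples */

void lp_update(float input)
{
    output -= ALPHA * output;   /* leak a fraction of the old state          */
    output += ALPHA * input;    /* blend in the same fraction of the new one */
}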
 
The following users thanked this post: ch_scr

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: ua
Does the slope of the cutoff make a significant difference?  And does it matter how much lowpassing is done in the analog frontend vs. DSP?

It depends on your needs. The analog low-pass filter (anti-aliasing filter) needs to cut off frequencies above the Nyquist frequency. Since an analog filter's slope is not steep, it's better to have some margin in the ADC bandwidth to cover the analog filter's roll-off. That helps to reduce aliasing from high-frequency components that still pass through the analog low-pass filter.

For a good result you can use an ADC sample rate about 10 times higher than the bandwidth of the analog low-pass filter. In that case you can use a simpler analog filter with a gentler slope. This approach is used in digital oscilloscopes, where it is too hard to design an analog filter with a sharp slope.

If I always run the ADC at the same constant rate, does it make a difference (besides Nyquist) if I play with the analog filter's cutoff frequency and rolloff rate, while keeping it above the digital lowpass cutoff?

Yes, if you reduce the cut-off frequency of the analog filter, it reduces aliasing. So when you don't need a wide signal bandwidth, you can reduce the analog filter cut-off for better results. On the other hand, it is very hard and expensive to make the analog filter bandwidth adjustable, so in most cases it's easier and cheaper to keep the analog bandwidth fixed.

Another issue with a variable-bandwidth analog filter is response flatness within the working bandwidth. When you change the analog filter bandwidth, the response flatness also changes, and that is hard to compensate for.
« Last Edit: January 29, 2022, 04:41:03 pm by radiolistener »
 

Offline wb0gaz (Topic starter)

  • Regular Contributor
  • *
  • Posts: 200
AaronD's posting is helpful to my original question (although the specific application differs), so it is appreciated.

In my case, the ADC sampling rate is 10X the max frequency of the intended signals (30 kHz vs 3 kHz); the low-pass filter preceding the ADC has only one pole, so the roll-off is very gentle. There are two (switchable) use cases for this hardware - one will process a 3 kHz-wide signal passband (using the ~30 kHz sampling rate) natively (that is, without an attempt to oversample); in the alternate case I proposed, I'd like to use oversampling to improve SNR when the only content of interest in the signal spectrum is below 750 Hz (two octaves lower). The ADC sampling rate would remain the same (30 kHz).

In this case, to enable oversampling, would I need an alternate low pass filter with lower corner frequency?

 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14210
  • Country: de
Increasing the effective resolution only works well if there is enough noise or, alternatively, an added dithering signal. If the noise / background is too small, the ADC will stick to the quantization levels and no change will be visible. So it needs noise of a few LSB, or the dithering signal. A dithering signal is more predictable and thus can also be a bit larger.

Simple averaging of blocks is a digital filter with a sinc-type response. This helps with anti-aliasing for the reduced data rate after averaging. In some cases, with a strong out-of-band background, one may need more anti-aliasing filtering, but often one does not need extra AA filtering beyond what is needed for the original higher sampling rate. One could use a different digital filter (more attenuation at high frequencies, but slower settling) before decimation to reduce the need for extra AA filtering.
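In code, such a block average before decimation can be as simple as this (a minimal sketch; the 16:1 ratio is only an example):
Code: [Select]
#include <stdint.h>

#define DECIM 16                 /* example decimation ratio */

/* Accumulate DECIM raw samples, then emit one averaged (sinc-response)
   output at 1/DECIM of the input rate.  Returns 1 when *out is valid. */
int block_average(uint16_t in, uint16_t *out)
{
    static uint32_t acc = 0;
    static int      n   = 0;

    acc += in;
    if (++n < DECIM)
        return 0;

    *out = (uint16_t)(acc / DECIM);   /* or keep the full sum for the extra resolution */
    acc = 0;
    n   = 0;
    return 1;
}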
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1183
  • Country: de
The AC signal I am concerned about is a signal that has arrived on a radio carrier (but has been demodulated back to baseband), so it will contain noise: both noise that arrived in the radio spectrum coincident with the signal frequency, and noise introduced by the circuitry that processes the signal+noise to bring it to baseband. At baseband it would appear (in my example) as a 500 Hz sinusoid of 250 uV amplitude, or in any event something somewhat less than the resolution of the ADC, along with noise of unknown amplitude, but less than that of the 500 Hz signal.

Ideally you should drive an ADC almost full scale.
If the signal is so small, why don't you amplify it (significantly) before feeding it into the ADC?
Or is your 250µV wanted signal buried in 1Vpp of noise and/or unwanted signals, so that full scale is already exploited?

EDIT: Is the wanted signal really a continuous sine wave with a fixed 500Hz frequency and a constant amplitude?
So what's the aim at the end? Just detect the presence of the 500Hz tone?
« Last Edit: January 29, 2022, 10:23:06 pm by gf »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Does the slope of the cutoff make a significant difference?  And does it matter how much lowpassing is done in the analog frontend vs. DSP?

It depends on your needs. The analog low-pass filter (anti-aliasing filter) needs to cut off frequencies above the Nyquist frequency. Since an analog filter's slope is not steep, it's better to have some margin in the ADC bandwidth to cover the analog filter's roll-off. That helps to reduce aliasing from high-frequency components that still pass through the analog low-pass filter.

For a good result you can use an ADC sample rate about 10 times higher than the bandwidth of the analog low-pass filter. In that case you can use a simpler analog filter with a gentler slope. This approach is used in digital oscilloscopes, where it is too hard to design an analog filter with a sharp slope.

I'm trying to get away from the term "bandwidth", because in the sense that we're using it here, it seems to me to imply a yet-unspecified threshold of an acceptable noise floor.  (For one example of this concept: it's okay to alias if the response to the aliased frequencies is that low.)  Where the frequency response crosses that level determines the "bandwidth" in the way that I'm thinking of.  It can be calculated from the -3dB cutoff frequency, rolloff rate, and other characteristics if you're not content with a rough average, but "bandwidth" for me is not necessarily the same as "cutoff freq".

Using the definitions that I'm thinking of, the same "bandwidth" can be achieved with a distant cutoff and a gradual slope, or a close cutoff and a steep slope.  That's really what I'm exploring here, on *both* sides of the ADC.  In an ideal world, with infinite resolution and infinite sample rate, it wouldn't matter what is done on one side of the ADC or the other - it's just two LPF's, cascaded - but since the ADC uses finite chunks in both dimensions, it does matter.

If there's a term that I haven't heard of yet, for what I'm calling "bandwidth" here, please fill me in.  Thanks!

If I always run the ADC at the same constant rate, does it make a difference (besides Nyquist) if I play with the analog filter's cutoff frequency and rolloff rate, while keeping it above the digital lowpass cutoff?

Yes, if you reduce the cut-off frequency of the analog filter, it reduces aliasing. So when you don't need a wide signal bandwidth, you can reduce the analog filter cut-off for better results. On the other hand, it is very hard and expensive to make the analog filter bandwidth adjustable, so in most cases it's easier and cheaper to keep the analog bandwidth fixed.

Another issue with a variable-bandwidth analog filter is response flatness within the working bandwidth. When you change the analog filter bandwidth, the response flatness also changes, and that is hard to compensate for.

Having read that response, what I actually meant by "(besides Nyquist)" was "(besides aliasing)".  I was just thinking at the time that either one of those concepts necessarily includes the other, and so I just picked one.  Sorry for the confusion.

Also, there wouldn't be any user-adjustment here.  The only adjustment is to get out the hot air gun and replace parts.  (which may indeed happen for some early prototypes)  So I'm not sure what you mean by changing the flatness response.  Unless you mean like Bessel, Butterworth, Chebyshev, etc.?
« Last Edit: January 30, 2022, 01:26:28 am by AaronD »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
EDIT: Is the wanted signal really a continuous sine wave with a fixed 500Hz frequency and a constant amplitude?
So what's the aim at the end? Just detect the presence of the 500Hz tone?

That got me wondering too.  If wb0gaz's application is indeed to detect a single frequency, then I might suggest a digital bandpass filter with its center there, as selective as practical, and then watch the amplitude of the BP output.
You could just put a threshold on that to give a present/absent result, or you might run the actual output through an audio-style noise-gate with that same threshold (<threshold fades out with time constant A, >threshold fades in with time constant B, maybe different thresholds for some hysteresis...) and gain it up to a useful level to feed a DAC.

For my application, I'm toying with having several bandpasses in parallel for each channel, kinda like an audio RTA, and see if I can pick up any interesting nuances from that, instead of just a single BP that covers my entire range of interest.

Regardless though, if there's going to be a digital bandpass anyway, is there even a need to put a lowpass after the ADC?  Or does the BP do that job too?  I'm thinking yes, it does do that job too, so there's no need to spend the processing time on an explicit digital lowpass, but I'm unsure enough to feel like I need to ask.
« Last Edit: January 30, 2022, 01:43:53 am by AaronD »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
If the purpose is tone detection then you will want some digital post-processing. If the analog bandpass filter is good enough then this post-processing won't need to be filtering, though. The problem with using some kind of filter algorithm (FFT, Goertzel, notch filter) is that the detection level versus frequency isn't flat, so at the edges of the frequency window your sensitivity will be lower. My go-to solution is to use a wider band-pass filter (analog or digital), a frequency counter (in the digital domain) and a level detector (in the digital domain). This way you can get very accurate tone detection with exact limits when it comes to level and frequency.
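For reference, the single-bin Goertzel computation mentioned above boils down to this (a minimal sketch using the thread's example numbers of a 500 Hz tone at 8 kHz; the edge-of-bin sensitivity issue just described applies to it):
Code: [Select]
#include <math.h>

/* Goertzel power of one frequency bin over a block of n samples.
   Example parameters: f_tone = 500 Hz, fs = 8000 Hz.             */
double goertzel_power(const double *x, int n, double f_tone, double fs)
{
    const double pi = 3.14159265358979323846;
    double coeff = 2.0 * cos(2.0 * pi * f_tone / fs);
    double s1 = 0.0, s2 = 0.0;

    for (int i = 0; i < n; i++) {
        double s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;   /* squared magnitude of the bin */
}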

Oversampling can be tricky though. I have been in situations where the ADC of a microcontroller (10 bit) didn't have enough noise to get extra resolution. All in all I would recommend filtering and amplifying the signal so the ADC has a decent signal level to work with.
« Last Edit: January 30, 2022, 01:54:40 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
For example, if I have a 10-bit single-ended ADC with a 1.024V reference voltage, that tells me that the voltage associated with the LSB is 1 mV. Now suppose the ADC is fed 0.512V (right at the mid-point). If the ADC and input signal had zero noise, then the ADC would presumably read back its representation of 0.512V (a code of 1000000000). From practical experience, I've found that an ADC under these conditions produces a sample stream that reads a few counts + or - the mid-point value (assuming the input signal is actually at the mid-point.)

Some ADCs have a low enough input noise that under ideal conditions, they will read back a single value.

Quote
Now suppose I add to the input a +256 uV DC bias (basically 1/4 of the LSB value, and DC so it's just a fixed value, so sampling rate can be ignored.) If the environment (including the mid-point bias and the +256 uV bias) were truly noise-free, then I'd imagine the ADC would continue to read 1000000000. However, in the real world the ADC readings always seem to jump around +/- a few counts during sampling. Would the presence of the 256 uV DC bias (which of course has its own minuscule noise component) cause the distribution of samples to be slightly higher in value, even though they are apparently still random? If I now were to average, say, 16 samples (like I can apply averaging to readings from a VNA), would this make the presence or absence of the 256 uV DC bias more evident?

As the input approaches the transition between two codes, noise will cause both codes to be returned with a ratio indicating the true value.

Quote
In effect, what I'd like to have happen is to make the ADC appear to have a higher resolution through oversampling than it does in hardware. The spectrum (range of frequencies) of the electrical input to the ADC (containing the tiny AC signal component) will be compatible with the ADC sampling rate (so there's not an obvious aliasing problem); however, the intended signal to be recovered is low within the overall spectrum. For example: sampling rate 8 kHz, input spectrum low-pass filtered (suppose an ideal filter) to 4 kHz, and the signal to be recovered is just a sinusoid at 500 Hz.

That can be done to achieve higher resolution, however the linearity remains that of the original ADC.  In some designs, noise is deliberately added for exactly that reason.
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1183
  • Country: de
If I always run the ADC at the same constant rate, does it make a difference (besides Nyquist) if I play with the analog filter's cutoff frequency and rolloff rate, while keeping it above the digital lowpass cutoff?  (a single R-C lowpass, for example, instead of a 2nd-order LP with an opamp)  What if I also play with the digital lowpass rate while keeping the same cutoff frequency?  (almost-braindead 1st-order LP / "exponential average", vs. 2 of them cascaded or a biquad LP, etc.)

Back to the term "oversampling". It is still regular sampling - just at a higher sample rate: (significantly) higher than would actually be necessary for the bandwidth of the useful signal component of interest. There can be several reasons for doing it. One potential reason is to relax the cut-off requirement for an analog anti-aliasing filter in front of the ADC. For many use cases the oversampled signal is low-pass filtered in the digital domain and down-sampled again to a lower sample rate.

The frequency response of an analog filter in front of the ADC just gets cascaded with any processing you apply in the digital domain. You can do most of the processing in the digital domain. The analog lowpass filter in front of the ADC only needs to ensure that no frequencies >= fs/2 (where fs is the sample rate) are entering the ADC. Eliminating them completely is impossible of course, but the filter needs to attenuate them sufficiently, where "sufficient" depends on the particular use case. This can be (say) 40dB, 60dB, or sometimes even more. Note that the sampling process folds all frequencies beyond fs/2 (if they are still present) into the 0...fs/2 band (like a concertina), where they appear as "unwanted" signal, added to the wanted signal. That's aliasing. It depends on the use case how much unwanted signal can be tolerated. Example: If you want to get 40dB attenuation at fs/2 with a 1st order lowpass (20dB/decade), then the -3dB cut-off needs to be as low as fs/200! As you see, the cut-off requirements for an anti-alias filter are anything but relaxed, and with a 1st order lowpass you won't get very far, unless you accept a very large oversampling factor.
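A quick numeric check of that example (minimal sketch, using the usual first-order magnitude response):
Code: [Select]
#include <math.h>
#include <stdio.h>

/* Attenuation of a 1st-order RC lowpass at fs/2 when the -3 dB cutoff
   is placed at fs/200, as in the example above (fs = 8 kHz assumed). */
int main(void)
{
    double fs = 8000.0;
    double fc = fs / 200.0;          /* cutoff at 40 Hz   */
    double f  = fs / 2.0;            /* Nyquist frequency */
    double atten_db = 10.0 * log10(1.0 + (f / fc) * (f / fc));
    printf("1st-order RC, fc = %.0f Hz: %.1f dB down at fs/2\n", fc, atten_db);
    return 0;
}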
« Last Edit: January 30, 2022, 11:49:30 am by gf »
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1183
  • Country: de
The problem with using some kind of filter algorithm (FFT, Goertzel, notch filter) is that the detection level versus frequency isn't flat, so at the edges of the frequency window your sensitivity will be lower. My go-to solution is to use a wider band-pass filter (analog or digital), a frequency counter (in the digital domain) and a level detector (in the digital domain). This way you can get very accurate tone detection with exact limits when it comes to level and frequency.

Depends on the SNR. If the tone drowns in noise, then a narrow-band filter may be required to "pull it out of the noise", and wider band may not be an option. Even a narrow filter can have a flat top, of course, at the cost of a larger settling time. The required minimum bandwidth eventually depends on the accuracy of the tone frequency and the ADC clock frequency. If the noise level is high and the frequency is inaccurate too, then reliable detection may not be possible.
 

Offline Bud

  • Super Contributor
  • ***
  • Posts: 6912
  • Country: ca
If the goal is to detect the tone's presence rather than measure its level, a different approach should be used. If you still want to use an FFT rather than a single-frequency-bin algorithm, you select the FFT parameters so that your 500 Hz signal falls in the middle of a frequency bin (rather than on the edge between two adjacent bins). Then you calculate the energy (signal + noise) in that bin and divide it by the total energy of the entire FFT frame (the sum of energy in all bins) to get a ratio, which you then compare to a predefined threshold. You continue doing this in a sliding-window manner along time and count positively detected events within a decided length of time. Once the counter exceeds a certain threshold, you consider detection successful.
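In outline it could look like the sketch below. fft_power() is a hypothetical helper that returns the power of one bin of the current FFT frame, and all of the sizes and thresholds are placeholders to be tuned:
Code: [Select]
#include <stdbool.h>

#define N_BINS          256      /* bins in one FFT frame                     */
#define TONE_BIN        16       /* FFT length chosen so 500 Hz lands mid-bin */
#define RATIO_THRESHOLD 0.25     /* bin energy / total frame energy           */
#define WINDOW          16       /* number of recent frames considered        */
#define HIT_THRESHOLD   10       /* detections needed within that window      */

extern double fft_power(int bin);   /* hypothetical: power of one bin */

bool tone_detected(void)
{
    static bool history[WINDOW];
    static int  pos = 0;

    double total = 0.0;
    for (int b = 0; b < N_BINS; b++)
        total += fft_power(b);

    history[pos] = (fft_power(TONE_BIN) / total) > RATIO_THRESHOLD;
    pos = (pos + 1) % WINDOW;

    int hits = 0;
    for (int i = 0; i < WINDOW; i++)
        hits += history[i] ? 1 : 0;

    return hits >= HIT_THRESHOLD;
}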
« Last Edit: January 30, 2022, 02:34:30 pm by Bud »
Facebook-free life and Rigol-free shack.
 

Offline jonpaul

  • Super Contributor
  • ***
  • Posts: 3366
  • Country: fr
Bonjour, we have worked with ADCs and DACs since 1968, for signal analysis, audio, and FFT.

The post seems to have mixed up SNR, ENOB (effective number of bits), noise, and oversampling (OS) ADC theory.

Re: the issue of several LSBs being active with zero input, see classic noise theory; all ADCs have a noise floor, OS or not.
There are many fine sites, papers and references on SNR and ENOB in ADCs.

The advent of powerful real-time DSP made the oversampling ADC practical, first by Nippon TT:
https://patents.google.com/patent/EP0190694A2/en

Motorola  ADC IC  1985 app note:
http://xanthippi.ceid.upatras.gr/people/psarakis/courses/DSP_APL/demos/APR8-sigma-delta.pdf
Soon all the analog/ADC firms were deploying OS ADCs and writing app notes in the 1980s and 1990s; see Analog Devices, Cirrus Logic, TI, AKM, etc.

Oversampling is based on information theory and practical FIR filters.
By sampling much faster than Nyquist theory would require, e.g. 100X or 1000X, a fast but low-resolution ADC can be used: one has effectively captured the same information, but spread over a spectrum 100X or 1000X wider.
The result needs a real-time digital filter, often an FIR of 30-100 stages.
The filtered result is a high-resolution digital version of the original analog input. It is possible to OS so fast that the OS bitstream is one bit!

Again, there are many sites, papers and app notes; here is a TI paper for RF and IF, still good background:
https://www.ti.com/lit/an/slaa594a/slaa594a.pdf?ts=1643469546792

With many great OS ADCs and DACs easily available, there's no need to DIY the converter!

Hope this note clarifies the situation,

Kind Regards,

Jon



« Last Edit: January 30, 2022, 05:34:01 pm by jonpaul »
Jean-Paul  the Internet Dinosaur
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
The filtered result is a high-resolution digital version of the original analog input. It is possible to OS so fast that the OS bitstream is one bit!

Yes, that is an interesting point to consider, as an extreme example of oversampling.  In that case, the dithering signal that we tend to think of as a necessary evil, adding noise on purpose that needs to be removed later and creating more work to "clean it up" after conversion, becomes an absolute necessity to get any useful result at all.

I imagine an analog comparator chip that is fed the original signal at one input, and a fast free-running triangle wave at the other input.  Thus, a 1-bit ADC, with full-scale analog levels equal to the peaks of the triangle.  The resulting bitstream can then be sampled in the DSP at much higher rate than the triangle's frequency, translated to the full-scale values of whatever format you're using, and then lowpassed as usual.  Once the lowpass is done at that crazy-high sample rate, with a cutoff and slope suitable to not alias at the desired lower rate, you can just grab samples at that lower rate and discard the rest.

Then it becomes interesting to consider that this is all on a continuum, so that there's no hard switch between that and adding a small analog dithering signal to make a 10-bit converter into a 12-bit one...
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1183
  • Country: de
In that case, the dithering signal that we tend to think of as a necessary evil, adding noise on purpose that needs to be removed later and creating more work to "clean it up" after conversion, becomes an absolute necessity to get any useful result at all.

Once noise has been added, it cannot be removed any more. Filtering helps, but can only remove the part of the noise which is outside the frequency band of interest. You can't get rid of the part which falls into the frequency band of interest.

Alternatively you can use a known deterministic dithering signal (usually a stream of pseudo-random numbers). With the help of a DAC you can turn it into an analog signal. Like analog dithering noise, you add it to the signal in the analog domain before it enters the ADC, but since you now know exactly what you have added, you can subtract it again in the digital domain. In the end the ADC quantization error gets dithered and whitened, but besides that, no additional noise gets added to the signal.
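In pseudo-C the per-sample loop might look like this. It's only a sketch: dac_write() and adc_read() are hypothetical hardware helpers, and the assumption that one DAC count equals 1/4 of an ADC LSB is just a placeholder for whatever the real scaling is:
Code: [Select]
#include <stdint.h>

extern void     dac_write(uint16_t code);   /* hypothetical: drives the dither DAC   */
extern uint16_t adc_read(void);             /* hypothetical: performs one conversion */

static uint32_t lcg = 1;
static uint16_t prng4bit(void)              /* tiny LCG, returns 0..15 */
{
    lcg = lcg * 1664525u + 1013904223u;
    return (uint16_t)(lcg >> 28);
}

/* Returns one sample in quarter-LSB units with the known dither removed again. */
int32_t sample_with_subtractive_dither(void)
{
    uint16_t dither = prng4bit();
    dac_write(dither);                      /* add the known dither in the analog domain */
    int32_t raw = adc_read();               /* convert with the dither applied           */
    return raw * 4 - (int32_t)dither;       /* subtract it digitally (quarter-LSB units) */
}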

Quote
I imagine an analog comparator chip that is fed the original signal at one input, and a fast free-running triangle wave at the other input.

That's basically how a PWM signal is generated from an analog signal. A delta-sigma modulator is yet another (IMO more sophisticated) way to do 1-bit A/D conversion, providing better noise shaping.
« Last Edit: January 30, 2022, 10:37:49 pm by gf »
 
The following users thanked this post: ch_scr

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
In that case, the dithering signal that we tend to think of as a necessary evil, adding noise on purpose that needs to be removed later and creating more work to "clean it up" after conversion, becomes an absolute necessity to get any useful result at all.
Once noise has been added, it cannot be removed any more. Filtering helps, but can only remove the part of the noise which is outside the frequency band of interest. You can't get rid of the part which falls into the frequency band of interest.

Presumably, an intentional dithering signal would be entirely out-of-band and thus filtered out...which requires the ADC sample rate to be high enough that the dither doesn't alias either.  So the frequency spectrum going into the ADC itself, would have the original range of interest at the bottom, then some space for the digital lowpass to "turn over", then another range for the dithering signal, then finally the ADC Nyquist frequency at fs/2.  The analog anti-aliasing filter doesn't necessarily have to account for the dithering signal, if the dither is added between that and the ADC, but the ADC itself does need to be designed as if the dither were part of the interesting stuff.

Doing it that way allows the "noise" to be removed completely (or close enough), as it doesn't overlap the range of interest.  Even better if, as you mentioned, you have an exact record of what it was and just subtract it out.  (except for some residual analog "fuzziness" because nothing is ever *equal* in analog)
Or if you have enough environmental noise already to cover an LSb or three, in frequencies above the range of interest, then you can just sample fast enough to capture that noise without aliasing it, and use the same digital lowpass as before.  The disadvantage though, of not reducing the environmental noise so you can use it for this, is that it probably also covers the range of interest, and you can't get rid of that.

So now the question is, "When you run the ADC slowly, how much of the resulting noise in the digital signal was originally at that frequency, and how much is aliased from what could have been dithering and filtered out?"  Of course, the answer to that is different for each application.

For some of what I've done, I just ran a hand-control potentiometer directly to the on-chip ADC of my MCU with no analog filtering at all, ran the ADC as fast as it would go, and put a digital lowpass around 10Hz or so in the ADC handler code.  It seemed to be an improvement over a slow ADC used directly, so I'm pretty sure that the slow-ADC version has a lot of high-frequency noise aliased *into* the range of interest.  Running the ADC faster both keeps it from aliasing as much, which already reduces in-band noise in the result, and allows the out-of-band noise to be used as dither and then filtered out.

Am I getting it about right?
 

Offline Hermann W

  • Contributor
  • Posts: 15
  • Country: de
I recently made an oversampling test with an AVR ADC. The specially built board, with an LCD display and a potentiometer for signal injection, gave a measurement stable to the LSB.

I made the oversampling arrangement according to AVR121. 64 measurements were added, and 3 bits were truncated, theoretically giving 3 additional bits. Artificial noise was generated via a timer, exactly synchronized with the ADC measurement, i.e. the samples are in phase with the noise signal. From the square wave at the timer output, a triangle voltage of about 50 mV was generated via an RC low-pass and added through a capacitor as an AC signal to the input, so that +/- 2 bits of dithering were created.
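The decimation itself is just this (a minimal sketch; adc_read_10bit() is a placeholder for however the raw conversion result is obtained):
Code: [Select]
#include <stdint.h>

extern uint16_t adc_read_10bit(void);   /* placeholder: one 10-bit conversion */

/* AVR121-style decimation as described above: sum 64 samples (adds 6 bits),
   then shift right by 3, leaving a 13-bit result (0..8184).                 */
uint16_t read_13bit(void)
{
    uint32_t sum = 0;
    for (int i = 0; i < 64; i++)
        sum += adc_read_10bit();
    return (uint16_t)(sum >> 3);
}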

The result has fully met my expectations. Of course you don't have 3 additional usable bits, because the nonlinearities of the ADC have an effect. But 2 additional bits were just reachable with the test chip (Tiny861) and one could measure the nonlinearity well. A second chip (Mega328) had a similar result with additional offset.
 
The following users thanked this post: ch_scr

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
I made the oversampling arrangement according to AVR121. 64 measurements were added, and 3 bits were truncated, theoretically giving 3 additional bits. Artificial noise was generated via a timer, exactly synchronized with the ADC measurement, i.e. the samples are in phase with the noise signal. From the square wave at the timer output, a triangle voltage of about 50 mV was generated via an RC low-pass and added through a capacitor as an AC signal to the input, so that +/- 2 bits of dithering were created.

...2 additional bits were just reachable...

So you took a 10-bit ADC (according to the datasheets), masked off the bottom 3 bits to make it a 7-bit ADC, and then oversampled by 64x with dither to turn those 7 bits into 9?

(of course, in a non-academic design, you'd probably just use the 10-bit ADC as-is, or use this technique to extend it *beyond* its native resolution, but the exercise here is to prove a point)
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
I made the oversampling arrangement according to AVR121. 64 measurements were added, and 3 bits were truncated, theoretically giving 3 additional bits. Artificial noise was generated via a timer, exactly synchronized with the ADC measurement, i.e. the samples are in phase with the noise signal. From the square wave at the timer output, a triangle voltage of about 50 mV was generated via an RC low-pass and added through a capacitor as an AC signal to the input, so that +/- 2 bits of dithering were created.

...2 additional bits were just reachable...

So you took a 10-bit ADC (according to the datasheets), masked off the bottom 3 bits to make it a 7-bit ADC, and then oversampled by 64x with dither to turn those 7 bits into 9?

(of course, in a non-academic design, you'd probably just use the 10-bit ADC as-is, or use this technique to extend it *beyond* its native resolution, but the exercise here is to prove a point)

Adding 64 10-bit values adds 6 bits, and then removing 3 bits yields a 13 bit result.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Ah!  Okay.  That makes a little bit more sense.  Though the report sounds to me like 12 bits usable before the non-linearity becomes a problem.
 

Offline Hermann W

  • Contributor
  • Posts: 15
  • Country: de
David's explanation is correct, Aaron's remark about the 12 usable bits as well.
There are 13 bits of resolution, but the last bit is lost in the nonlinearity. I have attached the measurement. It shows the deviation from the measurement calculated in the AVR to my 34401 in mV.
 

Offline dietert1

  • Super Contributor
  • ***
  • Posts: 2073
  • Country: br
    • CADT Homepage
There are 13 bits. Nonlinearity is easy to correct by table based calibration. You may need two tables though to handle temperature changes properly. It's a piece of stone like any other precision part.

Regards, Dieter
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
David's explanation is correct, Aaron's remark about the 12 usable bits as well.
There are 13 bits of resolution, but the last bit is lost in the nonlinearity. I have attached the measurement. It shows the deviation from the measurement calculated in the AVR to my 34401 in mV.

I think you were lucky to get that level of performance.  Usually microcontroller ADCs are 2 bits less linear than their monotonic resolution, so a 12-bit ADC provides 10 bits of linearity.  ADCs with 1 bit of integral non-linearity exist as premium parts.

 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14210
  • Country: de
David's explanation is correct, Aaron's remark about the 12 usable bits as well.
There are 13 bits of resolution, but the last bit is lost in the nonlinearity. I have attached the measurement. It shows the deviation from the measurement calculated in the AVR to my 34401 in mV.

I think you were lucky to get that level of performance.  Usually microcontroller ADCs are 2 bits less linear than their monotonic resolution, so a 12-bit ADC provides 10 bits of linearity.  ADCs with 1 bit of integral non-linearity exist as premium parts.
I would not tie the linearity to the LSB. It is more that µC-internal SAR ADCs achieve linearity at about the 10-bit level, no matter whether they are 10-bit ADCs in the older µCs or offer 12 bits in many of the newer ones.

With enough dithering, the oversampling can also reduce the worst-case DNL a little and thus, to a limited extent, improve the linearity by smoothing out the worst points (usually where the MSB changes).
 

Offline nfmax

  • Super Contributor
  • ***
  • Posts: 1562
  • Country: gb
David's explanation is correct, Aaron's remark about the 12 usable bits as well.
There are 13 bits of resolution, but the last bit is lost in the nonlinearity. I have attached the measurement. It shows the deviation from the measurement calculated in the AVR to my 34401 in mV.

I think you were lucky to get that level of performance.  Usually microcontroller ADCs are 2 bits less linear than their monotonic resolution, so a 12-bit ADC provides 10 bits of linearity.  ADCs with 1 bit of integral non-linearity exist as premium parts.
I would not tie the linearity to the LSB. It is more that µC-internal SAR ADCs achieve linearity at about the 10-bit level, no matter whether they are 10-bit ADCs in the older µCs or offer 12 bits in many of the newer ones.

With enough dithering, the oversampling can also reduce the worst-case DNL a little and thus, to a limited extent, improve the linearity by smoothing out the worst points (usually where the MSB changes).
There also exists the technique of large-scale dithering, where the dither signal derives from a PRBS sequence, added in analogue form at the ADC input and digitally subtracted from its output. The dither signal may be about half full scale of the ADC. The idea is to make any given input signal voltage correspond to any one of about half the ADC bit transitions, pseudo-randomly varying over time. This 'smears out' the effect of differing size bit transitions, improving the overall ADC linearity (at the expense of maximum input voltage)

It was used by HP in the 89410A vector signal analyser. There is a description of the technique in the December 1993 issue of the Hewlett-Packard Journal (pages 36-40)
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
There are 13 bits. Nonlinearity is easy to correct by table based calibration. You may need two tables though to handle temperature changes properly. It's a piece of stone like any other precision part.

Stability over time and temperature is always an issue, and linearity correction requires calibration.  There are some interesting ways to do self-calibration of linearity, but they add complexity.  These days the cost is high enough that it generally pays to use a premium, more linear part to start with.  I think the only application where I regularly see linearity correction is RF transmitters, which use predistortion to improve transmitter linearity.
« Last Edit: February 02, 2022, 12:17:06 am by David Hess »
 

Offline dietert1

  • Super Contributor
  • ***
  • Posts: 2073
  • Country: br
    • CADT Homepage
Pulse oximetry is a limited-bandwidth application (1 to 25 Hz) where one wants about 100 to 120 dB noise-free. We do it with an MSP430 and its internal 12-bit ADC with massive oversampling. It requires the CPU to sleep during data acquisition, an external reference and a four-layer board.
Once into massive oversampling, and if enough computing power is available, one can use median noise filtering to supplement averaging. There is a large variety of models with different advantages.
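As an illustration, a median-of-3 pre-filter in front of the averager can be as small as this (a minimal sketch, not our production code):
Code: [Select]
#include <stdint.h>

/* Median of three values, used to knock down occasional outlier samples
   before they reach the oversampling average.                           */
static uint16_t median3(uint16_t a, uint16_t b, uint16_t c)
{
    if (a > b) { uint16_t t = a; a = b; b = t; }
    if (b > c) { uint16_t t = b; b = c; c = t; }
    if (a > b) { uint16_t t = a; a = b; b = t; }
    return b;
}

/* Feed the median of the last three raw samples on to the averager. */
uint16_t prefiltered_sample(uint16_t raw)
{
    static uint16_t h1 = 0, h2 = 0;   /* two previous raw samples */
    uint16_t m = median3(raw, h1, h2);
    h2 = h1;
    h1 = raw;
    return m;
}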

Regards, Dieter
 

