Author Topic: Question about ADC oversampling concept (metrology may not be correct place?)  (Read 3711 times)


Online dietert1

  • Super Contributor
  • ***
  • Posts: 2091
  • Country: br
    • CADT Homepage
There are 13 bits. Nonlinearity is easy to correct by table based calibration. You may need two tables though to handle temperature changes properly. It's a piece of stone like any other precision part.
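A minimal sketch of such a table-based correction, with hypothetical calibration values (a real table would be much denser, and a second table blended in by temperature would handle drift):

```python
def correct(raw, cal_points, cal_values):
    """Piecewise-linear correction: cal_points are raw ADC codes measured
    against a reference meter, cal_values the codes they should read."""
    if raw <= cal_points[0]:
        return float(cal_values[0])
    if raw >= cal_points[-1]:
        return float(cal_values[-1])
    for i in range(len(cal_points) - 1):
        if cal_points[i] <= raw <= cal_points[i + 1]:
            frac = (raw - cal_points[i]) / (cal_points[i + 1] - cal_points[i])
            return cal_values[i] + frac * (cal_values[i + 1] - cal_values[i])

# Hypothetical 13-bit calibration data (0..8191 codes).
points = [0, 2048, 4096, 6144, 8191]
values = [0.0, 2050.5, 4093.0, 6146.2, 8191.0]
print(correct(3072, points, values))  # interpolates between two entries
```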

Regards, Dieter
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
David's explanation is correct, Aaron's remark about the 12 usable bits as well.
There are 13 bits of resolution, but the last bit is lost in the nonlinearity. I have attached the measurement. It shows the deviation of the value calculated in the AVR from my 34401 reading, in mV.

I think you were lucky to get that level of performance.  Usually microcontroller ADCs are 2 bits less linear than their monotonic resolution, so a 12-bit ADC provides 10 bits of linearity.  ADCs with 1 bit of integral non-linearity exist as premium parts.

 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14263
  • Country: de
I think you were lucky to get that level of performance.  Usually microcontroller ADCs are 2 bits less linear than their monotonic resolution, so a 12-bit ADC provides 10 bits of linearity.  ADCs with 1 bit of integral non-linearity exist as premium parts.
I would not pin the linearity to a fixed number of LSBs. It is more that the µC-internal SAR ADCs reach linearity at about the 10-bit level, regardless of whether the older µCs have 10-bit ADCs or many of the newer ones offer 12 bits.

With enough dithering, the oversampling can also reduce the worst-case DNL somewhat and thus, to a limited extent, improve the linearity by smoothing out the worst points (usually where the MSB changes).
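As a rough illustration (an idealized simulation, not µC code): with about one LSB of uniform dither added before an ideal 12-bit quantizer, the average of many conversions resolves a voltage sitting between two codes:

```python
import random

def adc_12bit(v):
    """Ideal 12-bit quantizer (round to nearest) over a 0..1 V range."""
    return max(0, min(4095, int(v * 4096 + 0.5)))

def oversample(v, n=256, dither_lsb=1.0):
    """Average n conversions with uniform dither of +/- dither_lsb/2 LSB
    added before quantization; the mean resolves below one LSB."""
    lsb = 1.0 / 4096
    total = sum(adc_12bit(v + (random.random() - 0.5) * dither_lsb * lsb)
                for _ in range(n))
    return total / n

random.seed(0)
v = 2048.5 / 4096     # sits exactly between two 12-bit codes
result = oversample(v)
print(result)         # close to 2048.5: half an LSB resolved
```

Without the dither, every conversion of a steady input returns the same code and averaging gains nothing; the dither is what makes the mean informative.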
 

Offline nfmax

  • Super Contributor
  • ***
  • Posts: 1562
  • Country: gb
I would not pin the linearity to a fixed number of LSBs. It is more that the µC-internal SAR ADCs reach linearity at about the 10-bit level, regardless of whether the older µCs have 10-bit ADCs or many of the newer ones offer 12 bits.

With enough dithering, the oversampling can also reduce the worst-case DNL somewhat and thus, to a limited extent, improve the linearity by smoothing out the worst points (usually where the MSB changes).
There is also the technique of large-scale dithering, where the dither signal is derived from a PRBS sequence, added in analogue form at the ADC input and digitally subtracted from its output. The dither signal may be about half the full scale of the ADC. The idea is to make any given input voltage correspond, pseudo-randomly over time, to any one of about half the ADC bit transitions. This 'smears out' the effect of differently sized bit transitions, improving the overall ADC linearity (at the expense of maximum input voltage).

It was used by HP in the 89410A vector signal analyser. There is a description of the technique in the December 1993 issue of the Hewlett-Packard Journal (pages 36-40).
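A toy simulation of the idea (all values hypothetical; an artificial 2-LSB step at the mid-scale transition stands in for a real converter's worst DNL):

```python
import random

FS = 4096  # 12-bit ADC code range

def imperfect_adc(v):
    """Ideal quantizer plus a deliberate 2-LSB error at the mid-scale
    (MSB) transition, mimicking a converter's worst DNL point."""
    code = int(min(max(v, 0.0), 1.0 - 1e-9) * FS)
    return code + 2 if code >= FS // 2 else code

def measure(v, n=64):
    """Subtractive large-scale dither: add a known pseudo-random offset
    of up to half full scale before the ADC, subtract it digitally."""
    acc = 0.0
    for _ in range(n):
        d = random.randrange(FS // 2)         # known dither, in codes
        acc += imperfect_adc(v + d / FS) - d  # analogue add, digital sub
    return acc / n

random.seed(0)
# Input limited to the lower half of the range. Both readings come out
# about one code high instead of one of them jumping by two codes: the
# mid-scale step is smeared into a near-constant, benign offset.
print(measure(1000 / FS), measure(1100 / FS))
```

The trade-off matches the description above: the usable input range is halved, and the worst-case step turns into a small, almost uniform error spread over the whole range.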
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
There are 13 bits. Nonlinearity is easy to correct by table based calibration. You may need two tables though to handle temperature changes properly. It's a piece of stone like any other precision part.

Stability over time and temperature is always an issue, and linearity correction requires calibration.  There are some interesting ways to do self-calibration of linearity, but they add complexity.  These days it generally pays to use a premium, more linear part to start with.  I think the only application where I regularly see linearity correction is RF transmitters, which use predistortion to improve transmitter linearity.
« Last Edit: February 02, 2022, 12:17:06 am by David Hess »
 

Online dietert1

  • Super Contributor
  • ***
  • Posts: 2091
  • Country: br
    • CADT Homepage
Pulse oximetry is a limited-bandwidth application (1 to 25 Hz) where one wants about 100 to 120 dB of noise-free dynamic range. We do it with an MSP430 and its internal 12-bit ADC with massive oversampling. It requires the CPU to sleep during data acquisition, an external reference and a four-layer board.
Once into massive oversampling, and if enough computing power is available, one can use median noise filtering to supplement averaging. There is a large variety of filter variants with different advantages.
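One simple variant, sketched with hypothetical sample values: take the median of small blocks to reject impulsive outliers, then average the medians for further noise reduction.

```python
import statistics

def median_then_mean(samples, block=5):
    """Median of each block rejects impulsive outliers; averaging the
    medians then reduces the remaining noise."""
    medians = [statistics.median(samples[i:i + block])
               for i in range(0, len(samples) - block + 1, block)]
    return sum(medians) / len(medians)

# A steady 2048-code signal with one large glitch.
data = [2048] * 25
data[7] = 4095
print(sum(data) / len(data))   # plain mean is pulled up by the glitch
print(median_then_mean(data))  # median stage rejects it: 2048.0
```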

Regards, Dieter
 

