Author Topic: Accurately measuring <1mV using an oscilloscope  (Read 2510 times)


Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Accurately measuring <1mV using an oscilloscope
« on: December 16, 2017, 01:23:14 am »
What kind of oscilloscope would I need to buy to do this? What I'm trying to track down is <10 counts on a 16-bit ADC that's sampling at 500kHz.
 

Offline joeqsmith

  • Super Contributor
  • ***
  • Posts: 5906
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #1 on: December 16, 2017, 01:27:00 am »
Going by the title alone: I'd just amplify the signal. I have no idea what you are trying to measure.
How electrically robust is your meter?? https://www.youtube.com/channel/UCsK99WXk9VhcghnAauTBsbg
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #2 on: December 16, 2017, 01:59:50 am »
It's the output of a linear silicon sensor. One of the biggest challenges is that there are up to 2048 individual pixels being read out, and I need to compare multiple pixels on the screen to see the delta between the readings.
 

Offline joeqsmith

  • Super Contributor
  • ***
  • Posts: 5906
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #3 on: December 16, 2017, 02:16:42 am »
I'm still lost, but that is normal. Why do you want to use a scope rather than just digitizing the data and feeding it to a PC? Once it's in the PC, you could do pretty much anything with it. I'm sure there is a reason you want to use a scope.
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #4 on: December 18, 2017, 04:35:21 pm »
Quote from: joeqsmith on December 16, 2017, 02:16:42 am
I'm still lost but that is normal.  Why are you wanting to use a scope vs just digitize the data and feed that to a PC?  Once it's in the PC, you could do pretty much anything with it.  I'm sure there is a reason you want to use a scope.

I'm trying to track down where it's originating. Relative to the theoretical, marketing-polished datasheet for the sensor, I'm seeing about 5 ADC counts of added noise between the pixels. I could rip everything up and re-attempt, but I would prefer to understand where the issue is coming from. Maybe the regulators feeding the sensor are noisy, or maybe it's the routing through the analog circuit.

I kept all of the analog circuitry completely isolated from the control signals and power rails, and the layout itself is as tight as I could make it. There just has to be something I'm missing.
 

Offline MadTux

  • Frequent Contributor
  • **
  • Posts: 525
Re: Accurately measuring <1mV using an oscilloscope
« Reply #5 on: December 18, 2017, 05:42:54 pm »
Tek 7A22?  ;D ;D ;D
 

Offline tecman

  • Frequent Contributor
  • **
  • Posts: 434
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #6 on: December 18, 2017, 05:53:05 pm »
If you find an old 561 (or 564) and a 3A9 vertical plug-in, it will go down to 10uV/div.

paul
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #7 on: December 18, 2017, 06:05:20 pm »
So, for something like this I actually want to go down to an analog scope?
 

Offline voltsandjolts

  • Supporter
  • ****
  • Posts: 754
  • Country: gb
Re: Accurately measuring <1mV using an oscilloscope
« Reply #8 on: December 18, 2017, 06:16:56 pm »
What is the reference voltage on the ADC?
What does 5 ADC counts equate to in terms of ADC input voltage?
What is the thermal noise of the sensor?
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #9 on: December 18, 2017, 06:25:35 pm »
1. Voltage reference for the ADC is 5V with a 16-bit resolution.
2. Voltage swing of the sensor is 1V. I gain it 4x and invert the signal at about 4.5V to get the most out of the ADC range.
3. The marketing datasheet for the sensor puts us at an expected 8-10 counts due to read noise, dark current, etc. It excludes a couple of other sources that are not mathematically defined.
4. Existing noise is about +/- 20 counts. This is about 1.5mV total +/- between pixels.

I think it's safe to assume that there will be at least another 5 counts of noise from other sources internal to the sensor. They mention them in their design guides but do not precisely quantify them. So I'm looking at attempting to reduce the total noise by 5 ADC counts, or 0.38mV.

My first thought is to place some additional pF caps in critical areas to mitigate it, but I need to be able to measure it first so I don't waste a board spin.
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 16329
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: Accurately measuring <1mV using an oscilloscope
« Reply #10 on: December 18, 2017, 06:36:02 pm »
Quote from: Pack34 on December 18, 2017, 06:05:20 pm
So, for something like this I actually want to go down to an analog scope?
What scope are you using ATM ?

Would one of the newish Siglent SDS1000X-E's with full BW and unmagnified 500uV/div be sensitive enough ?
Avid Rabid Hobbyist
 

Online egonotto

  • Regular Contributor
  • *
  • Posts: 233
Re: Accurately measuring <1mV using an oscilloscope
« Reply #11 on: December 18, 2017, 06:41:07 pm »
Perhaps a 16-bit high-resolution PicoScope 4262?

 

Offline voltsandjolts

  • Supporter
  • ****
  • Posts: 754
  • Country: gb
Re: Accurately measuring <1mV using an oscilloscope
« Reply #12 on: December 18, 2017, 06:51:40 pm »
Just asking questions to provoke discussion, which often helps!

Quote
1. Voltage reference for the ADC is 5V with a 16-bit resolution.
So at the ADC input, 1 bit equates to 76uV

Quote
2. Voltage swing of the sensor is 1V. I gain it 4x and invert the signal at about 4.5V to get the most out of the ADC range.
So 1 bit now equates to 76uV / 4 = 19uV approx. from the sensor. The amplifier will add noise depending, among other things, on bandwidth, which is?
Edit: Ahh, I see you already said sampling rate was 500kHz.

Quote
3. Marketing datasheet for the sensor has us at an expected 8-10 counts expected due to read noise, dark current, etc. It does exclude a couple other sources that are not mathematically defined.
Does the sensor datasheet really quote noise in counts? Counts with respect to what? Can you tell us what the sensor is?

Quote
4. Existing noise is about +/- 20 counts. This is about 1.5mV total +/- between pixels.
20 counts is 20*19uV = 380uV = 0.38mV at the sensor output, x4 = 1.5mV as you say at the ADC input.

Quote
I think it's safe to assume that there will be at least another 5 counts of noise from other sources internal to the sensor. They mention them in their design guides but do not precisely quantify them. So I'm looking at attempting to reduce the total noise by 5 ADC counts, or 0.38mV.

Do you have time to average multiple datasets from the sensor in microcontroller or whatever?
Reduce bandwidth of x4 amplifier, use it to do some averaging?
« Last Edit: December 18, 2017, 07:13:54 pm by voltsandjolts »
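The count/voltage arithmetic in the post above can be checked with a short script (the 5 V reference, 16-bit resolution, and 4x gain are the values quoted in the thread):

```python
# Sanity-check the counts <-> voltage conversions discussed above.
VREF = 5.0    # ADC reference (V)
BITS = 16
GAIN = 4.0    # analog front-end gain between sensor and ADC

lsb_adc = VREF / 2**BITS       # one count at the ADC input
lsb_sensor = lsb_adc / GAIN    # one count referred to the sensor output

print(f"1 LSB at ADC input:     {lsb_adc * 1e6:.1f} uV")       # ~76.3 uV
print(f"1 LSB at sensor output: {lsb_sensor * 1e6:.1f} uV")    # ~19.1 uV
print(f"20 counts at ADC input: {20 * lsb_adc * 1e3:.2f} mV")  # ~1.53 mV
print(f"5 counts at ADC input:  {5 * lsb_adc * 1e3:.2f} mV")   # ~0.38 mV
```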
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 2677
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #13 on: December 18, 2017, 07:02:17 pm »
If it's a photodiode array, shield it from light *and* radiation.  Then collect a lot of data by reading the sensor array and look at the statistics of each pixel.  If you can read the pixels in random order, do so using a Mersenne Twister PRNG.

Without knowing the details of the device, it's hard to say how to proceed.  If it's being read out through a shift register as seems likely, you probably  have correlated noise that you need to compensate for.

I can't think of a way to use an 8-10 bit ADC to hunt noise in a 16 bit ADC design.  Given a bunch of data collected with the 16 bit ADC I can tell you how to analyze it given enough information about the design.
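A minimal sketch of the per-pixel statistics rhb suggests; the `frames` array here is simulated ADC data (an assumption for illustration), whereas in practice it would be many captured readouts of the 2048-pixel register:

```python
import numpy as np

# Collect many frames under uniform (dark) conditions and look at the
# statistics of each pixel; shape is (n_frames, n_pixels).
rng = np.random.default_rng(0)
n_frames, n_pixels = 5000, 2048
frames = rng.normal(loc=32768, scale=20, size=(n_frames, n_pixels))  # simulated counts

per_pixel_mean = frames.mean(axis=0)  # fixed-pattern (offset) component
per_pixel_std = frames.std(axis=0)    # temporal noise, pixel by pixel

# Structure in the means suggests fixed-pattern/correlated readout noise;
# a flat std profile suggests uniform temporal noise across the register.
print(float(per_pixel_std.mean()))
```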
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #14 on: December 18, 2017, 07:47:37 pm »
Quote from: tautech on December 18, 2017, 06:36:02 pm
What scope are you using ATM ?

Would one of the newish Siglent SDS1000X-E's with full BW and unmagnified 500uV/div be sensitive enough ?

The only scope I have access to at work is a Tek TDS 1012C-EDU 100MHz. At home I have an Agilent MSOX2024A. The problem with the Agilent is that it can go down to 1mV/div, but everything is just noise at that point and the first couple of divisions seem useless. Everything I need to look at is below the noise floor.

Quote from: voltsandjolts on December 18, 2017, 06:51:40 pm
Just asking questions to provoke discussion, which often helps!

Quote
1. Voltage reference for the ADC is 5V with a 16-bit resolution.
So at the ADC input, 1 bit equates to 76uV

Quote
2. Voltage swing of the sensor is 1V. I gain it 4x and invert the signal at about 4.5V to get the most out of the ADC range.
So 1 bit now equates to 76uV / 4 = 19uV approx. from the sensor. The amplifier will add noise depending, among other things, on bandwidth, which is?
Edit: Ahh, I see you already said sampling rate was 500kHz.

Quote
3. Marketing datasheet for the sensor has us at an expected 8-10 counts expected due to read noise, dark current, etc. It does exclude a couple other sources that are not mathematically defined.
Does the sensor datasheet really quote noise in counts? Counts with respect to what? Can you tell us what the sensor is?

Quote
4. Existing noise is about +/- 20 counts. This is about 1.5mV total +/- between pixels.
20 counts is 20*19uV = 380uV = 0.38mV at the sensor output, x4 = 1.5mV as you say at the ADC input.

Quote
I think it's safe to assume that there will be at least another 5 counts of noise from other sources internal to the sensor. They mention them in their design guides but do not precisely quantify them. So I'm looking at attempting to reduce the total noise by 5 ADC counts, or 0.38mV.

Do you have time to average multiple datasets from the sensor in microcontroller or whatever?
Reduce bandwidth of x4 amplifier, use it to do some averaging?

Re: Datasheet Noise
Noise is specified in electrons. I then have to scale up using the node sensitivity of the detector, then the number of rows, and then the analog gain of the circuit after the output of the detector.

Re: Averaging
Applying any sort of averaging on the output removes the issue, as do post-processing tricks like interpolation and boxcar smoothing. The problem is that we need to use the raw data. In some applications the detector has to collect charge for a significant amount of time (60 seconds); averaging at that point would require the end user to sit there for minutes to get a usable sample. There's also the need to detect information within 18 counts above the noise floor. Being able to remove 5 counts of electrical noise would be absolutely huge.

Quote from: rhb on December 18, 2017, 07:02:17 pm
If it's a photodiode array, shield it from light *and* radiation.  Then collect a lot of data by reading the sensor array and look at the statistics of each pixel.  If you can read the pixels in random order, do so using a Mersenne Twister PRNG.

Without knowing the details of the device, it's hard to say how to proceed.  If it's being read out through a shift register as seems likely, you probably  have correlated noise that you need to compensate for.

I can't think of a way to use an 8-10 bit ADC to hunt noise in a 16 bit ADC design.  Given a bunch of data collected with the 16 bit ADC I can tell you how to analyze it given enough information about the design.

It's essentially a photodiode array, silicon based. Everything is read out linearly using a horizontal shift register internal to the detector. You get a vertically binned line of the area-scan detector.

I attempt some correlated double sampling to remove electrical noise and offset, taking a sample of the momentary electrical dark just before that pixel's charge is released from the output shift register.
 
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 2677
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #15 on: December 18, 2017, 10:50:49 pm »
This may sound tangential, but I think it's the best way to address what you want to do. This is as close as I can get without discussing what you're doing in more detail.

Consider reading the output of a spectrometer using a linear CCD array. The objective is to measure the light intensity at different points along the array. With this model you have a number of noise sources in addition to the statistical nature of light: there are losses in the charge transfers in the shift register, random ionizing radiation, and EMI from the device.

Conventionally, you have resolution equal to the number of pixels in your array.  However, courtesy of work by David Donoho and Emmanuel Candes of Stanford, you can do 5-10x better using compressive sensing. 

If, rather than sampling each pixel individually, you sample sets of pixels with a random subset turned off in each sample, you can increase both your spatial and amplitude resolution by solving an L1 problem using linear programming or one of several other algorithms with very exotic names which accomplish the same thing.

The extreme example is the single pixel camera devised by a team at Rice led by Richard Baraniuk.  Here's a link:

http://www.wisdom.weizmann.ac.il/~vision/courses/2010_2/papers/csCamera-SPMag-web.pdf

TI does this in their near infrared photospectrometer product:

https://www.element14.com/community/roadTestReviews/2302/l/dlp-nirscan-nano-evaluation-module-review

The mathematics are nothing short of agonizing to read.  However, actually doing it is quite easy.  You only have to suffer if you want the proof it works.

The first monograph to come out was:

"A Mathematical Introduction to Compressive Sensing"
Foucart & Rauhut, 2013

There are others now, but I've only read Foucart & Rauhut.

Aside from improving the resolution, you can also reduce the data acquisition time. This is being done for MRI scanners, and I'm sure there are already other commercial applications. The mathematics is very general: in addition to compressive sensing, it solves matrix completion (commonly called the "Netflix problem"), inverse problems in physics, and blind source separation, which can select a single conversation out of a room full of people talking using a very small number of microphones placed at random locations in the room.

To apply this to your problem requires being able to explicitly describe all instances of the desired signal, that is, all the possible answers. Finding a sparse L1 solution finds the optimal L0 solution if and only if the solution is sparse. In matrix notation, we're solving Ax=y where y is the measured data, x is the desired result, and A is a matrix in which all the columns are uncorrelated. I usually call this L1 basis pursuit as that seems to me the most general name, but there are plenty of others. To use the language of Mallat in "A Wavelet Tour of Signal Processing", the A matrix is a dictionary containing all the possible answers and x is a sparse vector which selects the combination of columns of A that best fits y. If an L1 (least summed absolute error) solution for x exists which is a sparse vector, it is unique and is the optimal L0 solution. The proof of this was done by David Donoho of Stanford in 2004.

To continue with the spectrometer example, A would be a matrix in which each column described the amplitude of the light at different positions along the array for a particular element. x would be the amount of that element present and y would be the CCD array output.

I've glossed over an immense amount of stuff, so I'm not sure what I've written makes sense.  The links are much longer and more thorough, though even those don't explain why it works.
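A toy version of the L1 basis pursuit described above, on synthetic data: min ||x||_1 subject to Ax = y becomes a linear program by splitting x = u - v with u, v >= 0. The matrix, sparsity, and sizes are illustrative assumptions, not the sensor problem itself:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, k = 20, 40, 3                        # measurements, unknowns, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # columns approximately uncorrelated
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
y = A @ x_true

# LP form: minimize sum(u) + sum(v) = ||x||_1 subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(float(np.abs(A @ x_hat - y).max()))  # equality constraint residual
```

With enough random measurements relative to the sparsity, the recovered `x_hat` typically matches `x_true`; the LP itself only guarantees a feasible minimum-L1 solution.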
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 1257
  • Country: 00
Re: Accurately measuring <1mV using an oscilloscope
« Reply #16 on: December 18, 2017, 11:29:51 pm »
Quote from: Pack34 on December 18, 2017, 07:47:37 pm
The only scope I have access to at work is a Tek TDS 1012C-EDU 100MHz. At home I have an Agilent MSOX2024A. The problem with the Agilent is that it can go down to 1mV/div but everything is just noise at that point and the first couple divisions seem useless. Everything I need to look at is below the noise floor.
I wonder if you are suffering from common mode noise. What does the noise look like if you short the probe tip to the ground lead while still connected to the device under test?

Many of the very high gain amplifiers, like in some of those old Tek scopes, will have differential inputs to avoid noise sources like ground loops.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 10019
  • Country: us
  • DavidH
Re: Accurately measuring <1mV using an oscilloscope
« Reply #17 on: December 19, 2017, 05:16:19 am »
A general purpose oscilloscope is the wrong tool for this.  Even with 16-bit resolution, they will have too much noise and drift.  It might be possible with a Tektronix 7A22 amplifier which supports 10uV/div with a controlled bandwidth for low noise being used as a differential comparator.  LeCroy makes a modern version of the 7A22 for use with any oscilloscope.

It is very easy to miss the datasheet performance of a 16-bit ADC so I would start there by measuring the RMS and peak-to-peak noise as the input is shorted further and further toward the sensor.  At 500 kSamples/second, reference noise is also going to be a big problem.  An FFT of the output to recover the noise spectrum might be informative.
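The shorted-input measurement suggested above reduces to simple statistics on a block of ADC codes. Here `codes` is simulated Gaussian data standing in for a real AD7980 capture:

```python
import numpy as np

# Report RMS and peak-to-peak noise, in counts and referred to volts,
# for a block of samples taken with the input shorted.
rng = np.random.default_rng(3)
codes = np.round(32768 + rng.normal(scale=1.2, size=100_000)).astype(int)

rms_counts = codes.std()                 # RMS noise in counts
ptp_counts = codes.max() - codes.min()   # peak-to-peak noise in counts

lsb = 5.0 / 2**16                        # 76.3 uV per count (5 V ref, 16 bit)
print(f"RMS: {rms_counts:.2f} counts ({rms_counts * lsb * 1e6:.0f} uV)")
print(f"p-p: {ptp_counts} counts ({ptp_counts * lsb * 1e6:.0f} uV)")
```

Repeating this while moving the short progressively toward the sensor, as the post suggests, shows where along the chain the noise enters.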

 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #18 on: December 19, 2017, 03:39:18 pm »
Quote from: David Hess on December 19, 2017, 05:16:19 am
A general purpose oscilloscope is the wrong tool for this.  Even with 16-bit resolution, they will have too much noise and drift.  It might be possible with a Tektronix 7A22 amplifier which supports 10uV/div with a controlled bandwidth for low noise being used as a differential comparator.  LeCroy makes a modern version of the 7A22 for use with any oscilloscope.

It is very easy to miss the datasheet performance of a 16-bit ADC so I would start there by measuring the RMS and peak-to-peak noise as the input is shorted further and further toward the sensor.  At 500 kSamples/second, reference noise is also going to be a big problem.  An FFT of the output to recover the noise spectrum might be informative.

Re: ADC Performance
That was actually one of the first things I checked. I'm using an AD7980 (datasheet linked below). The largest error contributor is linearity error, which amounts to about four counts as the 16-bit window is traversed. Transition noise is only 0.75 counts and gain error is +/- 2 counts. However, I'm not actually seeing a read-noise specification in the datasheet.

For the voltage reference, I've already put a ferrite in series with the reference output, along with some additional capacitors; I figured I'd take the "nuke it from orbit" approach. The reference is an ADR445. I have a 1k ferrite (MMZ1005S102CT000) in series at the pin on the ADC, with 10uF, 0.1uF, and 10,000pF caps under the chip. Even though the reference is supposed to be ultra-low noise, it could be picking up something along the way to the ADC.

I ended up doing the same on the sensor. It has +24V, +12V, +3V, and -8V supplies, so I added a bead and footprints for 10uF, 0.1uF, 10,000pF, and 100pF just to be sure. Previously I just had the 10uF and 0.1uF, with no beads other than on the board's input connector. Since each voltage pin gets its own, I was able to find cheap 1k, 150mA ferrites in an 0402 package that should work well. On the input connector I had to use ferrites with a much lower resistance rating because of the summed current of all the supplies.

Hopefully this will do the trick...

Datasheets:
http://www.analog.com/en/products/analog-to-digital-converters/ad7980.html
http://www.analog.com/media/en/technical-documentation/data-sheets/ADR440_441_443_444_445.pdf
 

Offline Pack34

  • Frequent Contributor
  • **
  • Posts: 667
Re: Accurately measuring <1mV using an oscilloscope
« Reply #19 on: December 19, 2017, 03:53:56 pm »
Quote from: rhb on December 18, 2017, 10:50:49 pm
[compressive sensing suggestion from Reply #15 above, snipped]

I'm working with a Hamamatsu sensor. All the vertical pixels are binned down into an output shift register and then pushed out sequentially. I like the idea of scrapping neighboring pixels, but the sensors I'm working with don't allow this, and it would have some serious negative consequences for the system's optical performance. Glass is much more expensive than a PCB.

A primary source of noise for the sensor is dark current, which depends on temperature and time. I've put a lot of work this year into optimizing the thermal setup, and I've been able to get the system 0.01C stable across the 0-40C operating environment. I verified this with a 6.5-digit DMM, external to the device, monitoring the thermistor inside the sensor. Because of this, I'm confident the sensor is cooled to the -15C target and stable once it reaches that point. Extrapolating the dark current from the provided e-/pixel/sec figure, the uV/e- node sensitivity, and the full well capacity, I believe this portion is working as expected.

Read noise is secondary to this. Using the effective gain figure of the analog front end, I was able to determine that it's about 4.1 times the raw output swing from the detector. Using this figure and extrapolating the worst-case read noise, I should expect about +/- 6 counts just from this.

The unknowns in the sensor are the shot and dark-shot noise, which are neither defined nor easily calculated. Shot noise is related to the volume of charge collected by the detector at a given time; dark-shot noise is the same but related to the accumulated dark current for that sample.

So, if I'm getting about +/- 6 counts from read-noise and if I assume that I'm getting the same from the shot and dark-shot noise, then I should be getting about +/- 12 counts. This is a bit lower than what I'm seeing so I'm now in the effort to quantify and minimize the amount of electrical noise that's being injected into the system.

In the grand scheme of things, when you have any sort of decent signal for lab experiments, the added ~10 counts of noise really doesn't matter, especially when the detector saturates in 10ms with a basic light source. However, for the exceptionally low-signal situations in some bleeding-edge applications, this added ~10 counts makes it unusable.
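One caveat on the budget in this post: if the read-noise and (dark-)shot-noise contributions are statistically independent, they combine as a root-sum-square rather than by direct addition, so two +/-6 count sources give about +/-8.5 counts, not +/-12. A quick check (the equal-contribution assumption is taken from the post, not measured):

```python
import math

read_noise = 6.0   # counts, worst-case read noise from the post above
shot_noise = 6.0   # counts, assumed equal to read noise as in the post

# Independent noise sources add in quadrature (RSS).
combined = math.hypot(read_noise, shot_noise)
print(f"combined: +/-{combined:.1f} counts")  # +/-8.5 counts
```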
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 2677
  • Country: us
Re: Accurately measuring <1mV using an oscilloscope
« Reply #20 on: December 19, 2017, 04:58:17 pm »
The selection of random pixels would be done in software, not hardware.

What you're facing is a math problem.  From what you've stated, the engineering is at the limit of what can be done.

You've got several noise sources.  Some of them are correlated, e.g. the dark noise and dark shot noise.  As suggested earlier you should collect a bunch of data under uniform conditions and compute spectra.  Collect a bunch of samples for various lengths of time, pad with 1024 zeros, do an FFT and average the FFTs.  You want thousands of samples to average.  The zeros are to prevent wraparound in the FFT.  It's an important and often overlooked issue.
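A minimal sketch of that averaging recipe, with simulated white-noise records standing in for real ADC captures (record length and count are illustrative):

```python
import numpy as np

# Average the power spectra of many zero-padded records; padding the FFT
# avoids circular wraparound, and averaging beats down the variance of
# each spectral estimate.
rng = np.random.default_rng(4)
n_records, n_samples, n_pad = 2000, 1024, 1024

psd_accum = np.zeros((n_samples + n_pad) // 2 + 1)
for _ in range(n_records):
    record = rng.normal(size=n_samples)
    record = record - record.mean()                      # remove DC first
    padded = np.concatenate([record, np.zeros(n_pad)])   # zero-pad per rhb
    psd_accum += np.abs(np.fft.rfft(padded)) ** 2

avg_psd = psd_accum / n_records
# For white noise the averaged spectrum is flat; peaks would indicate
# correlated noise (clock feedthrough, supply ripple, etc.).
print(float(avg_psd[1:].mean()))
```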

The averaged spectra should be flat: the phase should be near zero and the amplitude constant. If that is not the case, then you *might* have something you can do in HW. The state of development is such that I think it unlikely you can get better performance from HW changes.

What you need is a denoising algorithm.  There are a great many of these starting with Wiener's work in the 40's through to Donoho, Candes and others today.  You can get a good overview in Mallat's 3rd ed of "A Wavelet Tour of Signal Processing". The current state of the art is L1 basis pursuit denoising which is treated in "A Mathematical Introduction to Compressive Sensing", though the focus is more narrow.  It is, however, a standard consideration in compressive sensing.

Compressive sensing got started when Candes decided to experiment with L1 solutions to Ax = y where y was the sum of a small number of sinusoids and random impulses which were sampled randomly at sub-Nyquist rates.  This turned out to work really well  and set off a flurry of work by Candes and Donoho in 2004.  I consider it the most important work in applied mathematics since Wiener, Shannon et al.  For an old guy who spent his working career in a major oil company R&D setting doing variations on their work, that's a *very* strong compliment!

Other than Mallat, I don't have a monograph that specifically treats denoising in a manner appropriate to your work. However, I have several very large folders of papers by Candes, Donoho and their students. I know there are several treatments of the problem in those, so I'll have a look through them this evening.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 10019
  • Country: us
  • DavidH
Re: Accurately measuring <1mV using an oscilloscope
« Reply #21 on: December 19, 2017, 07:20:45 pm »
Quote from: Pack34 on December 19, 2017, 03:39:18 pm
For the voltage reference, I've already put a ferrite in series to the voltage reference with some additional capacitors. I figure I would take the "nuke it from orbit" approach. The reference is an ADR445. Then I have a 1k ferrite (MMZ1005S102CT000) in series at the pin on the ADC, with a 10uF, 0.1uF, and 10,000pF cap under the chip. Even though the reference is supposed to be ultra-low noise, it could be picking up something along the way to the ADC.

So you have the reference decoupling covered but ... the reference output impedance (from the datasheet) combined with the 10uF decoupling capacitor makes a low-pass filter with a cutoff frequency above 10 kHz.  By 10 kHz, the reference noise is already 66uV peak-to-peak (at least, see below about noise and single point grounds) so the reference noise is greater than the LSB and transition noise of the AD7980.  A low noise active filter for the reference at the ADC could knock this down considerably.

https://www.edn.com/design/analog/4327729/Filter-your-voltage-reference-for-low-noise-performance
http://www.linear.com/solutions/7994
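The rough numbers behind that point can be worked out directly. The reference output impedance used here is an assumed illustrative value, not a datasheet figure:

```python
import math

# First-order RC cutoff of the reference's output impedance against the
# 10 uF decoupling capacitor, compared with one ADC LSB.
r_out = 0.5          # ohm -- ASSUMED dynamic output impedance, for illustration
c_decouple = 10e-6   # F, from the post above

fc = 1 / (2 * math.pi * r_out * c_decouple)  # ~32 kHz: little noise filtering
lsb = 5.0 / 2**16                            # 76.3 uV per count

print(f"filter cutoff: {fc / 1e3:.0f} kHz")
print(f"1 LSB = {lsb * 1e6:.1f} uV, vs ~66 uVpp reference noise to 10 kHz")
```

With a cutoff in the tens of kHz, most of the quoted reference noise band passes straight through, which is why an active filter (per the links above) helps.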

If the reference shares a ground with the ADC at a distance, then any ground current noise between the ADC and reference contributes directly to the reference noise. This gets tricky with a combined analog and digital ground when more than one device uses the reference, because each wants a single-point ground. A solid ground plane by itself is not enough for a high-precision mixed-signal design, although sometimes slots may be cut into the ground plane to route disruptive currents around sensitive nodes.

The AD7980's pseudo differential input mitigates this problem for the signal input but the AD7980 lacks a differential input for the reference so there is an assumption that the reference shares a single point ground with the ADC.  Sometimes a low noise instrumentation amplifier is used to convert a differential reference signal into a single ended signal at the ADC; it subtracts the difference between the reference ground and ADC ground from the reference removing this noise.
 

