Author Topic: Why are V(rms) measurements frequency dependent?


Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Why are V(rms) measurements frequency dependent?
« on: August 05, 2018, 04:03:29 am »
So, I've got this great new add-on for my current measurements, the uCurrent. 

It is going to allow me to measure DC current up to 999.99 mA with 0.05% accuracy, while adding only 10 milliohms of impedance.

The problem is that in order to measure AC current, the VAC from the meter must be used. 

I have measured a signal from my FG and have found that the V(rms) reading changes from 45 Hz up to 10 kHz.

My meters (Mastech 8040), and all of the meters I've seen so far, seem to have the error problem that is listed in the accuracy table specifications.

They all have an error that is frequency dependent.

Why is this?

Can it be solved like a uCurrent add-on to get an accurate VAC measurement that would cancel any frequency dependence?

Is there a meter already out there that can accurately measure VAC across a broad range of frequencies, up to say 1 MHz, or even be frequency independent?

I know that I could just use low turn-on diodes to find Vp and reverse-calculate it, but I'm not sure even that would be frequency independent.

Would this be possible as a realistic piece of equipment so that this is not necessary?
 
The following users thanked this post: cncjerry

Offline JS

  • Frequent Contributor
  • Posts: 947
  • Country: ar
Re: Why are V(rms) measurements frequency dependent?
« Reply #1 on: August 05, 2018, 05:33:47 am »
Because of the frequency response of the DMM chipset. When using the onboard TRMS converter inside the DMM chipset, it is usual to get reasonable specs up to 1 kHz, after which it starts to be less accurate. Better DMMs have a dedicated IC to do the TRMS conversion and get good specs up to 100 kHz or so. Compare the BM235 and the 121GW, EEVblog's DMMs, and you'll see: the BM235 uses the chipset and is only specified up to 800 Hz, while the 121GW uses an external IC and its specs go up to 5 kHz. There are audio TRMS converters which do a good job beyond 20 kHz, depending on the level.


If you want to go higher you could use a dedicated higher-frequency instrument. TRMS doesn't go much higher, but you could build a higher-frequency measurement by just using a diode. The µCurrent leaves you with a decent level to amplify a bit further and run through a diode rectifier to get a value. If you use something like a voltage doubler you will be seeing something close to a peak-to-peak value.
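For a sine wave, converting a rectified peak reading back to RMS is simple; a minimal sketch in Python, assuming an ideal peak detector and a hypothetical 0.3 V diode drop to add back. The sqrt(2) factor only holds for sinusoids, which is exactly why this is not a true-RMS measurement:

[code]
import math

def peak_to_rms(v_detected, v_diode=0.3):
    """Recover RMS from an ideal peak-detector reading.
    Assumes a pure sine wave; v_diode is the (hypothetical)
    forward drop lost in the rectifier diode."""
    v_peak = v_detected + v_diode     # add back the diode drop
    return v_peak / math.sqrt(2)      # sine only: Vrms = Vpeak / sqrt(2)

# a 1 Vrms sine has a 1.414 V peak; the detector sees ~1.114 V
print(peak_to_rms(1.114))             # -> ~1.0
[/code]

A voltage doubler gives roughly Vpp, so its reading would be divided by 2 before this conversion; any non-sinusoidal waveform breaks the sqrt(2) assumption.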

JS
« Last Edit: August 05, 2018, 05:42:06 am by JS »
If I don't know how it works, I prefer not to turn it on.
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #2 on: August 05, 2018, 06:23:49 am »
I get what you are saying, and it makes sense.

How expensive would it be to make a high-quality, DIY-type Vrms meter?

Basically, it would be only for AC Vrms.

It would output VDC so that normal 4.5-digit meters could be used, and it would run from a CR2032 battery.

It sounds like a lot of firmware programming, some pricey ICs, and a custom board, but I think it would be much cheaper than a 100,000-count meter.


Where do I start?
 

Offline JS

  • Frequent Contributor
  • Posts: 947
  • Country: ar
Re: Why are V(rms) measurements frequency dependent?
« Reply #3 on: August 05, 2018, 06:39:27 am »
A single CR2032 might not be enough; you usually want higher supply levels to get better linearity. Start by looking at TRMS converters in a parametric search at your supplier. Once you pick one, see what you need around it... With that approach there's no need to program anything. What specs are you trying to get from it?

JS

If I don't know how it works, I prefer not to turn it on.
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #4 on: August 05, 2018, 06:59:55 am »
Well, I guess I just want a Vrms meter that gives me the correct Vrms output without having to worry about frequency.

That might be asking too much... maybe a flat output over a 100 kHz bandwidth? That might be asking too much too.

I took a look at PMIC RMS-to-DC converters, and Digi-Key does not look like they have a good selection.

The AD8436 is in there, but I think the thing to do is to rip open a really, really expensive 100,000-count meter and check out what it's packing.

Maybe I'll find some YouTube videos that show their guts.



 

Offline Kalvin

  • Super Contributor
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Why are V(rms) measurements frequency dependent?
« Reply #5 on: August 05, 2018, 09:47:20 am »
The analog RMS converter used in the Fluke 8842A has an accuracy of around 0.5% or worse (from the Fluke 8842A manual). Similar accuracy can be achieved using an analog RMS converter from Analog Devices up to a few kHz, and with an additional 1% error up to 200 kHz given a sufficient input signal level. The dynamic range of an analog RMS converter is around 60 dB (1:1000), so one will not get very many digits of accuracy, although the resolution might be a digit or so more. In order to get higher accuracy one could use the LTC1968, good up to 150 kHz at 0.1% accuracy. RMS converters are quite sensitive to the input voltage, which means that the [autoranging] input circuitry needs to track the input voltage so that the RMS converter sees an optimal input voltage with a sufficient crest-factor margin.

Edit: The AD8436 looks pretty good.
« Last Edit: August 05, 2018, 09:52:14 am by Kalvin »
 
The following users thanked this post: sourcecharge

Offline maukka

  • Supporter
  • Posts: 107
  • Country: fi
Re: Why are V(rms) measurements frequency dependent?
« Reply #6 on: August 05, 2018, 12:46:02 pm »
Check out HKJ's DMM review page. The popup window on top of the DMM name will show the measured AC volt bandwidth.

Also, Parametrek's DMM search engine has a couple of brands crawled, and there are several Uni-T meters with a specified bandwidth of 100 kHz.
 

Offline Zero999

  • Super Contributor
  • Posts: 19494
  • Country: gb
  • 0999
Re: Why are V(rms) measurements frequency dependent?
« Reply #7 on: August 05, 2018, 12:53:48 pm »
Quote from: Kalvin (Reply #5)
The analog RMS converter used in the Fluke 8842A has an accuracy of around 0.5% or worse... In order to get higher accuracy one could use the LTC1968, good up to 150 kHz at 0.1% accuracy. [...]

Don't some meters do the RMS calculation digitally? That might be more accurate, but it could also use more energy than doing it the analogue way.
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #8 on: August 05, 2018, 01:42:09 pm »
Quote from: Zero999 (Reply #7)
Don't some meters do the RMS calculation digitally? That might be more accurate, but it could also use more energy than doing it the analogue way.

Some of the bench-top meters do the RMS measurement digitally; Keysight calls this Truevolt.
This method has some advantages, like good linearity, temperature stability, and working well at relatively low levels, but it still has a limited bandwidth set by the fast ADC used.

If one accepts a limited amplitude range (much like with the analog solutions), the 10- or 12-bit ADC inside modern µCs can be used. The calculation is not that difficult: just the root mean square, plus some extra compensation for an offset. I have done this once with an 8-bit AVR (10-bit ADC) - it works surprisingly well, about comparable to the low-end analog chips, and thus useful if the amplitude is more than about 2% of full scale and with a limited bandwidth (e.g. around 5 kHz).

One of the nasty points with the analog solutions is that the bandwidth depends on the amplitude - usually less bandwidth at low amplitude. This is one reason they don't work well over a large amplitude range. With higher-frequency content, as in a square wave, there can also be some nonlinearity / interaction of the harmonics - so not all waveforms work equally well.
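A minimal sketch of the digital approach described above (in Python rather than AVR C, and not Kleinstein's actual code): accumulate the sum and sum-of-squares of a block of ADC codes, then take the variance so the DC offset drops out before the square root.

[code]
import math

def ac_rms(samples, vref=5.0, bits=10):
    """Block RMS of raw ADC codes with offset compensation:
    AC_rms^2 = mean(x^2) - mean(x)^2  (the variance).
    On a small micro, keep s and s2 in wide integers."""
    n = len(samples)
    s = sum(samples)                      # sum of codes
    s2 = sum(x * x for x in samples)      # sum of squared codes
    var = s2 / n - (s / n) ** 2           # variance, in LSB^2
    return math.sqrt(var) * vref / (1 << bits)   # LSB -> volts

# test: a 200-LSB-amplitude sine riding on a mid-scale offset
adc = [512 + round(200 * math.sin(2 * math.pi * k / 50)) for k in range(1000)]
print(ac_rms(adc))   # ~141 LSB RMS -> ~0.69 V on a 5 V / 10-bit scale
[/code]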
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #9 on: August 05, 2018, 03:48:31 pm »
I've been reviewing half a dozen spec sheets from meters, and the best I found at a reasonable cost was the Siglent SDM3055, with a 20 kHz bandwidth at 0.02% of reading and 0.05% of range error for 1 year. Unfortunately it's still almost 500 bucks!

EDIT: The Siglent SDM3055 has a 20 kHz bandwidth at 0.2% of reading and 0.05% of range error for 1 year.

So back to the DIY try.

I've been eyeing that LTC1968 RMS-to-DC converter... at only about 9 bucks, it really looks like a winner.

Also, I guess I would need some kind of V reference?

Is the voltage reference for the rms to DC converter Vcc?

Or is it for the "[autoranging] input circuitry [that] needs to track the input voltage so that the RMS-converter will see optimal input voltage with a sufficient crest factor margin"?

I found a good 10V reference from TI.

The REF102, with only a ±0.0025% error and 2.5 ppm/°C drift, at only about 11 bucks each - not bad, right?

But if I need 5V for the LTC1968, the MAX6325CPA has only 1 ppm/°C drift and only a ±0.02% error, for about the same cost.

So what about this autoranging circuitry?

I'm guessing some kind of op amp multiplexing for the input and output....

Can't this be done with a switch, like the uCurrent? Maybe switch in and out different resistor networks?

What if the circuit under test is measured first by the multimeter, and then the add-on is switched to the resistor network required for the op amp to put out the correct voltage for the RMS-to-DC converter, which also switches the output as well?

Then the add-on is connected after it has been switched?

hmmm...
« Last Edit: August 08, 2018, 05:55:09 am by sourcecharge »
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #10 on: August 05, 2018, 04:17:56 pm »
The LTC1968 looks like a good one. I don't see a need for a voltage reference here - a voltage reference would be needed only for the measurement of the DC output, which here would be done by a normal DMM in voltage mode. The LTC1968 would just need a reasonably well-regulated supply.

As for the auto-ranging, there is no absolute need for it here. It could be done by hand too, if there are suitable indications. The relevant numbers are the peak voltages - so one should have some extra circuitry to check the peak voltages. As a minimum this would be something like 2 comparators to check the upper limits, and then manual adjustment by trial and error (use the smallest range that does not indicate an error from the peak values).

The actual gain setting can be quite tricky at higher BW if it needs to be really accurate. This is because the divider would not be just resistors, but would also need parallel capacitance that requires adjustment (a little like the compensation of scope probes). Also, electronic switches have limited isolation when off.
 

Offline Zero999

  • Super Contributor
  • Posts: 19494
  • Country: gb
  • 0999
Re: Why are V(rms) measurements frequency dependent?
« Reply #11 on: August 05, 2018, 04:54:16 pm »
I've just quickly read through the datasheet for the LTC1968. The reference is the common voltage for the AC waveform. The IC measures the difference between the voltages on its inputs. At least one input must be DC-coupled to a steady voltage between the supply rails. For a single-supply application, connect one pin to a potential divider with a bypass capacitor to 0V, and the other input to the signal source via a capacitor. See page 12.
http://www.analog.com/media/en/technical-documentation/data-sheets/1968f.pdf

The output of the LTC1968 is high impedance and needs a buffer amplifier before going to the DVM. A decent op-amp with low offset, high input impedance, and low bias current is required for the buffer.
 
The following users thanked this post: sourcecharge

Offline Kalvin

  • Super Contributor
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Why are V(rms) measurements frequency dependent?
« Reply #12 on: August 05, 2018, 06:21:57 pm »
Quote from: Zero999 (Reply #7)
Don't some meters do the RMS calculation digitally? That might be more accurate, but it could also use more energy than doing it the analogue way.
The RMS can be calculated digitally if you have a fast enough ADC, which means more energy required compared to an analog solution, due to the ADC and the DSP implementation. For a signal with 150 kHz bandwidth, one has to sample at at least 300 kHz - in practice somewhat faster, say 500 kilosamples/second. For a 1 MHz signal one should probably sample at 3 MS/s.

In order to get a high signal dynamic range with sufficient room for crest factor, the ADC has to have as many bits as possible, say 16 bits with 3-4 bits reserved for crest factor (i.e. the peak values of the signal compared to the RMS of the signal, https://en.wikipedia.org/wiki/Crest_factor), leaving 12-13 bits for the RMS computation.

At low signal levels the resolution will suffer due to quantization. In order to compensate for the quantization effects one may need to either increase the sample rate with oversampling, or increase the number of bits of the ADC from 16 bits to 20-24 bits, for example, which will increase the cost of the ADC. Alternatively one may arrange the input signal level so that it is kept as high as possible without clipping (autoranging or manual ranging) in order to get as many significant bits as possible for the best accuracy and resolution.

After sampling, the computation is quite straightforward, requiring some DSP. There are nice algorithms available for computing the RMS: https://www.embedded.com/design/configurable-systems/4006520/Improve-your-root-mean-calculations

My guesstimate is that 3.75 digits of resolution is quite the practical limit one can achieve with a signal bandwidth of > 100 kHz, a typical 16-bit ADC, and an optimal signal level with a crest factor of 10. Probably one could achieve one extra digit with a state-of-the-art, fast 24-bit ADC. One can obtain a better estimate of resolution/accuracy and the effects of different signal levels by performing some simulation and running mathematical/numerical analysis on the quantized signals.
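That kind of simulation takes only a few lines; a sketch (Python/NumPy), assuming an ideal 16-bit quantizer and a sine kept 3 bits (8x) below clipping as crest-factor headroom. Real ADC noise and INL will make the result worse than this ideal case:

[code]
import numpy as np

bits, fs, n = 16, 500_000, 100_000        # 16-bit ADC, 500 kS/s, 0.2 s of data
lsb = 2.0 / 2**bits                       # +/-1 V input range
amp = 1.0 / 8                             # ~3 bits reserved for crest factor

t = np.arange(n) / fs
x = amp * np.sin(2 * np.pi * 10_000 * t)  # 10 kHz test tone
q = np.round(x / lsb) * lsb               # ideal quantizer

rms_true = amp / np.sqrt(2)
rms_meas = np.sqrt(np.mean(q ** 2))
print(f"true {rms_true:.7f} V  measured {rms_meas:.7f} V  "
      f"error {abs(rms_meas - rms_true) / rms_true * 1e6:.1f} ppm")
[/code]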
 

Offline JS

  • Frequent Contributor
  • Posts: 947
  • Country: ar
Re: Why are V(rms) measurements frequency dependent?
« Reply #13 on: August 05, 2018, 06:45:27 pm »


Quote from: Kalvin (Reply #12)
The RMS can be calculated digitally if you have a fast enough ADC... [...]

No need for that much ADC and DSP; you can do oversampling and decimation after rectification, so the resolution and frequency response can be much better than that... You could sample at 1 kHz and still get a response up into the MHz if the sampling aperture is short enough (the ADC's front-end frequency response is the limit, not the sampling frequency), and after averaging you get resolution under one LSB, useful if the ADC linearity is good enough, but without needing the data and computation of bigger ADCs. Using a pseudorandom sampling frequency makes for a better frequency response, minimizing the comb filter at multiples of the sampling frequency.
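The comb-filter point is easy to see numerically; a sketch (Python/NumPy) with an idealized instantaneous sampler: a 1 MHz tone sampled at a fixed 1 kHz always lands on the same phase and the RMS estimate collapses, while pseudorandom sample instants recover it.

[code]
import numpy as np

rng = np.random.default_rng(1)
f_sig, fs, n = 1_000_000, 1_000, 20_000   # 1 MHz tone, 1 kHz sampler

t_fixed = np.arange(n) / fs                     # uniform sample instants
t_rand = np.sort(rng.uniform(0, n / fs, n))     # pseudorandom instants

sig = lambda t: np.sin(2 * np.pi * f_sig * t)
rms = lambda v: np.sqrt(np.mean(v ** 2))

print("true RMS    :", 1 / np.sqrt(2))
print("fixed 1 kHz :", rms(sig(t_fixed)))   # ~0: always samples sin(2*pi*k)
print("random 1 kHz:", rms(sig(t_rand)))    # ~0.707
[/code]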

JS

If I don't know how it works, I prefer not to turn it on.
 

Offline Zero999

  • Super Contributor
  • Posts: 19494
  • Country: gb
  • 0999
Re: Why are V(rms) measurements frequency dependent?
« Reply #14 on: August 05, 2018, 08:16:34 pm »


Quote from: JS (Reply #13)
No need for that much ADC and DSP; you can do oversampling and decimation after rectification... Using a pseudorandom sampling frequency makes for a better frequency response, minimizing the comb filter at multiples of the sampling frequency. [...]
Yes, you should be able to use a lower sample frequency than the bandwidth of the signal, because the waveform will more than likely be repeating, and you want an average over a long time period to do RMS calculations anyway.
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #15 on: August 06, 2018, 09:45:05 am »
Quote from: Kleinstein (Reply #10)
The LTC1968 looks like a good one. I don't see a need for a voltage reference here... The actual gain setting can be quite tricky at higher BW if it needs to be really accurate. [...]

Well, this is what I've got so far. It's nothing fancy, but it keeps the LTC1968 within its 50 mVrms to 500 mVrms range so that it can operate within the 0.1% error tolerance up to 150 kHz.

It should give the add-on the ability to measure from 50 µVrms to 500 Vrms, with the output to the meter between 50 mV DC and 500 mV DC.

Any more additional components, and I think I would be increasing the error.

Basically, it's got the MAX4239 op amps like the uCurrent, and a resistor network that are switched between outputs.

The additional switch to the input of the 1st op amp is needed to limit the voltage on the input leg for higher voltages.

I was going to run the MAX4239 along with the LTC1968 with some type of voltage regulator at 5V and a 9V battery, but I'm not sure if the op amps still have the same characteristics when run at 5V vs 3V. 

I have left out the LMV321 op amp and all of the required capacitors that are on the uCurrent from this schematic, but they will be in the final design.

I will also try to find 0.01% or better resistor tolerances.

I have not included the LTC1968 because B2 doesn't have even one RMS-to-DC converter, let alone this specific one, but the datasheet shows that one of the inputs to the converter needs a series capacitor, plus a capacitor on the output of the DC conversion.

I have included the datasheet for the LTC1968 and a 10V reference that I was going to use for a DC calibration for my bench meters.

I also got what I think are the most relevant graphs from the LTC1968 together to show the linearity.

Anyone have any thoughts, suggestions, maybe what switch to use, and maybe cheap improvements that don't increase the error?

The ranges for this network using the LTC1968 are the following (closed switches; all others open):

XSW8 + XSW1 - Input: 50 µVrms to 500 µVrms, Output: 50 mV DC to 500 mV DC
XSW8 + XSW2 - Input: 500 µVrms to 5 mVrms, Output: 50 mV DC to 500 mV DC
XSW8 + XSW3 - Input: 5 mVrms to 50 mVrms, Output: 50 mV DC to 500 mV DC
XSW4 - Input: 50 mVrms to 500 mVrms, Output: 50 mV DC to 500 mV DC
XSW5 - Input: 500 mVrms to 5 Vrms, Output: 50 mV DC to 500 mV DC
XSW6 - Input: 5 Vrms to 50 Vrms, Output: 50 mV DC to 500 mV DC
XSW7 - Input: 50 Vrms to 500 Vrms, Output: 50 mV DC to 500 mV DC


EDIT: The resistor network shown is not available at Digi-Key, but 20M, 2M, 200k, and 20k + 2k + 200 (22,200 ohms) are available at 0.01%, 5 ppm/°C, for a total of about 70 bucks.
« Last Edit: August 06, 2018, 05:00:31 pm by sourcecharge »
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #16 on: August 06, 2018, 11:34:41 am »
So, I just checked the pricing at Digi-Key, and the only resistors with a tolerance better than 0.1% were through-hole types, and they were pricey.

Total cost of just the resistor network, before tax and shipping: 196 bucks.
 

Offline Zero999

  • Super Contributor
  • Posts: 19494
  • Country: gb
  • 0999
Re: Why are V(rms) measurements frequency dependent?
« Reply #17 on: August 06, 2018, 02:51:55 pm »
Quote from: sourcecharge (Reply #16)
Total cost of just the resistor network, before tax and shipping: 196 bucks.

That's because you've used weird values. 9×10^x is not a common resistor value, so it will be expensive, especially in 0.1% tolerance or better.

You would have more luck if you used standard E24 or E96 values. If you divide all of the precision resistor values in that circuit by 5, you get much more widely available resistor values.
 
The following users thanked this post: sourcecharge

Online David Hess

  • Super Contributor
  • Posts: 16607
  • Country: us
  • DavidH
Re: Why are V(rms) measurements frequency dependent?
« Reply #18 on: August 06, 2018, 03:22:02 pm »
Quote from: Kalvin (Reply #12)
The RMS can be calculated digitally if you have a fast enough ADC... For a signal with 150 kHz bandwidth, one has to sample at at least 300 kHz - in practice somewhat faster, say 500 kilosamples/second. For a 1 MHz signal one should probably sample at 3 MS/s.

The only thing which matters is the sampling bandwidth; the sample rate is irrelevant except for uncertainty.  The RMS calculation is just the standard deviation.  Reducing the number of samples does not change the standard deviation so operating the analog to digital converter below the Nyquist frequency is completely acceptable.  Another way to look at it is that aliasing folds the signal over inside the Nyquist bandwidth but the standard deviation of the entire signal is still there to be measured.
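A quick numerical illustration of the standard-deviation argument (synthetic data, Python/NumPy): decimating the sample set leaves the RMS estimate unbiased, it only gets statistically noisier.

[code]
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 0.5, 1_000_000)   # zero-mean signal, true RMS = 0.5

print(np.sqrt(np.mean(x ** 2)))   # RMS
print(np.std(x))                  # standard deviation: identical for zero mean
print(np.std(x[::1000]))          # 1 sample in 1000 kept: still ~0.5, just
                                  # a noisier estimate
[/code]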

This can also be done in the analog domain.  Use a sampler to capture the input and feed the sampler's output to a standard analog translinear RMS converter.  Now the input bandwidth is limited by the sampler and not the analog RMS converter.  The Racal-Dana 9301 and HP3406 sampling RF voltmeters worked this way to make RMS measurements into the GHz range.  Some old analog sampling oscilloscopes had a sampling output which could be attached to a low frequency RMS voltmeter to do the same thing up to 10+ GHz and beyond.

As far as the original question, most DSOs can do what is needed if their accuracy is acceptable.  Just be careful because not all DSOs compute the RMS function correctly.  This is likely to be a problem if they make measurements on the processed display record (this destroys the standard deviation) like the often recommended Rigol DS1000Z series.

If you want to build something simple without using the sampling and standard deviation method, then I suggest using the AD637 or LTC1967 RMS to DC converter IC.
 

Offline Tomorokoshi

  • Super Contributor
  • Posts: 1212
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #19 on: August 06, 2018, 03:49:42 pm »
Quote from: sourcecharge (OP)
Is there a meter already out there that can accurately measure VAC across a broad range of frequencies, up to say 1 MHz, or even be frequency independent?

Additionally, the Hewlett Packard 3403C True RMS Voltmeter can be coaxed to an accuracy of 10% at 100 MHz. At 1 MHz it can be good to around 3% depending on the input voltage level. These use a thermopile converter.

http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1972-03.pdf

http://www.analog.com/media/en/technical-documentation/application-notes/an106f.pdf

They show up on ebay for $100 to $200.
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #20 on: August 06, 2018, 04:11:55 pm »
The MAX4239 is not that suitable for amplifying small AC signals. It has a rather limited BW and quite a lot of higher-frequency noise. For AC amplification it's usually better to separate out the DC part, so that no AZ (auto-zero) op amps are needed.

The simple divider with 900 k will not work well without compensating caps. Normal mechanical switches may not offer sufficient isolation when off, and later op amp stages driven to saturation may generate quite a few higher-frequency components if they are really fast.
So the proposed circuit is not really good for higher frequencies - maybe up to a few kHz. For AC, 0.1% resistors should be good enough - even with adjustment, the caps will not be that accurate and stable.

When using the digital method there is no need for that much ADC resolution: a single RMS reading will use quite a lot of ADC samples (usually at least in the 10000 range), and in a kind of oversampling way this gives a resolution for the RMS value that can be quite a bit higher than the ADC resolution. For a simple DIY test, something like an STM32F3 µC with a reasonably fast (a few MSPS) 12-bit ADC would be a good start.
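A sketch of that oversampling effect (Python, assuming an ideal 12-bit converter with a little input noise acting as dither): averaged over ~10000 samples, the RMS result resolves far below one LSB.

[code]
import numpy as np

rng = np.random.default_rng(3)
bits, n = 12, 10_000
lsb = 2.0 / 2**bits                                # +/-1 V range

x = 0.3 * np.sin(2 * np.pi * 0.0173 * np.arange(n))    # non-coherent test tone
codes = np.round((x + rng.normal(0, 0.5 * lsb, n)) / lsb) * lsb  # noisy 12-bit ADC

print("one LSB     :", lsb)                        # ~0.49 mV
print("true RMS    :", 0.3 / np.sqrt(2))           # 0.21213...
print("measured RMS:", np.sqrt(np.mean(codes ** 2)))   # agrees to well under 1 LSB
[/code]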
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #21 on: August 06, 2018, 04:57:31 pm »
Ya, I wasn't thinking..

Digi-Key has good through-hole resistors at 20M, 2M, 200k, 20k, 2k, and 200, and those come out to about 70 dollars (I think).

I didn't want to decrease the impedance too much so I just did the calculation this time with what was available.

These resistors are 0.01% tolerance and it looks like they are meant for this application.

The 0.1% error of the LTC1968 is good only up to 150 kHz, which should be plenty for what I was hoping for... not sure if it is though...

The MAX4239 has a BW of 6.5 MHz, and with only a 10x gain, the adjusted BW at the last op amp should be

BW(tot) = 6.5 MHz/10 x (2^(1/3)+1)^(1/2), because I am using three stages, all with the same gain.

That gives a BW of 977 kHz.

The uCurrent has proven to me that these op amps can amplify DC, but you are saying that they can't do small AC signals?

Why?

Regarding the switches, what about isolation relays coupled with a simple switch that turns the relays on and off with 5V?

Not too sure what to do regarding the caps - what are the caps going to help with? Do you think using the high resistance values of 20M etc. will limit that problem?

« Last Edit: August 06, 2018, 05:23:06 pm by sourcecharge »
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #22 on: August 06, 2018, 05:16:53 pm »
Quote from: Zero999 (Reply #11)
The output of the LTC1968 is high impedance and needs a buffer amplifier before going to the DVM. A decent op-amp with low offset, high input impedance, and low bias current is required for the buffer. [...]

I was thinking about simply using another MAX4239 at unity gain for the buffer op amp - is that reasonable?
« Last Edit: August 06, 2018, 05:22:14 pm by sourcecharge »
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #23 on: August 06, 2018, 07:55:32 pm »
The MAX4239 is not unity-gain stable. For the output, the unity-gain stable version, the MAX4238, would be better.

The problem with the MAX4239 at the input is that it has quite a lot of noise, especially at higher frequencies (e.g. the AZ frequency). So it would likely not make much sense to have that much amplification. The BW calculation is also a little off - it should be a little below 650 kHz. So at 150 kHz the loop gain would be somewhere around 5, and thus significant errors could start to appear.

The capacitive problem is that the 20 M resistor will have some parasitic capacitance in parallel. To make the divider work well, there should be a parallel capacitive divider with the same ratios, so the smaller resistors would need correspondingly larger caps in parallel. If there is a parasitic 1 pF across the 20 M, the 200 k should have some 100 pF, and so on. As an additional complication, the op amp input and the switch will also have some capacitance, which changes with the switch setting. So to make it work reasonably in all settings, the divider capacitances should be large compared to the stray load capacitance - one has to add larger caps, including one across the largest resistor. So it may be more like 10 pF, 100 pF, 1 nF, ...
At higher frequencies the division ratio is set by the caps, not the resistors.
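The compensation condition is the same as for a 10:1 scope probe: every leg of the divider needs the same RC product. A quick sketch of what that implies for the ladder discussed here, taking the 1 pF parasitic above as the assumption:

[code]
# Compensated divider: equal time constants, so C_k = C_top * (R_top / R_k)
r_top, c_top = 20e6, 1e-12          # 20 M leg with an assumed 1 pF parasitic
for r in (2e6, 200e3, 20e3, 2e3, 200.0):
    print(f"{r:>10,.0f} ohm leg -> {c_top * r_top / r * 1e12:>9,.0f} pF")
[/code]

Note the large caps required on the low legs; the stray capacitance of the switch then matters less, as described above.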

Also, a 2 M resistor has quite a lot of noise on its own, which would limit the use of the smaller ranges. There is a good reason why bench DMMs usually use a 1 M resistive divider for the AC ranges: those 20 M dividers are more intended for DC.

For higher frequencies, isolation relays are not per se better than manual switches. For good attenuation one usually uses more than just a single switch, and avoids sending the signal to amplifiers that are not needed / used.
 
The following users thanked this post: sourcecharge

Offline Zero999

  • Super Contributor
  • Posts: 19494
  • Country: gb
  • 0999
Re: Why are V(rms) measurements frequency dependent?
« Reply #24 on: August 06, 2018, 07:59:28 pm »
Quote from: sourcecharge (Reply #21)
I didn't want to decrease the impedance too much, so I just did the calculation this time with what was available. These resistors are 0.01% tolerance and it looks like they are meant for this application. [...]
Why do you want 0.01% tolerance resistors when the LTC1968 isn't that good?

Low resistor values make no difference to the input impedance when the op-amp is configured as a non-inverting amplifier.
 

Offline JS

  • Frequent Contributor
  • Posts: 947
  • Country: ar
Re: Why are V(rms) measurements frequency dependent?
« Reply #25 on: August 06, 2018, 08:51:32 pm »
Hard to get good tracking with discrete resistors, and TC becomes a problem. You don't need precise resistors - you can calibrate the gain on each range and take notes. The readout would then need some correction, but TC is still a problem.

JS

If I don't know how it works, I prefer not to turn it on.
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #26 on: August 07, 2018, 03:51:47 am »
Quote from: Kleinstein (Reply #23)
The MAX4239 is not unity-gain stable. For the output, the unity-gain stable version, the MAX4238, would be better... The BW calculation is also a little off - it should be a little below 650 kHz... If there is a parasitic 1 pF across the 20 M, the 200 k should have some 100 pF, and so on. [...]

Ya, I calculated that wrong.

3x MAX4239 = 3313 Hz BW

2x MAX4239 = 41833 Hz BW

Basically, if I use the MAX4239, I would only use two instead of three.

So the lowest voltage measurement would be 500 µVrms with a limited BW of about 40 kHz, but it would still output at the 50 mV to 500 mV levels, which basically extends the digits on the Vrms reading of my 4.5-digit meter, if the total error from the op amp network and the RMS-to-DC converter stays at 0.1% up to 40 kHz.

Well, at least the datasheet shows 40 kHz in a chart of error (mV DC out minus mVrms in) vs. mVrms in, between 50 mVrms and 500 mVrms, so if 40 kHz is the max of the op amps then it kind of fits with the manufacturer's data.

I am going to test the uCurrent with a 0.2 Vpp sine wave input through a 100 ohm resistor from 1 kHz to 40 kHz and use my scope to check the output for noise. This should be a 20 µVpp input to the 1st op amp, and I should be able to see a 2 mVpp output. If the noise is too high, I will test it with higher Vpp inputs.

What do you think? 

Is there a better op amp for the job to increase the BW and decrease the noise?

So, the output op amp could be the unity-gain stable MAX4238 - ya, that makes sense; I remember reading that in the datasheet.

Do you think a different op amp is better suited for the job?

Regarding the capacitance of the resistors, doesn't the capacitance in series decrease?

If the 20M resistor has 1pF, it is still going to have 20Mohm in parallel. 

This calculates to a dissipation factor of 1 or greater up to 7958 Hz; if it's 10 pF, then a DF of 1 or greater holds up to 795.8 Hz.

DF = 1/(2*pi*F*Cp*Rp)

I do have a Mastech 5308 LCR meter, so I could just buy one and check the capacitance.

But doesn't adding capacitance in parallel decrease the frequency at which the DF changes from being completely resistive to being partly capacitive?

DF = 1/(2*pi*F*Cp*Rp)

The 0.01% tolerance is to limit the total error: the RMS-to-DC converter is at 0.1%, so to keep the total error down, these seem to be pretty good. I've included the datasheets, and although they are pricey, they have 5 ppm/°C or lower drift.

The last two resistors could be swapped for 0.1% parts to save 50 bucks, but the total error would increase, as these have 15 ppm/°C and so would change with temperature differently than the others. Although the temperature change would only come from environmental conditions, because there would be about 22.2M limiting the current into them.

20M
USF340-20.0M-0.01%-5PPM
2M
USF340-2.00M-0.01%-5PPM
200k
USF340-200K-0.01%-5PPM
20k
USF340-20.0K-0.01%-5PPM
2k
USR2G-2KX1
200
Y1453200R000T9L

I'm thinking that the op amp resistor tolerances should be 0.05% or lower, just like the uCurrent.

Switch XSW8 is only on when the op amps are used; when the op amps are not used, their input must be isolated, because a high input Vrms would damage it.

I don't know - I think the high-resistance networks are not used in meters because of the cost, not because of the electrical characteristics.

Maybe they know a better and cheaper way of doing it with op amps?



 

Offline Kalvin

  • Super Contributor
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Why are V(rms) measurements frequency dependent?
« Reply #27 on: August 07, 2018, 06:26:13 am »


Quote from: Zero999 (Reply #14)
Yes, you should be able to use a lower sample frequency than the bandwidth of the signal, because the waveform will more than likely be repeating, and you want an average over a long time period to do RMS calculations anyway.

Yes, you can get an estimate - probably a sufficiently good one, too - by sampling at a lower frequency. And you will get a better estimate by increasing the sampling frequency, until you reach the Nyquist limit, after which the sampling has captured all the information there is in the signal. If one samples a repetitive signal at a lower rate with the sampling synchronized to the signal, then one should sample with some randomness or a varying trigger position in order to get as much information from the signal as possible over multiple signal periods; sampling with a constant sample period will not increase the information content, as one will always sample at identical places on the waveform. For a non-periodic signal one can only get an estimate of the RMS by sampling at a lower rate.

Summary: Sampling at a lower rate will give you only an estimate of the signal to be measured, whereas sampling at a sufficiently high rate will give you complete information about the band-limited signal to be measured. If the band-limited signal is sampled at a sufficient sample rate, the samples will comply with Parseval's theorem, and they will contain the complete signal representation, with all the information needed for accurate RMS computation.
 

Offline Kalvin

  • Super Contributor
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Why are V(rms) measurements frequency dependent?
« Reply #28 on: August 07, 2018, 06:45:23 am »
Quote from: David Hess (Reply #18)
The only thing which matters is the sampling bandwidth; the sample rate is irrelevant except for uncertainty. The RMS calculation is just the standard deviation... [...]

Sampling a band-limited bandpass signal at a lower rate is called undersampling, which is quite acceptable and a common technique in communication systems. However, the remarks I made in my previous post still hold.
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #29 on: August 07, 2018, 07:35:36 am »
Well, I just checked the noise on those uCurrent boards with the AC signal, but there was too much noise to get any reading with the scope only.

I never bothered to hook it up to the meter, as I was just checking for noise.

After removing everything and simply using the scope on the outputs in the mA, µA, and nA modes, I found both uCurrent boards had about 8 mVpp of noise on the mA and µA outputs (using the 1x probe), and the nA output was offset, with about the same noise.

I didn't bother to check the nA offset or the exact noise Vpp, because I really only use the mA setting, for the ability to measure current with only 10 milliohms of added impedance.

I measured the capacitance of the probes, and they were only about 3 pF.

Using my bench meters, I have measured DC current with it, and it was very accurate.

I think I got 0.1% relative to the current measurement from the meters, but they also have ±0.05% + 6 counts at 0.01 mV resolution, with a measurement of about 240 mA.

It seems reasonable that they would agree to within 0.1%.
 
I was aware of the LMV321 problem, and I even got the ST version of it, the LMV321ILT.

The DC readout from the meter shows only about 0.03 mV when nothing is connected, so that must be the offset.

Now I'm thinking that I should measure the Vrms at 60 Hz with my meter, because it only has an error of ±0.5% + 30 counts at 60 Hz.

Is it only my uCurrent that is doing this, or does anyone else's have 8 mVpp of noise?

Did I get bad parts?

Or is this the noise that you were saying there was?

I guess if this is not a parts problem, then in order to measure small signals cleanly, some type of high-frequency op amp with low noise would be a better solution, even if it had more of an offset voltage.

Could that offset voltage be cancelled out somehow?

If it could, what op amps would be best for both the input and the output?

« Last Edit: August 07, 2018, 07:37:22 am by sourcecharge »
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #30 on: August 07, 2018, 11:09:54 am »
A rather good op amp for AC amplification would be the OPA140 (often the cheaper version, the OPA141, will also work). Noise should be about 6 times lower than the MAX4239, with no extra chopper noise. There are lower-noise, BJT-based op amps, but these only work well with low-impedance sources (e.g. < 1 kΩ). The OPA827 is even lower noise, but can be tricky with its high input capacitance.

Having 8 mVpp of noise would be about 1.3 mV RMS. With a gain of 100 this corresponds to some 13 µVrms at the input. This is about the noise level expected for the MAX4239 (30 nV/√Hz and some 200 kHz BW).

An op amp behind the LTC1968 would not be really critical, if needed at all, so the LMV321 should be OK.
The LTC1968 might need an op amp to drive its input if there is a high-impedance source. DC-wise the converter is high impedance, but there is some input capacitance, so the BW would be severely limited with a 2 M source. So for an accurate measurement it needs a buffer/amplifier before the LTC1968.

Besides the price, there are reasons why the better meters usually have a 1 M divider (and not a higher impedance) for AC. One point is resistor noise (a 600 k resistor has a white noise level of 100 nV/√Hz, about 3 times that of the MAX4239). A lower impedance also makes it easier to have at least the 50/60 Hz range divider be determined mainly by the resistors, and thus more accurate.
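Those three numbers are easy to reproduce; a short sketch (Python), using Vpp ≈ 6 x Vrms for roughly Gaussian noise and the standard 4kTR thermal-noise formula:

[code]
import math

k_B, T = 1.38e-23, 300                    # Boltzmann constant, ~room temp

v_out_rms = 8e-3 / 6                      # 8 mVpp Gaussian noise -> ~1.3 mV RMS
print("output noise     :", v_out_rms)
print("input-referred   :", v_out_rms / 100)      # gain 100 -> ~13 uV RMS

e_n, bw = 30e-9, 200e3                    # MAX4239: ~30 nV/rtHz over ~200 kHz
print("expected from amp:", e_n * math.sqrt(bw))  # ~13.4 uV RMS

r = 600e3                                 # 600 k divider source resistance
print("600 k resistor   :", math.sqrt(4 * k_B * T * r), "V/rtHz")  # ~100 nV/rtHz
[/code]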
 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #31 on: August 07, 2018, 12:48:07 pm »
Thanks for the recommendation, I'll check out the opa140/1, and the opa827.

So, ya, I measured about 8 mVpp of noise on the outputs, but then I took the battery out and it was still doing it. It was about 6 mVpp, but it was about the same garbage.

Then I used the deeper memory function, and I found a 120 Hz spike of about twice the amplitude of the 8 mVpp noise that was varying between 400 kHz and 4 MHz. I took all of the wires off of the meters while connected to the uCurrent - still doing it. I connected the scope probe to its own ground: nothing there. I checked the noise from the probes, and it was about 2 mVpp. There is a 60 Hz signal in the air that is about 40 mVpp, and even higher when I touch the positive test lead. I'm sure that's just residual from the mains, but I'm not sure if it is doing something to it. Anyway, I'll do some more tests again soon.

I watched some of the EEVblog videos on the uCurrent, and the output of those across Dave's scope looks clean; the noise looked like maybe half of what I was seeing. IDK. I'll try again soon. I might have missed something.

Has anyone noticed that the TPS3809 chip is referenced backwards on the uCurrent?

The function of the chip works, but I measured the resistance between the Vcc and GND pins, and they are actually different resistances when reversing the measurement: 160k in one direction and about 98k in the other.

I wonder if that might be causing some of the problems.

First, I've got to find where this noise is coming from, even when the battery is taken out.

I'm guessing it might be from my scope.


 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #32 on: August 07, 2018, 01:44:44 pm »
So, I just watched the Murphy video of the uCurrent, and it looks like with ST's LMV321ILT IC, Dave's scope shows about 2.5 mVrms, which is basically 8 mVpp, so I guess it must be normal. I'm still going to take that voltage supervisor IC off and see if that does anything.

Anyways, so the resistor network seems to be a problem.

I will start by using some normal, non-precision resistors that I have lying around, to see if there would be too much noise.



 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #33 on: August 07, 2018, 06:25:30 pm »
What about an inverting amplifier with a gain of 0.001, 0.01, or 0.1?
The signal could then be routed to a non-inverting amplifier with a gain of 10, 100, or 1000.

That way the dividing network is limited to only one large resistor and three switched-in resistors for the inverting amplifier, all tied to the virtual ground when used, and the non-inverting amplifier's input signal could go straight to the input leg, thus limiting the capacitance of the divider and decreasing the resistor noise.

As long as the amplifier is unity-gain stable and has a high BW (300 MHz), this thing might actually work up to 150 kHz.

Does the stability of an opamp change when using it for attenuation?





 

Offline sourcecharge (Topic starter)

  • Regular Contributor
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependent?
« Reply #34 on: August 07, 2018, 08:03:47 pm »
 

Offline Kleinstein

  • Super Contributor
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependent?
« Reply #35 on: August 07, 2018, 08:14:00 pm »
An amplification smaller than 1 is not per se bad for stability.  Noise gain is still close to 1. The problem is more the extra capacitance at the inverting input from the switches, which can have a negative effect on stability.

The inverting amplifier is definitely an option - some DMMs use it successfully.

Another possible option not considered so far is having different switchable gain stages in series. It is quite common to use something like a switchable 1:100 divider at the input and then switchable gains like 1:10:100, and maybe another 0.1 from later stages. Having a single switch to choose from more than 3 settings can be tricky with the load capacitance.  If it is planned to be fast I would really look at scope input stages to learn about gain switching - it does not have to be that fast, but 100 kHz is no longer DC, especially at 1 MOhm.
One may also have to include over-voltage protection, as it also adds extra capacitance.

There is no real need for super fast OPs. Something like 10 MHz GBW OPs can be sufficient for stages with a gain of 10 or less.
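
To put a number on the noise-gain point above, here is a minimal sketch (Python, illustrative resistor values assumed):

Code: (Python)
# Noise gain of an inverting stage is 1 + Rf/Rin, independent of
# the signal gain -Rf/Rin, so an attenuator (Rf << Rin) still
# runs the op amp near its unity-gain stability point.
for atten in (0.001, 0.01, 0.1):
    rin = 1e6                    # illustrative input resistor, ohms
    rf = atten * rin             # feedback resistor sets |gain| = Rf/Rin
    print(f"|gain| = {atten}: noise gain = {1 + rf/rin:.3f}")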
 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #36 on: August 08, 2018, 04:59:30 am »
An amplification smaller than 1 is not per se bad for stability.  Noise gain is still close to 1. The problem is more the extra capacitance at the inverting input from the switches, which can have a negative effect on stability.

The inverting amplifier is definitely an option - some DMMs use it successfully.

Another possible option not considered so far is having different switchable gain stages in series. It is quite common to use something like a switchable 1:100 divider at the input and then switchable gains like 1:10:100, and maybe another 0.1 from later stages. Having a single switch to choose from more than 3 settings can be tricky with the load capacitance.  If it is planned to be fast I would really look at scope input stages to learn about gain switching - it does not have to be that fast, but 100 kHz is no longer DC, especially at 1 MOhm.
One may also have to include over-voltage protection, as it also adds extra capacitance.

There is no real need for super fast OPs. Something like 10 MHz GBW OPs can be sufficient for stages with a gain of 10 or less.

I'm having trouble visualizing your new option; could you give a schematic or clarify further?

The reason I was thinking of using an inverting amplifier as an attenuator was so that I could use a high input impedance.

This would limit the effect on the circuit being measured.

I was thinking 10 MOhm would be the standard.

What do you think?

The reason I was thinking of a 300 MHz BW for the op amps was that the two op amps would have to provide a gain of 1000.

I'm guessing that even at a gain of -1/1000, if it were stable, it would still correspond to the gain-of-1000 calculation.

So, 300 MHz / 1000 = 300 kHz

Is that right?  That way the BW is 2x the 150 kHz limit where the TRMS converter holds 0.1% error.
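
As a sanity check on that arithmetic (a sketch, assuming a simple single-pole op amp and the 300 MHz GBW figure): closed-loop bandwidth is roughly GBW divided by the noise gain, which for an inverting stage is 1 + |gain|:

Code: (Python)
# Closed-loop bandwidth estimate: BW ~ GBW / noise gain.
# Note an inverting attenuator (|gain| < 1) has noise gain near 1,
# so it keeps nearly the full GBW rather than GBW/1000.
gbw = 300e6                      # assumed 300 MHz gain-bandwidth product
for gain in (1000, 100, 10, 0.001):
    noise_gain = 1 + gain        # inverting stage: NG = 1 + |gain|
    print(f"|gain| = {gain}: BW ~ {gbw / noise_gain / 1e3:.0f} kHz")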

One thing I've noticed about the TRMS converter's datasheet is that the error looks flattest between 100 mVrms and 200 mVrms input.

Would it be overkill to design the divider networks to work within this range?

I was thinking that over-voltage protection is a luxury, just like the auto-ranging.  I think I would rather have accuracy than safety.  If something blows, it's going to be an op amp or the TRMS converter, and both are relatively inexpensive.  As long as the highly precise resistor networks are not damaged, I don't think I would include it in at least the bare-bones version of this thing.

Is there any way that you can think of, conventional or unconventional, to limit the capacitance that the switches would have?

Say, for instance, as an unconventional solution: what if the "switch" were to plug the correct resistor into a gold-contact socket?

Just spitballing.
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependant?
« Reply #37 on: August 08, 2018, 05:57:41 am »
The LTC1968 does not have provisions to compensate for background noise (I don't know if other analog solutions have that). So there is some limitation on how small signals can be measured. With a background of some 10 µV, there is not much sense in ranges for less than 1 mV full scale. Measuring voltages like 0.1 mV (or smaller) would be limited by the noise background more than from using the 1 mV range.

So the overall gain does not need to go much beyond 100. Having something like 200 mV full scale seems reasonable as it allows the usual crest factor of about 3-5.
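
As a quick headroom check (a sketch, with the 200 mV full scale assumed):

Code: (Python)
# Peak headroom: Vpeak = crest_factor * Vrms.  With 200 mV RMS full
# scale, a crest factor of 3-5 means peaks up to about 1 V, which
# the driving op-amp stages have to pass without clipping.
vrms_fullscale = 0.200           # assumed full-scale RMS input, volts
for crest in (3, 5):
    print(f"crest factor {crest}: peak = {crest * vrms_fullscale:.1f} V")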

For a reliable measurement it needs some indication of overflow, as the RMS reading alone is not enough to detect possible peaks that are too large. This part can be relatively simple - e.g. two comparators to check for positive and negative peaks that are too large, and some latching / stretching to make small peaks more visible.

Some over-voltage protection is essential, at least against ESD damages.

The inverting amplifier does not solve the noise problem - it actually has slightly higher noise. The main advantage is that a single stage could be used for many gain / attenuator settings - though with problems in accuracy at higher frequencies. So it is not that attractive for a good solution, more for a cheap one.
 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #38 on: August 08, 2018, 07:34:38 am »
The LTC1968 does not have provisions to compensate for background noise (I don't know if other analog solutions have that). So there is some limitation on how small signals can be measured. With a background of some 10 µV, there is not much sense in ranges for less than 1 mV full scale. Measuring voltages like 0.1 mV (or smaller) would be limited by the noise background more than from using the 1 mV range.

So the overall gain does not need to go much beyond 100. Having something like 200 mV full scale seems reasonable as it allows the usual crest factor of about 3-5.

For a reliable measurement it needs some indication of overflow, as the RMS reading alone is not enough to detect possible peaks that are too large. This part can be relatively simple - e.g. two comparators to check for positive and negative peaks that are too large, and some latching / stretching to make small peaks more visible.

Some over-voltage protection is essential, at least against ESD damages.

The inverting amplifier does not solve the noise problem - it actually has slightly higher noise. The main advantage is that a single stage could be used for many gain / attenuator settings - though with problems in accuracy at higher frequencies. So it is not that attractive for a good solution, more for a cheap one.

The input to the LTC1968, or to one of the op amps you recommended?

The input to the LTC1968 has to be within 50 mVrms to 500 mVrms from the op amps for it to stay within the 0.1% error up to 150 kHz.

right?

50 µVrms could be the lowest input of the first range into an op amp with a gain of 1000.

That would give a 50mVrms input to the LTC1968.

If 500 µVrms were the lowest, then the gain would only need to be 100, but it would only take another resistor for the op amp to measure down to 70 µVrms.

Regarding the LTC1968: if you look at the graph marked "Linearity Performance" in the datasheet, and at the three captured graphs in the extra pic, the error is flattest around zero between a 100 mVrms and 200 mVrms input.

What I meant was that the number of range resistors could be increased to keep the input signal into the LTC1968 within that range, and then, instead of using only a buffer on the output, use the same gain/attenuation to output a corresponding range that is easily read on the meter's DC volts.

What would you use for the ESD protection, and how would it affect the overall accuracy?

Regarding the noise, are you talking about the op amp output noise, the resistor noise, or the parasitic capacitance?

I was hoping for an op amp with high BW, low noise, low offset, and stable from unity gain up to a gain of 1000 at 150-300 kHz.

But looking at the MAX4238/9, the graphs show the gain does not hold the advertised maximum all the way up to the BW, and the phase shift is a problem too: the inverting output is 180 degrees from the input, so if the phase changes there is going to be error.  The phase shift seems to be frequency dependent.

As for the resistor noise, I was thinking the op amp would take care of that because of the virtual ground.

The resistor capacitance is resistor dependent.  One resistor is not going to have the same capacitance as another of the same resistance.  But a range of resistors made by the same manufacturer, all from the same product line, may have a consistent ratio of capacitance to resistance.

The high-impedance input resistor will have the least parallel capacitance (the higher the resistance, the lower the capacitance), and that capacitance is in series with the second parasitic capacitance at the inverting op amp attenuator.  The total capacitance would therefore be less than the smaller of the two, which would suggest that the higher the resistance of the first resistor, the lower the total parasitic capacitance.  Is this right?  Am I understanding this correctly?

Or does the virtual ground of the op amp see the two parasitic capacitances as if they were in parallel?

 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #39 on: August 08, 2018, 10:15:17 am »
Just did some simulations, and they are in parallel.

It really increases the error quite a bit.

hmmm
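
A quick way to see why the simulator reports parallel: every stray capacitance from the summing node (virtual ground) to AC ground hangs on the same node, so the values simply add. A minimal sketch with made-up stray values:

Code: (Python)
# Capacitances from the inverting-input node to AC ground add in
# parallel; together with the source/divider resistance they form
# a pole that eats into the frequency response.
import math

c_strays = [1e-12, 0.5e-12, 3e-12]   # illustrative strays, farads
c_node = sum(c_strays)               # parallel capacitances simply add
r_in = 1e6                           # illustrative divider resistance, ohms
f_pole = 1 / (2 * math.pi * r_in * c_node)
print(f"node C = {c_node*1e12:.1f} pF, pole ~ {f_pole/1e3:.0f} kHz")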
 

Offline Richard Crowley

  • Super Contributor
  • ***
  • Posts: 4317
  • Country: us
  • KJ7YLK
Re: Why are V(rms) measurements frequency dependant?
« Reply #40 on: August 08, 2018, 10:31:33 am »
 

Offline Alex Nikitin

  • Super Contributor
  • ***
  • Posts: 1166
  • Country: gb
  • Femtoampnut and Tapehead.
    • A.N.T. Audio
Re: Why are V(rms) measurements frequency dependant?
« Reply #41 on: August 08, 2018, 11:18:54 am »
Just did some simulations, and they are in parallel.

It really increases the error quite a bit.

hmmm

I will ask just one question re. the DIY route - how would you calibrate your converter, i.e. how could you be sure that the result is correct to your required accuracy, unless you can check it over the needed level and frequency ranges with either a calibrator, or with a generator and a calibrated meter with several times better accuracy than you are trying to achieve?

Cheers

Alex
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 16607
  • Country: us
  • DavidH
Re: Why are V(rms) measurements frequency dependant?
« Reply #42 on: August 08, 2018, 02:10:26 pm »
Once you start worrying about high impedance dividers and accurate AC division, you have entered oscilloscope design territory so there may be some lessons there. 

High impedance dividers are a real problem for low frequency noise at low division ratios where noise matters more.  If your input voltage range is not too great and you must have the lowest noise, consider a bootstrapped input buffer so that no high impedance input divider is needed.

If you have unexplained errors in your properly compensated high impedance divider, consider hook from the printed circuit board as an explanation.  If you do not have any way to control this, consider air wiring the high impedance dividers.

Error in the amplifier increases as open loop gain falls with frequency; 40dB of excess gain is needed for an error of 1% and 60dB for an error of 0.1%.  So for a 0.1% error at 100kHz and gain of 10, the open loop gain needs to be 80dB and the gain-bandwidth product needs to be 1GHz for a typical unity gain compensated amplifier.  (1) 0.1% is a little less than 0.01dB so calibration is going to be a problem and I think the high impedance attenuators are going to create greater errors than this anyway as described above.
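
That relationship can be checked numerically. A sketch of the excess-gain arithmetic, assuming a single-pole open-loop roll-off:

Code: (Python)
# Closed-loop gain error ~ 1 / loop gain, where loop gain is the
# open-loop gain divided by the closed-loop (noise) gain.  0.1%
# error therefore needs ~60 dB of excess gain at the signal frequency.
import math

def required_gbw(closed_loop_gain, error, f):
    loop_gain = 1 / error                  # 1000 for 0.1% error
    aol = loop_gain * closed_loop_gain     # open-loop gain needed at f
    print(f"Aol at {f/1e3:.0f} kHz: {20*math.log10(aol):.0f} dB")
    return aol * f                         # single-pole roll-off assumed

gbw = required_gbw(closed_loop_gain=10, error=0.001, f=100e3)
print(f"GBW ~ {gbw/1e9:.0f} GHz")          # ~1 GHz, matching the figures above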

Common unity gain compensated amplifiers have limited open loop gain at higher frequencies increasing error.  Consider decompensated amplifiers or adding a fast fixed voltage gain stage (2) within the feedback loop of a precision gain stage.  As a bonus, either of these things also raises the full power bandwidth.  High precision voltmeters may do either or both of these things even at the low frequencies they operate at.  Some audio designs go to extreme lengths to minimize distortion.

(1) I have not looked at this in a long time but I think I got it right and the results are consistent with the design and specifications of precision wideband instruments.

(2) Current feedback operational amplifiers are good for this as shown in the application note I linked and they also unload the precision amplifier.  Thermal feedback within an operational amplifier limits low frequency open loop gain limiting precision.
 
The following users thanked this post: sourcecharge

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #43 on: August 08, 2018, 03:08:45 pm »
I asked essentially the same question here:
https://www.eevblog.com/forum/testgear/frequency-response-121046/msg1654637/#msg1654637

I checked that page out and yeah, I was also looking for a meter that would be acceptable.  The only one I found so far with ±0.2% of reading + 0.05% of range was that Siglent, but they want 500 bucks!   :--

I'm trying to do this under $150 if possible.  I'm not sure a used meter would be within the manufacturer's specifications, since the voltage reference is only guaranteed for about a year.

Thanks for the suggestion.

Just did some simulations, and they are in parallel.

It really increases the error quite a bit.

hmmm

I will ask just one question re. the DIY route - how would you calibrate your converter, i.e. how could you be sure that the result is correct to your required accuracy, unless you can check it over the needed level and frequency ranges with either a calibrator, or with a generator and a calibrated meter with several times better accuracy than you are trying to achieve?

Cheers

Alex

I have a 10V TI reference IC coming and I was going to calibrate my meters to it.  I'm hoping the Vrms on all three will then match up too.

Then when I have a 10 V reference, I was going to check where that stood on my scope in reference to the probes, DC.

Then, I was going to use a 20 Vpp 60 Hz signal in reference to the 10 V, and check that against my scope. 

I really wish I knew how to change the actual voltage reference point on the scope in relation to the divisions. 

The old analog scopes used to allow you to do that. 

Oh well, I will simply have to observe the peak value on the scope and check it against the 10V Ref. 

Then add or subtract the difference to any measurement, which sucks.

Anyways, I took that voltage supervisor chip off of the uCurrent and the noise was still there.

I did measure the correct Vrms on the meters from the uCurrent.  The scope's ground was feeding the meters 2 mVrms, probably from the probe noise.

So I was able to successfully measure 4.8 mArms (4.8 mVrms) at 60 Hz, and found that the noise can be cancelled out by putting the scope in average mode, increasing the memory depth, and setting the trigger to video. 

The signal came out pretty clean.

After that I increased the frequency up to 100 kHz, and the peaks were the same on the scope, but the Vrms on both the scope and the meters was registering only about 3.6 mVrms. 

Edit: 100 kHz

So the scope seems like a fair way of measuring the Vp and reverse calculating, but I'm not sure how accurate that is, even if I were using a single-polarity measurement of the Vp across all 8 divisions.
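
For reference, reverse-calculating RMS from a peak reading only works if the waveform shape is known; a minimal sketch for the sine case:

Code: (Python)
# For a pure sine, Vrms = Vp / sqrt(2); other shapes scale differently
# (square: Vrms = Vp, triangle: Vrms = Vp / sqrt(3)), which is why a
# peak measurement alone cannot give a true RMS for arbitrary signals.
import math

vp = 10.0                        # illustrative measured peak, volts
print(f"sine: Vrms = {vp / math.sqrt(2):.3f} V")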

Even after all of that, I wouldn't actually be able to measure an error of 0.1% unless I had some kind of reference that was 10x more accurate than this DIY try.

I did do some measurements of some large resistors, and found that one of my 10 MOhm 1/4 W 5% resistors has about 1 pF, and a 1.6 MOhm 1/4 W 10% resistor has about 0.5 pF.

That's kind of the opposite of what I was thinking about the ratio of capacitance to resistance.

Any divider network would need some type of capacitor network, like @Kleinstein was saying.

I took a look at the capacitors at Digikey, and they only have 1% tolerance in through-hole types.

That's too high if 0.1% is supposed to be the total error.  :-// You got any ideas?

sooo...

Anyone know how to calibrate the DC voltage measurement of a Mastech MS8040 ?

@med6753 on that tear down thread said they were able to, but never actually said how it was done.

https://www.eevblog.com/forum/testgear/part-2-teardownevaluation-mastech-ms8040-bench-dmm/msg1089068/#msg1089068

Once you start worrying about high impedance dividers and accurate AC division, you have entered oscilloscope design territory so there may be some lessons there. 

High impedance dividers are a real problem for low frequency noise at low division ratios where noise matters more.  If your input voltage range is not too great and you must have the lowest noise, consider a bootstrapped input buffer so that no high impedance input divider is needed.

If you have unexplained errors in your properly compensated high impedance divider, consider hook from the printed circuit board as an explanation.  If you do not have any way to control this, consider air wiring the high impedance dividers.

Error in the amplifier increases as open loop gain falls with frequency; 40dB of excess gain is needed for an error of 1% and 60dB for an error of 0.1%.  So for a 0.1% error at 100kHz and gain of 10, the open loop gain needs to be 80dB and the gain-bandwidth product needs to be 1GHz for a typical unity gain compensated amplifier.  (1) 0.1% is a little less than 0.01dB so calibration is going to be a problem and I think the high impedance attenuators are going to create greater errors than this anyway as described above.

Common unity gain compensated amplifiers have limited open loop gain at higher frequencies increasing error.  Consider decompensated amplifiers or adding a fast fixed voltage gain stage (2) within the feedback loop of a precision gain stage.  As a bonus, either of these things also raises the full power bandwidth.  High precision voltmeters may do either or both of these things even at the low frequencies they operate at.  Some audio designs go to extreme lengths to minimize distortion.

(1) I have not looked at this in a long time but I think I got it right and the results are consistent with the design and specifications of precision wideband instruments.

(2) Current feedback operational amplifiers are good for this as shown in the application note I linked and they also unload the precision amplifier.  Thermal feedback within an operational amplifier limits low frequency open loop gain limiting precision.


That makes sense about the scope dividers; old scopes used variable trimmer resistors and caps.

I don't think the noise is going to be a problem anymore after seeing the results of today's tests; the input to the LTC1968 would be either between 50 mVrms and 500 mVrms, or between 100 mVrms and 200 mVrms.

I really don't like bootstrapping; it limits the frequency range, doesn't it?

The phase-vs-frequency errors of the inverting op amp would be significant, but after measuring the resistors today, it would require balancing capacitors either way. 

I'm starting to remember something like this: I actually made a 1000x scope probe once to measure 100 kV, and it worked well at 5 kHz.  I'm not sure if it still works, but I still have it.

Anyways, because the capacitors are required, the first idea with just the 2 or 3 gain op amps and the resistor/capacitor divider network is probably the cheapest, and it could use some variable resistors and caps to offset the tolerances.  Maybe also pot the fixed resistors and caps in corona dope.

That's not really the best solution, because those things never stay put.  Basically, as a rule of thumb for the "add-on", the divider network would have to be calibrated every time you use it, just to be sure the resistors and caps didn't change.

I'm not sure I'm using an open loop gain OP.

I thought the gain was caused by positive feedback with the non inverting OP, and negative feedback with the inverting OP for unity gain, and attenuation.

Would you provide a rough schematic of what you are talking about?

Thanks.
« Last Edit: August 08, 2018, 03:22:05 pm by sourcecharge »
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependant?
« Reply #44 on: August 08, 2018, 03:39:27 pm »
The precision AC divider would normally use fixed precision resistors or a resistor array. So the DC division and the low frequency performance are set by the resistors. With reasonably good resistors no adjustment is needed. For the caps, it is very difficult to know the parasitic capacitance, and it can change (e.g. if the humidity in the board changes). So the capacitive part usually needs variable capacitors to do the adjustment.  If the circuit is reasonably simple the adjustment can be similar to scope probes: use a good quality square signal and adjust the edges.
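
The adjustment criterion can be shown with a quick sketch: a two-element divider is flat over frequency exactly when the RC time constants of the two legs match (R1·C1 = R2·C2), which is what the square-wave edge adjustment converges to. Illustrative values assumed:

Code: (Python)
# Compensated divider: flat response requires R1*C1 = R2*C2
# (the classic 10x scope-probe condition).
import math

r1, r2 = 9e6, 1e6                # illustrative 10:1 divider, ohms
c2 = 20e-12                      # stray capacitance across bottom leg
c1 = r2 * c2 / r1                # trim C1 until time constants match
print(f"compensating C1 ~ {c1*1e12:.2f} pF")

def divider_ratio(f):
    # Complex ratio Z2 / (Z1 + Z2) at frequency f
    z1 = r1 / (1 + 2j * math.pi * f * r1 * c1)
    z2 = r2 / (1 + 2j * math.pi * f * r2 * c2)
    return abs(z2 / (z1 + z2))

for f in (100, 10e3, 1e6):       # stays at 0.1000 once compensated
    print(f"{f:>9.0f} Hz: ratio = {divider_ratio(f):.4f}")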

The calibration is really a difficult point for a DIY solution.  The best instrument for higher BW AC is often the DSO - with some luck, in the 1% range.

A DC reference is of limited use for an AC meter. At best it could be used with a square wave test signal.
 

Offline Alex Nikitin

  • Super Contributor
  • ***
  • Posts: 1166
  • Country: gb
  • Femtoampnut and Tapehead.
    • A.N.T. Audio
Re: Why are V(rms) measurements frequency dependant?
« Reply #45 on: August 08, 2018, 03:42:30 pm »
Hmm, unless you really want to use a lot of time and money for purely educational purposes, I see no chance that you can reach your goals the way you've described. I would advise you to buy a good old meter (say, HP3456A has a pretty decent accuracy on AC and you can get it for under $200).

Cheers

Alex
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Why are V(rms) measurements frequency dependant?
« Reply #46 on: August 08, 2018, 04:05:05 pm »
So, just checked the pricing at Digikey, and the only resistors with a tolerance of 0.1% or better were through-hole types, and they were pricey.

Total cost of just the resistor network before tax and shipping, 196 bucks.
 :--
That's because you've used weird values. 9×10^n is not a common resistor value, so it will be expensive, especially in 0.1% tolerance or better.

You would have more luck if you used standard E24 or E96 values. If you divide all of the precision resistor values in that circuit by 5, it would give you much more widely available resistor values.

Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.

Anybody got a syringe I can use to squeeze the magic smoke back into this?
 
The following users thanked this post: sourcecharge

Online David Hess

  • Super Contributor
  • ***
  • Posts: 16607
  • Country: us
  • DavidH
Re: Why are V(rms) measurements frequency dependant?
« Reply #47 on: August 08, 2018, 08:22:31 pm »
That makes sence about the scope dividers, and old scopes used variable trimmer resistors and caps.

I am not sure how newer oscilloscopes avoid this, and some do not.  But newer oscilloscopes tend to have only one switchable high impedance attenuator, which simplifies things.  They pay for this by having to support a much higher input dynamic range.

Quote
I don't think the noise is going to be a problem anymore after seeing the results of today's tests, the input of the LTC1968 would be either between 50mVrms and 500mVrms, or 100mVrms and 200mVrms.

Probably not since the RMS converter has a relatively limited dynamic range.

Quote
I really don't like bootstrapping, it limits the frequency range doesn't it?

Yes, but not within the frequency range you are considering.  Bootstrapping is certainly feasible to 1 MHz and above but it involves a lot of extra complexity so should be avoided unless it is required to remove the need for problematical high impedance attenuators.

Quote
The errors in phase vs frequency of the inverting OP, would have significance, but after measuring the resistors today, it would require balancing capacitors.

The high impedance attenuators will always require balancing capacitors.  But even your low impedance attenuators may require balancing capacitors in a high precision design.  Oscilloscope vertical amplifiers include trims for AC performance in the low impedance stages and you may need to do the same thing.

It may not matter here but usually inverting amplifiers are used to avoid common mode input range and common mode rejection ratio limitations.  I do not think that will be an issue for you.  Note that the feedback network of the operational amplifiers has all of the same issues as a low impedance divider.

Quote
Anyways, because the capacitors are required, the first idea with just the 2 or 3 Gain OPs and the resistor/capacitor divider network is probably the cheapest, and it could use some variable resistors and caps to offset the tolerances.  Maybe also pot the fixed resistors and caps in corona dope.

Just be careful that any coating or potting does not create hook errors as described in the Tektronix article I linked.

Quote
That's not really a best solution because those things never stay put.  Basically, as a rule of thumb for the "add-on" the divider network would have to be calibrated every time you use it just to be sure the resistors and caps didn't change.

0.1% precision over time and temperature is feasible but maybe not for a discrete divider.  Hybrid construction can use laser trimmed resistors and capacitors which track each other.  A discrete design would require some experimentation to find capacitors and trimmer capacitors which track.

Quote
I'm not sure I'm using an open loop gain OP.

I thought the gain was caused by positive feedback with the non inverting OP, and negative feedback with the inverting OP for unity gain, and attenuation.

Would you provide a rough schematic of what you are talking about?

You already drew it; I considered the design you showed with 3 cascaded x10 non-inverting amplifiers.  Achieving 0.1% flatness to 100kHz with a x10 gain stage is not trivial, and it is where the open loop gain falling with frequency becomes a serious problem.  There is an example of a similar cascaded amplifier in the Linear Technology application note that I linked.
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14181
  • Country: de
Re: Why are V(rms) measurements frequency dependant?
« Reply #48 on: August 08, 2018, 09:01:31 pm »
The scope inputs are quite a bit more critical when it comes to frequency response. For the RMS measurement a pure phase shift does not matter -  for a scope, phase shifts are usually the bigger problem, as they often start to show up well below the transition frequency.  One could still use a similar construction.

I am not sure how they get away with fewer adjustable caps for the dividers. One part can be using ready-made programmable divider chips and a well reproducible layout, so that fixed caps can be used (measured, tested with prototypes). It also helps that SMT parts usually have lower parasitic capacitance.  AFAIK modern scopes often have only the coarse divider before the amplifier, with the fine steps behind the amplifier at lower impedance.

Noise is not a big problem at the input of the LTC1968 with some 50 mV-500 mV amplitude. However, it still limits the usefulness of lower ranges with more amplification. Unless the input is very low noise (or there is an extra bandwidth limit), there is little use for a gain higher than 100.
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Why are V(rms) measurements frequency dependant?
« Reply #49 on: August 08, 2018, 10:38:33 pm »
The application note from Analog / Linear by Jim Williams, "Instrumentation Circuitry Using RMS-to-DC Converters", covers the LTC1967/LTC1968/LTC1969 RMS-to-DC converters and gives very useful insight into how to apply these devices for high precision measurements from 10 µV upwards. The application note also provides a good overview of what to expect from these devices, and covers the 1000X preamplifier design.
http://www.analog.com/media/en/technical-documentation/application-notes/an106f.pdf
 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #50 on: August 08, 2018, 10:45:36 pm »
Hmm, unless you really want to use a lot of time and money for purely educational purposes, I see no chance that you can reach your goals the way you've described. I would advise you to buy a good old meter (say, HP3456A has a pretty decent accuracy on AC and you can get it for under $200).

Cheers

Alex

I think I have figured out how to calibrate to the 0.1%.

I have a 10V ref coming that will be able to reference a 10x probe, a FG, and 3 meters.

This 10 V ref has a 2.75 mVpp max error over its temp range, though the datasheet says it's really only about 1 mV at 23 degrees; but let's assume 2.75 mVpp.

That gives a voltage reference error of 0.0275%.

The meters have a ±0.05% + 30 count accuracy on DC for all ranges.

The FG can output DC with a range up to 10.000 V.  The 10x probes can offset the 10 V ref signal by -10 V to put the probe's 10 V output at the scope's vertical center while the scope is at 500 mV/div.  This sounds strange, but between the 8 vertical divisions there are 5 ticks per div, and between the ticks the voltage-measurement cursor moves 5 pixels per tick.   :palm:  OK, stay with me......so each pixel represents 10 mV

(500mV/div) x (1 div/5 ticks) x (1 tick/5 pixels) = 10mV / pixel

That gives a resolution down to 10.01 V, which is exactly 0.1%

Here's the catch: I just measured an un-calibrated 10 V in the process that I described.  The FG had 9.970 V out while the probe's output was centered on the scope's 10 V.  Two of three meters measured 9.970 V from the FG.  Most scopes have a 1% error, so that represents 100 mV.  The scope is 30 mV high, and that falls within the error.

The next step is to check the FG's 60 Hz, 10 Vp AC output to see whether the top of the sine wave sits at exactly the same peak value.

If so, then the 10 Vp 60 Hz signal from the FG that was calibrated to the 10 V ref would then be calibrated to the scope with a 30 mV offset in amplitude. 

This represents the "input" calibration.  This calibration will measure the input signal through the divider network against the output, and can serve as a reference for all other points with respect to the 3 meters.

The output calibration would be about the same, except I would have to get a 0.4 V ref (one with a 0.75% error I found at Digikey) and calibrate it to everything again, except now using the 1x probe.  At 0.75% error, the 0.4 V ref's error is really only about 3 mV, and the datasheet shows it to be much more stable at room temp on the initial reading, more like 1 mV, but let's assume 3 mV. The scope would have to be set at 2 mV/div, with the probe's output from the 0.4 V ref offset on the scope so that it is centered vertically.  Each pixel is now 0.08 mV/pixel. This would effectively give an error reading of 0.02% on a 400 mV DC output from the RMS-to-DC converter.  The error of the 0.4 V ref will be obvious on the scope because the FG and the meters were already calibrated with the 10 V ref at ±0.0275%.  I would have to calculate the scope's vertical offset, record that, and adjust it against the output of the RMS-to-DC converter.

Since the output of the LTC1968 is between 50 mV and 500 mV, this calibration would directly allow an even tighter tolerance than what the LTC1968 can achieve.

This would represent the "output" calibration, and it would also serve to measure the small signal inputs to the op amps.

Let me know what you think about the iffy pixel theory.

I'm not sure if that really could do the trick.

So, just checked the pricing at Digikey, and the only resistors with a tolerance of 0.1% or better were through-hole types, and they were pricey.

Total cost of just the resistor network before tax and shipping, 196 bucks.
 :--
That's because you've used weird values. 9×10^n is not a common resistor value, so it will be expensive, especially in 0.1% tolerance or better.

You would have more luck if you used standard E24 or E96 values. If you divide all of the precision resistor values in that circuit by 5, it would give you much more widely available resistor values.

Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.



Thanks, those seem great... :-+

The precision AC divider would normally use fixed precision resistors or a resistor array. So the DC division and the low frequency performance are set by the resistors. With reasonably good resistors no adjustment is needed. For the caps, it is very difficult to know the parasitic capacitance, and it can change (e.g. if the humidity in the board changes). So the capacitive part usually needs variable capacitors to do the adjustment.  If the circuit is reasonably simple the adjustment can be similar to scope probes: use a good quality square signal and adjust the edges.

The calibration is really a difficult point for a DIY solution.  The best instrument for higher BW AC is often the DSO - with some luck, in the 1% range.

A DC reference is of limited use for an AC meter. At best it could be used with a square wave test signal.

I think the LTC1968 is only specified for sine waves; they show some error for square waves.

I think I have solved the calibration problem, but I still need to solve the capacitance network problem.  I really wish there was a better, easier way.


The application note from Analog / Linear by Jim Williams, "Instrumentation Circuitry Using RMS-to-DC Converters", covers the LTC1967/LTC1968/LTC1969 RMS-to-DC converters and gives very useful insight into how to apply these devices for high precision measurements from 10 µV upwards. The application note also provides a good overview of what to expect from these devices, and covers the 1000X preamplifier design.
http://www.analog.com/media/en/technical-documentation/application-notes/an106f.pdf

Thanks I'll read that over.
 

Offline Alex Nikitin

  • Super Contributor
  • ***
  • Posts: 1166
  • Country: gb
  • Femtoampnut and Tapehead.
    • A.N.T. Audio
Re: Why are V(rms) measurements frequency dependant?
« Reply #51 on: August 09, 2018, 07:25:40 am »
Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.

There is also a Caddock version of such a divider: the 1776-C6815 has a 0.05% ratio tolerance and a 5 ppm/°C ratio tempco (unlike the Vishay part, that is the maximum tempco); it is somewhat cheaper and available from stock at Mouser.

Cheers

Alex
« Last Edit: August 09, 2018, 07:29:24 am by Alex Nikitin »
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Why are V(rms) measurements frequency dependant?
« Reply #52 on: August 09, 2018, 03:46:01 pm »
Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.

There is also a Caddock version of such a divider: the 1776-C6815 has a 0.05% ratio tolerance and a 5 ppm/°C ratio tempco (unlike the Vishay part, that is the maximum tempco); it is somewhat cheaper and available from stock at Mouser.

Cheers

Alex

To be clear, Vishay specify a typical tracking tempco of <2.5 ppm and Caddock specify a maximum of 5 ppm. I suspect that means the actual underlying specs are the same - Vishay specify their resistance material as nichrome, Caddock as Tetrinox™ (which for all we know, and all I could discover, could be a fanciful name for nichrome). The Caddock spec is for the wider industrial temperature range, Vishay's for the commercial temperature range.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #53 on: August 09, 2018, 11:49:35 pm »
I messed up on my calculation: (500/5)/5 does not equal 10, it's 20.  That means the initial calibration is only good to 0.2%, not 0.1%.

This remains a problem, as does the capacitor network.
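
The corrected screen arithmetic, as a quick sketch:

Code: (Python)
# mV per pixel at 500 mV/div with 5 ticks/div and 5 pixels/tick,
# and the resulting resolution relative to a 10 V measurement.
v_per_div = 0.500                # volts per division
v_per_pixel = v_per_div / 5 / 5  # = 20 mV, not 10 mV
print(f"{v_per_pixel*1e3:.0f} mV/pixel")
print(f"resolution = {v_per_pixel/10*100:.1f} % of 10 V")   # 0.2 %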

 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #54 on: August 13, 2018, 12:10:45 am »
I messed up on my calculation: (500/5)/5 does not equal 10, it's 20.  That means the initial calibration is only good to 0.2%, not 0.1%.

This remains a problem, as does the capacitor network.

So, I've not fully given up on this idea yet, and I have been able to pin down an AC 20 Vpp and a DC 10 V measurement with 20 mV agreement between the scope and the FG.

The FG DC output was compared to the REF102 10 V reference IC with the 0.0275% max error over its temp range.  I have been able to measure within 0.1 mA on a 4 kHz measurement of a 20 Vpp sine driven through a 1.5 kOhm load, using the uCurrent.  Taking a conservative visual observation of the peak voltage, the measurement was 6.56 mV when it should have been 6.65 mV. The noise was just a bit too much to overcome with only 128-sample averaging on the scope.  This represents a 1.37% error, but the measurement was AC, and it was at 4 kHz.  That means it's already more accurate than my current meter at 4 kHz.  What I would like to figure out is how to increase the bias level of the channels.  If the bias level of the channels could be increased, then the V/div could be decreased from 500 mV/div to 200 mV/div or 100 mV/div, so that each pixel represents 8 mV or 4 mV respectively.

I've seen some interesting hacks for different DSOs, is this even possible?
 

