Electronics > Beginners
Why are V(rms) measurements frequency dependent?
Kleinstein:
The LTC1968 looks like a good one. I don't see a need for a voltage reference here: a reference would only be needed to measure the DC output, and that job is done by a normal DMM in voltage mode. The LTC1968 just needs a reasonably regulated supply.

There is no absolute need for auto-ranging here; it could be done by hand if there are suitable indications. The relevant numbers are the peak voltages, so one should have some extra circuitry to check them. As a minimum this would be something like two comparators to check the upper limits, and then manual adjustment by trial and error (use the smallest range that does not flag an error on the peak values).
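The ranging rule described above (use the smallest range whose peak limit is not exceeded) can be sketched in a few lines; the range values below are made up purely for illustration:

```python
def pick_range(peak_v, ranges):
    """Smallest full-scale range that the measured peak does not exceed.

    `ranges` is an ascending list of full-scale peak voltages; returns
    None if even the largest range would clip.
    """
    for fs in ranges:
        if peak_v <= fs:
            return fs
    return None

# Hypothetical 0.1 V / 1 V / 10 V peak ranges:
print(pick_range(0.8, [0.1, 1.0, 10.0]))   # -> 1.0
```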

The actual gain setting can be quite tricky at higher bandwidth if it needs to be really accurate. The divider would not be purely resistive; it also needs adjustable parallel capacitance (a little like the compensation in scope probes). Electronic switches also have limited isolation when off.
Zero999:
I've just quickly read through the datasheet for the LTC1968. The reference is the common voltage for the AC waveform: the IC measures the difference between the voltages on its inputs. At least one input must be DC coupled to a steady voltage between the supply rails. In a single-supply application, connect one pin to a potential divider with a bypass capacitor to 0V, and the other input to the signal source via a capacitor. See page 12.
http://www.analog.com/media/en/technical-documentation/data-sheets/1968f.pdf

The output of the LTC1968 is high impedance and needs a buffer amplifier before going to the DVM. A decent op-amp with low offset, high input impedance and low bias current is required for the buffer.
Kalvin:

--- Quote from: Hero999 on August 05, 2018, 12:53:48 pm ---
--- Quote from: Kalvin on August 05, 2018, 09:47:20 am ---The analog RMS converter used in the Fluke 8842A has an accuracy of around 0.5% or worse (from the Fluke 8842A manual). Similar accuracy can be achieved with an analog RMS converter from Analog Devices up to a few kHz, and with an additional 1% error up to 200 kHz given sufficient input signal level. The dynamic range of an analog RMS converter is around 60 dB (1:1000), so one will not get very many digits of accuracy, although the resolution might be a digit or so more. To get higher accuracy one could use the LTC1968: up to 150 kHz and 0.1% accuracy. RMS converters are quite sensitive to the input voltage, which means that the [autoranging] input circuitry needs to track the input voltage so that the RMS converter sees its optimal input level with a sufficient crest factor margin.

Edit: The AD8436 looks pretty good.

--- End quote ---
Don't some meters do the RMS calculation digitally? That might be more accurate, but it could also use more energy than doing it the analogue way.

--- End quote ---
The RMS can be calculated digitally if you have a fast enough ADC, at the cost of more energy than the analog solution because of the ADC and the DSP implementation. For a signal with 150 kHz bandwidth, one has to sample at at least 300 kHz; in practice somewhat faster, say 500 kilosamples/second. For a 1 MHz signal one should probably sample at 3 Ms/s.

In order to get a high signal dynamic range with sufficient room for the crest factor, the ADC has to have as many bits as possible: say 16 bits, with 3-4 bits reserved for the crest factor (i.e. for the peak values of the signal relative to its RMS, https://en.wikipedia.org/wiki/Crest_factor), leaving 12-13 bits for the RMS computation.
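As a rough sketch of this bit budget (the numbers are illustrative, not from any datasheet): reserving headroom for a crest factor of C costs log2(C) bits of the ADC's range.

```python
import math

def rms_bits(adc_bits: int, crest_factor: float) -> float:
    """Bits left for the RMS value after reserving peak headroom.

    A crest factor of C means peaks are C times the RMS, so the RMS
    level must sit log2(C) bits below full scale to avoid clipping.
    """
    return adc_bits - math.log2(crest_factor)

# 16-bit ADC, crest factor 10 -> about 3.3 bits of headroom
print(round(rms_bits(16, 10), 1))   # -> 12.7
```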

At low signal levels the resolution will suffer due to quantization. To compensate for the quantization effects, one may need to either increase the sample rate through oversampling or increase the number of ADC bits from 16 to 20-24, for example, which increases the cost of the ADC. Alternatively, one may keep the input signal level as high as possible without clipping (autoranging or manual ranging) to get as many significant bits as possible for the best accuracy and resolution.
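Assuming the quantization noise is white (which only holds approximately for real signals), oversampling and averaging buys about half a bit of effective resolution per doubling of the sample rate:

```python
import math

def oversampling_gain_bits(osr: float) -> float:
    """Extra effective bits from oversampling by a factor `osr`,
    under the white-quantization-noise assumption: 0.5 bit per doubling."""
    return 0.5 * math.log2(osr)

# Oversampling by 256x would add about 4 effective bits.
print(oversampling_gain_bits(256))   # -> 4.0
```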

After sampling, the computation is quite straightforward, requiring some DSP. There are nice algorithms available for computing the RMS: https://www.embedded.com/design/configurable-systems/4006520/Improve-your-root-mean-calculations
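The basic computation is just the square root of the mean of the squared samples; a minimal block-based version (not the optimized running algorithm from the linked article) might look like:

```python
import math

def rms(samples):
    """Root mean square of a block of samples."""
    acc = 0.0
    n = 0
    for x in samples:
        acc += x * x
        n += 1
    return math.sqrt(acc / n)

# Full-scale sine sampled over whole periods -> RMS of 1/sqrt(2)
sine = [math.sin(2 * math.pi * k / 100) for k in range(1000)]
print(round(rms(sine), 4))   # -> 0.7071
```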

My guesstimate is that about 3.75 digits of resolution is the practical limit one can achieve with a signal bandwidth of > 100 kHz, a typical 16-bit ADC, and an optimal signal level with a crest factor of 10. One could probably gain one extra digit with a state-of-the-art, fast 24-bit ADC. Better estimates of resolution/accuracy and of the effects of different signal levels can be obtained by simulating and numerically analyzing the quantized signals.
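As a starting point for such a simulation, one can quantize an ideal sine to 16 bits with crest-factor-10 headroom and compare the RMS before and after. This is an idealized sketch (a perfect mid-tread quantizer, no ADC noise or nonlinearity), so it only bounds the quantization contribution:

```python
import math

def quantize(x, bits, full_scale=1.0):
    """Round x to the nearest level of an ideal mid-tread quantizer."""
    step = 2 * full_scale / (2 ** bits)
    return round(x / step) * step

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Sine with its RMS at 1/10 of full scale (crest-factor-10 headroom).
amp = math.sqrt(2) / 10
sine = [amp * math.sin(2 * math.pi * k / 997) for k in range(99700)]
ideal = rms(sine)
measured = rms([quantize(x, 16) for x in sine])
print(abs(measured - ideal) / ideal)   # relative RMS error; tiny here
```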
JS:



--- Quote from: Kalvin on August 05, 2018, 06:21:57 pm ---The RMS can be calculated digitally if you have a fast enough ADC [...]

--- End quote ---

No need for that much ADC and DSP: you can do oversampling and decimation after rectification, so the resolution and frequency response can be much better than what you said. You could sample at 1 kHz and still get a response up into the MHz range if the sampling aperture is short enough (the ADC's analog frequency response is the limit, not the sampling frequency), and after averaging you get resolution below one LSB, which is useful if the ADC linearity is good enough, without the data and computation burden of a bigger ADC. Using a pseudorandom sampling frequency gives a better frequency response, minimizing the comb filter at multiples of the sampling frequency.
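A quick numerical sanity check of this idea (illustrative only: an ideal zero-aperture sampler, a 1 MHz sine, and an average sample rate of about 1 kHz with pseudorandom intervals). For a repetitive waveform the RMS depends only on the distribution of sampled amplitudes, not on the order in which they were taken, so a slow but randomly timed sampler still converges to the right value:

```python
import math
import random

random.seed(1)

f_sig = 1e6            # 1 MHz repetitive test signal
t = 0.0
samples = []
for _ in range(20000):
    # Average sample rate ~1 kHz, with pseudorandom intervals so the
    # sample instants do not lock onto a multiple of the signal period.
    t += random.uniform(0.5e-3, 1.5e-3)
    samples.append(math.sin(2 * math.pi * f_sig * t))

rms = math.sqrt(sum(x * x for x in samples) / len(samples))
print(round(rms, 3))   # close to 1/sqrt(2), despite fs << f_sig
```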

JS

Zero999:

--- Quote from: JS on August 05, 2018, 06:45:27 pm ---


--- Quote from: Kalvin on August 05, 2018, 06:21:57 pm ---The RMS can be calculated digitally if you have a fast enough ADC [...]
--- End quote ---

No need for that much ADC and DSP: you can do oversampling and decimation after rectification, so the resolution and frequency response can be much better than what you said. You could sample at 1 kHz and still get a response up into the MHz range if the sampling aperture is short enough (the ADC's analog frequency response is the limit, not the sampling frequency), and after averaging you get resolution below one LSB, which is useful if the ADC linearity is good enough, without the data and computation burden of a bigger ADC. Using a pseudorandom sampling frequency gives a better frequency response, minimizing the comb filter at multiples of the sampling frequency.

JS

--- End quote ---
Yes, you should be able to use a sample frequency lower than the bandwidth of the signal, because the waveform will more than likely be repetitive, and you want an average over a long time period for the RMS calculation anyway.