Author Topic: Why are V(rms) measurements frequency dependant?  (Read 7529 times)


Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #50 on: August 08, 2018, 10:45:36 pm »
Hmm, unless you really want to use a lot of time and money for purely educational purposes, I see no chance that you can reach your goals the way you've described. I would advise you to buy a good old meter (say, HP3456A has a pretty decent accuracy on AC and you can get it for under $200).

Cheers

Alex

I think I have figured out how to calibrate to 0.1%.

I have a 10V ref coming that will be able to reference a 10x probe, a FG, and 3 meters.

This 10V ref has a 2.75mVpp max error over the temp range, but the datasheet suggests it's really only around 1mV at 23 deg C; let's assume 2.75mVpp anyway.

That gives the error of the voltage reference of 0.0275%

The meters are specified at ±0.05% + 30 counts on DC for all ranges.

The FG can output DC over a range up to 10.000 V.  The 10x probes can offset the 10V ref signal by -10V to put the probe's 10V output at the scope's vertical center with the scope at 500mV/div.  This sounds strange, but between the 8 vertical divisions there are 5 ticks per division, and between the ticks the cursor for the voltage measurement moves 5 pixels per tick.   :palm:  Ok stay with me......so each pixel represents 10mV

(500mV/div) x (1 div/5 ticks) x (1 tick/5 pixels) = 10mV / pixel

That gives a resolution down to 10.01 V, which is exactly 0.1%

Here's the catch: I just measured an un-calibrated 10V via the process I described.  The FG had 9.970 V out while the probe's output was centered on the scope's 10V.  2 of 3 meters measured 9.970V from the FG.  Most scopes have a 1% error, so that represents 100mV.  The scope is 30mV high, which falls within the error.

The next step is to check the FG's 60Hz 10Vp AC output to see whether the top of the sine wave sits at exactly the same peak value.

If so, then the 10Vp 60Hz signal from the FG that was calibrated to the 10V ref would be calibrated to the scope with a 30mV offset in amplitude.

This represents the "input" calibration.  This calibration will measure the input signal through the divider network against the output, and can serve as a reference for all other points with respect to the 3 meters.

The output calibration would be about the same, except I would have to get a 0.4V ref with a 0.75% error that I found at Digikey and calibrate everything to it again, now using the 1x probe.  At 0.75% error the 0.4V ref's error is really only about 3mV, and the datasheet shows it to be much more stable at room temp on the initial reading, so more like 1mV, but let's assume 3mV.

The scope would have to be set at 2mV/div, with the probe's output from the 0.4V ref offset on the scope so that it is centered vertically.  Each pixel is now 0.08mV/pixel.  This would effectively give an error reading of 0.02% on a 400mV DC output from the rms-to-dc converter.  The error of the 0.4V ref will be obvious on the scope because the FG and the meters were already calibrated against the 10V ref at ±0.0275%.  I would have to calculate the scope's vertical offset, record it, and adjust it against the output of the rms-to-dc converter.
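The output-side arithmetic above can be sanity-checked in a few lines (a sketch; the division factors are the tick/pixel counts from the earlier posts):

```python
# Per-pixel resolution of the scope cursor at 2 mV/div:
# 5 ticks per division, 5 cursor pixels per tick.
mv_per_div = 2.0
mv_per_pixel = mv_per_div / 5 / 5        # ~0.08 mV per pixel
error_fraction = mv_per_pixel / 400.0    # relative to a 400 mV DC output

print(mv_per_pixel)                      # ~0.08 mV
print(error_fraction * 100)              # ~0.02 %
```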

Since the output of the LTC1968 is between 50mV and 500mV, this calibration would directly allow an even tighter tolerance than the LTC1968 alone can achieve.

This would represent the "output" calibration, and it would also serve to measure the small signal inputs to the op amps.

Let me know what you think about the iffy pixel theory.

I'm not sure if that really could do the trick.

So, I just checked the pricing at Digikey, and the only resistors with a tolerance tighter than 0.1% were through-hole types, and they were pricey.

Total cost of just the resistor network, before tax and shipping: 196 bucks.
 :--
That's because you've used weird values. 9×10x is not a common resistor value, so it will be expensive, especially in 0.1% tolerance or better.

You would have more luck if you used standard E24 or E96 values. If you divide all of the precision resistor values in that circuit by 5, it would give you much more widely available resistor values.

Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.



Thanks, those seem great... :-+

A precision AC divider would normally use fixed precision resistors or a resistor array, so the DC division and low-frequency performance are set by the resistors. With reasonably good resistors no adjustment is needed. For the caps, it is very difficult to know the parasitic capacitance, and it can change (e.g. if the humidity in the board changes). So the capacitive part usually needs variable capacitors for adjustment.  If the circuit is reasonably simple, adjustment can be similar to scope probes: use a good-quality square signal and adjust the edges.
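That square-wave adjustment works because a divider with an RC pair in each arm is frequency-flat exactly when R1·C1 = R2·C2. A quick numerical check (component values here are illustrative only, not from the thread):

```python
import math

def divider_ratio(R1, C1, R2, C2, f):
    """|Vout/Vin| of a divider with R1||C1 on top and R2||C2 on the bottom."""
    w = 2 * math.pi * f
    Z1 = R1 / (1 + 1j * w * R1 * C1)   # R1 in parallel with C1
    Z2 = R2 / (1 + 1j * w * R2 * C2)   # R2 in parallel with C2
    return abs(Z2 / (Z1 + Z2))

# 10:1 divider, compensated: C2 chosen so R1*C1 == R2*C2
R1, R2 = 9e6, 1e6
C1 = 10e-12
C2 = C1 * R1 / R2
print(divider_ratio(R1, C1, R2, C2, 60))       # ~0.1 at 60 Hz
print(divider_ratio(R1, C1, R2, C2, 1e6))      # ~0.1 at 1 MHz too
print(divider_ratio(R1, C1, R2, 2 * C2, 1e6))  # miscompensated: ratio droops
```

With the condition satisfied, the ratio collapses to R2/(R1+R2) at every frequency, which is why one trim against a good square wave fixes the whole band.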

Calibration is really a difficult point for a DIY solution.  The best instrument for higher-BW AC is often the DSO - with some luck, in the 1% range.

A DC reference is of limited use for an AC meter. At best it could be used with a square wave test signal.

I think the LTC1968 is only calibrated for sine waves; they show some error for square waves.
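The waveform dependence comes down to RMS arithmetic: a ±Vp square wave has an RMS equal to Vp, which is also why a DC reference can at best check an AC meter with a square-wave test signal. A quick numerical sketch:

```python
import math

def rms(samples):
    """Root-mean-square of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 100_000
t = [i / N for i in range(N)]                    # one full period
sine   = [math.sin(2 * math.pi * x) for x in t]  # 1 V peak sine
square = [1.0 if x < 0.5 else -1.0 for x in t]   # 1 V peak square

print(rms(sine))    # ~0.7071, i.e. Vp / sqrt(2)
print(rms(square))  # 1.0, i.e. Vp
```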

I think I have solved the cal problem, but I still need to solve the capacitance network problem.  I really wish there were a better, easier way.


The application note from Analog / Linear by Jim Williams, "Instrumentation Circuitry Using RMS-to-DC Converters", covers the LTC1967/LTC1968/LTC1969 RMS-to-DC converters and gives very useful insight into how to apply these devices for high-precision measurements from 10 uV upwards. The application note also provides a good overview of what to expect from these devices, and covers the 1000X preamplifier design.
http://www.analog.com/media/en/technical-documentation/application-notes/an106f.pdf

Thanks I'll read that over.
 

Offline Alex Nikitin

  • Super Contributor
  • ***
  • Posts: 1218
  • Country: gb
  • Femtoampnut and Tapehead.
    • A.N.T. Audio
Re: Why are V(rms) measurements frequency dependant?
« Reply #51 on: August 09, 2018, 07:25:40 am »
Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.

There is also Caddock version of such a divider, the 1776-C6815 has 0.05% ratio tolerance and 5ppm/C ratio tempco (unlike the Vishay part, that is the maximum tempco), it is somewhat cheaper and available from stock at Mouser.

Cheers

Alex
« Last Edit: August 09, 2018, 07:29:24 am by Alex Nikitin »
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: Why are V(rms) measurements frequency dependant?
« Reply #52 on: August 09, 2018, 03:46:01 pm »
Actually exactly that series of values is available as a purpose designed divider network from Vishay, the CNS 471 series. Available with ratio tolerances of either 0.03%, 0.05% or 0.1% and tempco tracking of <2.5ppm/C. Datasheet attached. Not cheap, typical one off price (from memory) in the £30-40 GBP region, but that does get you the whole divider network.

There is also Caddock version of such a divider, the 1776-C6815 has 0.05% ratio tolerance and 5ppm/C ratio tempco (unlike the Vishay part, that is the maximum tempco), it is somewhat cheaper and available from stock at Mouser.

Cheers

Alex

To be clear, Vishay specify a typical tracking tempco of <2.5ppm and Caddock specify a maximum of 5ppm. I suspect that means the actual underlying specs are the same - Vishay specify their resistance material as nichrome, Caddock as Tetrinox™ (which for all we know, and all I could discover, could be a fanciful name for nichrome). The Caddock spec is for the wider industrial temperature range, Vishay's for the commercial temperature range.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #53 on: August 09, 2018, 11:49:35 pm »
I messed up on my calculation: (500/5)/5 does not equal 10, it's 20, which means the initial calibration is only at 0.2%, not 0.1%.

This remains a problem, as does the capacitor network.
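The corrected arithmetic, as a quick sketch:

```python
# Scope cursor resolution at 500 mV/div:
# 5 ticks per division, 5 cursor pixels per tick.
mv_per_div = 500.0
mv_per_pixel = mv_per_div / 5 / 5          # 20 mV per pixel, not 10
percent_of_10v = mv_per_pixel / 10_000 * 100

print(mv_per_pixel)     # 20.0 mV
print(percent_of_10v)   # ~0.2 % of 10 V full scale
```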

 

Offline sourcechargeTopic starter

  • Regular Contributor
  • *
  • Posts: 199
  • Country: us
Re: Why are V(rms) measurements frequency dependant?
« Reply #54 on: August 13, 2018, 12:10:45 am »
I messed up on my calculation: (500/5)/5 does not equal 10, it's 20, which means the initial calibration is only at 0.2%, not 0.1%.

This remains a problem, as does the capacitor network.

So, I've not fully given up on this idea yet, and I have been able to pin down an AC 20Vpp and DC 10V measurement with 20mV accuracy between the scope and the FG.

The FG DC output was compared to the REF102 10V reference IC with its 0.0275% max error over its temp range.  I have been able to measure within 0.1mA at 4kHz: a 20Vpp sine input through a 1.5kohm load, read with the uCurrent.  Taking a conservative visual observation of the peak voltage, the measurement was 6.56mV when it should have been 6.65mV. The noise was just a bit too much to overcome with the scope limited to 128 samples.  This represents 1.37% error, but the measurement was AC, and it was at 4kHz.  That means it's already more accurate than my current meter at 4kHz.  What I would like to figure out is how to increase the bias level of the channels.  If the bias level of the channels could be increased, the V/div could be decreased from 500mV/div to 200mV/div or 100mV/div, so that each pixel represents 8mV or 4mV respectively.
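The per-pixel figures at the lower V/div settings fall straight out of the same tick/pixel arithmetic (a sketch):

```python
def mv_per_pixel(mv_per_div, ticks_per_div=5, pixels_per_tick=5):
    """Cursor resolution in mV per pixel for a given vertical scale."""
    return mv_per_div / (ticks_per_div * pixels_per_tick)

for v_div in (500, 200, 100):
    print(v_div, "mV/div ->", mv_per_pixel(v_div), "mV/pixel")
# 500 -> 20.0, 200 -> 8.0, 100 -> 4.0
```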

I've seen some interesting hacks for different DSOs, is this even possible?
 

