EEVblog Electronics Community Forum
Electronics => Beginners => Topic started by: Chris935 on April 04, 2018, 07:09:12 pm
-
Let's say I have a bench PSU powering a project via relatively thin wires, and the circuit draws a relatively high current. I want to measure the voltage drop across the positive supply wire, and I want to do this with as little error as possible.
The DC V accuracy spec of my hypothetical meter is ±1% + 5 counts on all ranges. The meter is 6000 counts, so on the 600.0 mV range 5 counts is 0.5 mV. If I put one probe on the PSU end of the wire and the other on the circuit end, and I read 550.0 mV, this means the real value lies somewhere between (550.0 - 0.5) × 0.99 = 544.005 mV and (550.0 + 0.5) × 1.01 = 556.005 mV. I'm undoing the counts error before the percentage, as I think that error is introduced later in the process, so it should be undone in the reverse order. That's a potential error of 12 mV.
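To make the arithmetic explicit, here's the same calculation as a quick Python sketch (the range, resolution and spec figures are just the ones from my example above):

```python
# Sketch of the bound calculation above: +/-1% of reading + 5 counts,
# 6000-count meter assumed to be on its 600.0 mV range (0.1 mV per count).

reading_mV = 550.0
counts_err_mV = 5 * 0.1          # 5 counts at 0.1 mV resolution
gain_err = 0.01                  # 1% of reading

# Undo the counts error first, then the percentage, in the reverse of the
# order I'm assuming the meter introduces them.
low  = (reading_mV - counts_err_mV) * (1 - gain_err)   # 549.5 * 0.99 = 544.005 mV
high = (reading_mV + counts_err_mV) * (1 + gain_err)   # 550.5 * 1.01 = 556.005 mV

print(f"real value between {low:.3f} mV and {high:.3f} mV, span {high - low:.3f} mV")
```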
My understanding is that the % error is due to the potential discrepancy between the assumed voltage and the actual voltage of the internal voltage reference, and that the counts error is due to internal noise and the resolution of the ADC.
If the above is correct, could I eliminate the percentage error by instead making a relative measurement, comparing one absolute measurement to another? As I'm using the same internal voltage reference with the same error for both measurements, the difference between them should still be correct. Of course the ADC resolution would still have a potential error, and I'd have two counts errors rather than just one.
Say I measured 5.000 V between the PSU positive and negative terminals, and 4.450 V between PSU negative and the circuit end of the positive wire. If I don't have to consider the percentage error, the real difference lies between (5.000 - 0.005) - (4.450 + 0.005) = 0.540 V and (5.000 + 0.005) - (4.450 - 0.005) = 0.560 V, a potential error of 20 mV.
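Working that through the same way (again just a sketch with my example numbers, and on my assumption that the percentage/reference error drops out of the difference):

```python
# Relative measurement sketch: two readings on the 6.000 V range (1 mV per count),
# counts error only, assuming (as questioned below) that the gain error cancels.

v_psu_mV  = 5000.0        # PSU positive to negative
v_node_mV = 4450.0        # circuit end of the positive wire to PSU negative
counts_err_mV = 5 * 1.0   # 5 counts at 1 mV resolution, applied to each reading

high = (v_psu_mV + counts_err_mV) - (v_node_mV - counts_err_mV)   # 560 mV
low  = (v_psu_mV - counts_err_mV) - (v_node_mV + counts_err_mV)   # 540 mV

print(f"difference between {low:.0f} mV and {high:.0f} mV, span {high - low:.0f} mV")
```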
In this example the error is greater, but would this be a valid thing to do on a meter with particularly bad % error but fairly low counts error? With an unrealistic example of only 10% accuracy, the second approach would of course be preferable. Maybe on a meter with uncertain calibration, as the voltage reference may drift whereas the ADC resolution won't?
Is my thinking correct?
Chris
-
Do some math. Assume you have a meter that is consistently 10% low. Measuring across the wire will obviously be 10% low, but so will measuring each end and subtracting.
Worse, with a real meter, you have no guarantee that the percentage error will be the same on each measurement, so doing two measurements and subtracting can make things significantly worse.
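To put numbers on it (a quick Python sketch using the figures from your example and a hypothetical meter that reads exactly 10% low):

```python
# A meter that consistently reads 10% low scales every reading by 0.9, so the
# difference of two readings is scaled by the same 0.9: subtracting the readings
# does not cancel the gain/reference error.

gain = 0.90                              # meter reads 10% low
true_psu_V, true_node_V = 5.000, 4.450   # true voltages from the example

reading_direct = gain * (true_psu_V - true_node_V)            # probe across the wire
reading_subtracted = gain * true_psu_V - gain * true_node_V   # two readings, subtract

print(f"{reading_direct:.3f} {reading_subtracted:.3f}")  # both 0.495 V, not the true 0.550 V
```

Either way you end up 10% low. Subtracting two readings only cancels an offset-type error that is identical in both readings, not a gain error.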