Author Topic: Suitable resistor temperature coefficient for different DMM accuracy


Offline NaxFMTopic starter

  • Regular Contributor
  • *
  • Posts: 124
  • Country: it
I was wondering if you have some sort of rule of thumb for choosing a proper resistance reference for a given instrument accuracy.
I'm trying to acquire a small set of references for homemade sanity checks and calibration of my multimeters, but since there are many other things I want to buy for my lab, I can't just get the most expensive and stable resistors I can find.
Of course the best possible choice would be an ultra-stable 0.001 ppm resistance reference, but that would be way overkill and extremely expensive for checking a 6.5- or even 7.5-digit multimeter.
What do you think is a sensible choice for a 6.5-digit multimeter like the Keysight 34461A?
Would 10 ppm be sufficient? Or is the absolute minimum a 0.2 ppm bulk metal foil resistor?
I'm trying to find the best compromise between cost and accuracy; what would you buy if you were in my place?

All these resistors would be measured the first time with a calibrated 7.5-digit DMM at my university, then used at home to periodically verify the calibration and drift of my DMMs.
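That periodic drift check boils down to comparing the home reading against the value assigned by the calibrated meter, with an acceptance limit built from both uncertainties. A minimal sketch, where every number (assigned value, uncertainties, reading) is purely illustrative:

```python
# Hypothetical drift check: all values below are made up for illustration.
assigned_ohms = 99.99862        # value assigned by the calibrated 7.5-digit DMM
assigned_unc_ppm = 15.0         # uncertainty of that assigned value (meter + resistor drift)
home_reading_ohms = 99.99901    # what the home DMM reads today
dmm_spec_ppm = 100.0            # home DMM's 1-year spec on this range

deviation_ppm = (home_reading_ohms - assigned_ohms) / assigned_ohms * 1e6
limit_ppm = dmm_spec_ppm + assigned_unc_ppm   # simple worst-case acceptance limit
print(f"deviation {deviation_ppm:+.1f} ppm, in spec: {abs(deviation_ppm) <= limit_ppm}")
```

If the deviation exceeds the limit, either the DMM has drifted out of spec or the reference has moved; a second reference helps tell the two apart.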
 

Offline ywara

  • Contributor
  • Posts: 36
  • Country: us
The rule of thumb for any calibration is a test uncertainty ratio (TUR) of 4:1.

That's probably excessive in your case, so let's go with 2:1.

Example:

The 34401A has a rated 1-year accuracy of 0.014% for a 100 Ohm resistor on the 100 Ohm range, i.e. +/- 14 mOhm. For a TUR of 2:1, your 100 Ohm standard resistor's value should have an uncertainty, including TCR and the accuracy of the ohmmeter used to verify it, of +/- 7 mOhm, or 70 ppm. In this case you might use a 10 ppm resistor and an ohmmeter with 60 ppm or better accuracy to obtain the calibrated value. A 34470A 7.5-digit meter calibrated within the last 24 hours would just barely meet this requirement.
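The arithmetic above can be laid out as a few lines of Python (same numbers as in the example; the 10 ppm resistor allowance and the resulting ohmmeter budget come straight from the post):

```python
# Worked 2:1 TUR budget for the 100 Ohm range, values from the example above.
dut_accuracy_ppm = 0.014 / 100 * 1e6           # 0.014% 1-year spec -> 140 ppm (14 mOhm)
tur = 2.0
standard_budget_ppm = dut_accuracy_ppm / tur   # total budget for the standard: 70 ppm
resistor_ppm = 10.0                            # allowance for resistor tolerance + TCR
ohmmeter_ppm = standard_budget_ppm - resistor_ppm  # worst-case (linear) split: 60 ppm
print(standard_budget_ppm, ohmmeter_ppm)
```

The linear split is the pessimistic choice; treating the two contributions as independent and adding them root-sum-square would leave a little more room for the ohmmeter.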

Obviously there are many other considerations that can (and probably should) be factored into the uncertainties, but that's the basics.
 
The following users thanked this post: splin

Offline Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3356
  • Country: nl
The principle is quite simple.

You start with an error budget. If you want the last digit of a 4-digit (9999-count) meter to be accurate, you have a 100 ppm error budget.
Then you make a list of all error sources: initial tolerances, drift with temperature, the temperature range over which your device must stay accurate, the effect of temperature on op-amp bias/offset currents, etc.

Static errors such as the initial tolerances of voltage references and resistors can be calibrated out.
Others can be reduced, for example by adding a heater to hold your device at a steady (above-ambient) temperature.
Keep in mind that leakage currents in semiconductors increase exponentially with temperature!

Sometimes you can partially compensate for some of these errors.

Then you have to divide the error budget over the parts you are using. Some of the error sources may be intrinsically small and can be neglected; others may totally dominate the total error budget. You have to make sure the sum of all errors stays within your budget.
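The budgeting step above is easy to sanity-check in code. A small sketch, with made-up error sources and values, comparing a pessimistic linear sum against a root-sum-square combination (appropriate only if the sources are independent):

```python
# Illustrative error budget check; source names and ppm values are invented.
budget_ppm = 100.0
errors_ppm = {
    "reference drift": 25.0,
    "divider TCR over 10 C": 20.0,
    "opamp offset drift": 5.0,
    "leakage": 2.0,
}
worst_case = sum(errors_ppm.values())                 # pessimistic linear sum
rss = sum(e**2 for e in errors_ppm.values()) ** 0.5   # if sources are independent
print(f"worst case {worst_case:.1f} ppm, RSS {rss:.1f} ppm, budget {budget_ppm} ppm")
```

If even the worst-case sum fits, you're done; if only the RSS fits, you're relying on the sources being uncorrelated, which deserves a second look.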

There is also a very big difference depending on what you want to make. If you want absolute accuracy, you design differently than if you are mostly interested in relative changes. A good metrology ADC is not the same as a good audio ADC.
 

Offline Conrad Hoffman

  • Super Contributor
  • ***
  • Posts: 1930
  • Country: us
    • The Messy Basement
FWIW, the best our local cal labs will do is +/- 6 ppm on a 1k resistor.
 
