Author Topic: when do you really need 6 1/2 DMM ? , I mean the res and/or the accuracy  (Read 11550 times)


Offline PA4TIM

  • Super Contributor
  • ***
  • Posts: 1116
  • Country: nl
  • instruments are like rabbits, they multiply fast
    • PA4TIMs shelter for orphan measurement stuff




Home-built Vref based on an LM399, connected to my Fluke 720. 100 nV resolution.
(Got 2 x 7.5 digit, 2 x 6.5 digit and 1 x 5.5 digit, and a bunch of calibrators.)
« Last Edit: May 18, 2013, 12:23:48 am by PA4TIM »
www.pa4tim.nl my collection of measurement gear and experiments. Also lots of info about network analysis
www.schneiderelectronicsrepair.nl  repair of test and calibration equipment
https://www.youtube.com/user/pa4tim my youtube channel
 

Offline babysitter

  • Frequent Contributor
  • **
  • Posts: 804
  • Country: de
  • pushing silicon at work
To show off :-), for low-level stuff, physiological signals, long-term experiments, and in troubleshooting.

Sent from mobile!
I'm not a feature, I'm a bug! ARC DG3HDA
 

Offline onlooker

  • Frequent Contributor
  • **
  • Posts: 384
Quote

    6.5 digit multimeter is the bare minimum for digital work. I recommend a 8.5 for analog.

bullshit. for digital work you only need a 1 digit 7 segment ...

I always thought the "6.5..8.5" line is just for kidding around. Now it got a serious reply. Is this a higher level of kidding around?
« Last Edit: May 18, 2013, 12:44:15 am by onlooker »
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7805
  • Country: us
  • adieu
I always thought the "6.5..8.5" line is just for kidding around.

:-+ Yep, I thought it was a joke too.
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline robrenz

  • Super Contributor
  • ***
  • Posts: 3035
  • Country: us
  • Real Machinist, Wannabe EE
The specs I stated in my posts were correct but I was just playing along with what I thought was a joke. ::)

Offline eevblogfan

  • Frequent Contributor
  • **
  • Posts: 569
  • Country: 00
not when the LDS are jumping around  :scared:
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7805
  • Country: us
  • adieu
not when the LDS are jumping around  :scared:

Most Mormons I know don't do much jumping...
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline eevblogfan

  • Frequent Contributor
  • **
  • Posts: 569
  • Country: 00
when the ripple is above a certain amount, there is DC variation; when the NPLC is pretty bad, you don't get enough averaging, resulting in instability of the LSD(s)

while the 3478A performs well in that regard
 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2007
  • Country: au
Speaking of metrology hard-ons
What sort of setup should you have to take small measurements, or to observe small changes in voltage, current and/or resistance?

For example what features should you enable/disable in the meter?

What about test leads? Keep them short? Shielded? Guarding? Avoid certain insulation? (What's that term where movement of the lead creates noise?)

I've seen pictures where BNC to banana adaptors are used in conjunction with coax cable or heavy external braiding.
What about environmental control such as temperature, humidity and perhaps even barometric pressure?

 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2007
  • Country: au
(What's that term where movement of the lead creates noise?)
Just found it. Triboelectric noise
 

alm

  • Guest
The Keithley 'Low level measurements handbook' contains answers to some of these questions, for example which types of insulation are best for various applications. It will also explain why those primitive 6.5 digit DMMs are not the best tool for some low-level measurements ;). I think it's available for free from their website, unless the evil Danaher group made them pull it.
 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2007
  • Country: au
The Keithley 'Low level measurements handbook' contains answers to some of these questions, for example which types of insulation are best for various applications. It will also explain why those primitive 6.5 digit DMMs are not the best tool for some low-level measurements ;). I think it's available for free from their website, unless the evil Danaher group made them pull it.
Thanks alm, I already have a copy, just forgot about it  :palm:
 

Offline quantumvolt

  • Frequent Contributor
  • **
  • Posts: 395
  • Country: th
Given time I guess one can find many situations where you need some resolution / accuracy. I searched the web for 'lab experiment 6 1/2 digit dmm'. Most lab guides from universities were junk - like measuring an AA cell or a 1kOhm resistor.

But one of them was kind of interesting: a thermocouple producing (-0.059, 0.000, 0.059) mV for (-1, 0, +1) degrees Celsius.

http://www.omega.com/temperature/z/pdf/z206.pdf

Scroll down the website - it seems to be serious industrial sensor stuff. If you want to discern 0 degrees from -0.1 and +0.1 you should be able to detect 0.0059 mV (if you can linearly interpolate for the junction). Practically, imo, I would then demand to be almost sure that my meter measured a real 0 to between -0.003 and 0.003, a real 0.0059 to between 0.0031 and 0.0090, etc. The exact limits here are not important (imo), but basically I am asking to measure 0.003 mV on a 20 mV range. Well, 0.003/20 = 0.00015, which is 0.015% of full scale.

The 34401A has a basic DC accuracy of +-0.0035%, which is better than what is needed by a factor of less than 5. If the Agilent is out of calibration it might very well not be able to do this measurement.

My quasi-math might be flawed, but I am sure some forum members will correct or refine what I say. Also, one should maybe discuss resolution and accuracy separately; I'll leave that to people more knowledgeable than me (i.e. less lazy :=\). Here is the link to the lab where they state 'Thermocouples generate very small voltages, which must normally be amplified to be read by a DAQ or oscilloscope, although the 6-1/2 Digit DMM has sufficient range.' Then they go on to put a DC amplifier in front anyway? :scared: Makes me wonder how many digits and what accuracy you need to be reasonably sure that the offset of the DC amplifier is negligible in this case ...
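For anyone wanting to sanity-check those numbers, here is a quick back-of-the-envelope in Python. The figures are the ones from the post (a ~0.059 mV/degC thermocouple read on a 20 mV range, against the 34401A's quoted 0.0035% basic DC accuracy), not spec-sheet values I've verified:

```python
# ~0.059 mV per degC near 0 degC, so discerning 0.1 degC steps means
# resolving ~0.0059 mV; the post asks for roughly half of that window.
seebeck_mv_per_degc = 0.059
step_mv = seebeck_mv_per_degc * 0.1      # ~0.0059 mV per 0.1 degC
needed_mv = step_mv / 2                  # ~0.003 mV decision window
range_mv = 20.0
needed_pct_fs = needed_mv / range_mv * 100
print(f"need {needed_mv:.4f} mV = {needed_pct_fs:.4f} % of full scale")

spec_pct_fs = 0.0035                     # 34401A basic DC accuracy, per the post
print(f"margin over spec: {needed_pct_fs / spec_pct_fs:.1f}x")
```

So the required measurement sits only about 4x above the meter's basic accuracy spec, which is why an out-of-cal unit is marginal.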


Edit: Link: http://www.eng.hmc.edu/NewE80/TemperatureLab.html
« Last Edit: May 19, 2013, 11:30:20 am by quantumvolt »
 

Offline robrenz

  • Super Contributor
  • ***
  • Posts: 3035
  • Country: us
  • Real Machinist, Wannabe EE
Much easier with an RTD if it covers your temp range.  Some temp gauges alone that can measure to 0.001 degC resolution cost more than the 8846A.

Offline saturation

  • Super Contributor
  • ***
  • Posts: 4788
  • Country: us
  • Doveryai, no proveryai
    • NIST
Concur, the Keithley manual is a great summary and a must for everyone. If you work with such signals regularly it pays to get an electrometer vs a high-digit DMM; at some point it's the only way to work.  For e.g. an electrometer has teraohm input impedance; a good DMM like the 3456A has gigaohms, but even then it may load some systems too much.  The Agilent 1252A HH DMM can resolve 1 or 10 uV I believe, and has >1 gigaohm input impedance on the sub-1 V scales, 10+ megaohm otherwise, and beware, 1 megaohm in dual display mode!  Its low-voltage capability is a reason I grabbed it while it was still being made.



Such work is purely analog.  Detecting tiny voltages in organic systems [e.g. nerve potentials] or materials [piezo, thermoelectric, etc.] is all in the realm of low-power signals requiring very high input impedance.  For materials labeled as high in the Keithley graph, a good DMM works very well [e.g. the nerve of an octopus is fairly large versus a single axon in rats].

Likewise, a low variation in an otherwise 'normal' signal, like a NiMH self-discharge voltage, can only be detected if you have a high-precision DMM, that is, looking at uV variations within a 1 V signal: 1.000 000 VDC.  In many modern designs, you can check the quiescent power drain of 'smart off' devices that are never really mechanically powered off, using the LSD of your DMM.
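The loading effect described above is just a voltage divider between the source's output impedance and the meter's input impedance. A rough sketch with made-up but representative numbers (a 1 Mohm source; 10 Mohm, 1 Gohm and 1 Tohm inputs standing in for a basic DMM, a high-Z DMM and an electrometer):

```python
# The meter reads Vsrc * Rin / (Rsrc + Rin): the source impedance and the
# meter's input impedance form a divider, so low Rin understates the voltage.
def reading(v_src, r_src, r_in):
    return v_src * r_in / (r_src + r_in)

v_src, r_src = 1.0, 1e6  # hypothetical 1 V source with 1 Mohm output impedance
for label, r_in in [("10 Mohm DMM", 1e7),
                    ("1 Gohm DMM", 1e9),
                    ("1 Tohm electrometer", 1e12)]:
    err_pct = (v_src - reading(v_src, r_src, r_in)) / v_src * 100
    print(f"{label}: loading error {err_pct:.5f} %")
```

With a 10 Mohm input the reading is off by about 9%; at 1 Gohm it drops to about 0.1%, and the electrometer's error is down in the ppm range.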




The Keithley 'Low level measurements handbook' contains answers to some of these questions, for example which types of insulation are best for various applications. It will also explain why those primitive 6.5 digit DMMs are not the best tool for some low-level measurements ;). I think it's available for free from their website, unless the evil Danaher group made them pull it.
« Last Edit: May 19, 2013, 01:49:23 pm by saturation »
Best Wishes,

 Saturation
 

Offline madshaman

  • Frequent Contributor
  • **
  • Posts: 699
  • Country: ca
  • ego trans insani
« Reply #40 on: May 20, 2013, 03:12:37 pm »




Home-built Vref based on an LM399, connected to my Fluke 720. 100 nV resolution.
(Got 2 x 7.5 digit, 2 x 6.5 digit and 1 x 5.5 digit, and a bunch of calibrators.)

1) Jealous
2) Do you keep one of your 7.5 digit meters as a master reference, and do you / how do you keep all your meters calibrated around your home-built vref?
3) Have you had your vref calibrated to a NIST traceable standard?

Benchtop-wise, I have one 7.5 digit meter, 4x6.5 digit meters, and the only decent handhelds I have are two Brymen 867s.

I would really like to have all my meters dead-on accurate as well as precise, but I don't really know all I need and how to build myself a calibration schedule and procedure.

(All my meters track most values down to the LSD together, so they're close to each other, but I have no idea how close to a reference they are.)

Total side note: is it obscenely difficult to set things up so your own home lab can produce NIST-traceable calibration certificates itself?  (As a non-industry hobbyist, I know little or nothing about this.)
« Last Edit: May 20, 2013, 03:17:19 pm by madshaman »
To be responsible, but never to let fear stop the imagination.
 

Offline saturation

  • Super Contributor
  • ***
  • Posts: 4788
  • Country: us
  • Doveryai, no proveryai
    • NIST
At some point you need to reference it against a standard.  But the big question you have is can you create a standard that has no other reference but itself, like the JJ is for volts in the world?

JJs are compared against frequency standards, and making the conversions is the key.  In this paper, you see 2 separate JJ setups compared against a single microwave source, showing the degree of agreement.

http://iopscience.iop.org/0026-1394/31/1/007

Since you cannot do a similar task with semiconductor references, what you can do is the old-style metrology method.  To make a long story short, you need to track the drift of your reference vs time.  The curve will show cyclic variations with climate, and a long-term variation over its mean.  You can then reference your measurement against itself, to the past, and, knowing that at the zero point you were in calibration against the standard, estimate the variation from the true value.  This creates a level of certainty, and thus your standard is now expressed both as a value and a degree of certainty; what was purely physical now becomes partly statistical.

Referencing the photo: you cannot use a reference so young that it drifts in mean, like a, either up or down in value.  You want drift like b.  Choose a type of reference design that reduces the width of c.  Your stable reference should give a mean value of d, and the certainty is effectively the width of c.

The X axis is time, the Y is volts.



The graphic is a simplification; the real data can be hard to decipher until you amass enough data to see the forest for the trees.  An example of real data, taken from a volt-nuts post.
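The bookkeeping described above can be sketched in a few lines of Python. The readings here are hypothetical, just to show how the mean ("d") and the scatter band ("c") fall out of a logged series:

```python
import statistics

# hypothetical monthly readings of a nominally 10 V reference, in volts,
# taken after the reference has settled (past the "young drift" phase "a")
log = [10.000012, 10.000010, 10.000015, 10.000011, 10.000013,
       10.000009, 10.000014, 10.000012, 10.000010, 10.000013]

mean_v = statistics.mean(log)      # the settled value, "d" in the sketch
scatter_v = statistics.stdev(log)  # width of the band, "c" in the sketch
print(f"value: {mean_v:.7f} V +/- {scatter_v * 1e6:.2f} uV (1 sigma)")
```

A real log would of course also record temperature and humidity alongside each reading, since the cyclic climate variation is part of what you are characterizing.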



..
I would really like to have all my meters dead-on accurate as well as precise, but I don't really know all I need and how to build myself a calibration schedule and procedure.

Total side note: is it obscenely difficult to set things up so your own home lab can procude NIST traceable calibration certificates itself?  (as a non industry hobbiest, I know nothing or little avout this)
« Last Edit: May 21, 2013, 01:13:56 pm by saturation »
Best Wishes,

 Saturation
 

Offline madshaman

  • Frequent Contributor
  • **
  • Posts: 699
  • Country: ca
  • ego trans insani
« Reply #42 on: May 21, 2013, 04:52:15 pm »
Thanks saturation, that gives me a good idea of the general process.

Given that I'm unlikely to buy references with calibration history or will more likely roll my own (money), I take it I start with getting them calibrated often at first and establish and log their long-term drifting behaviour.  If I calibrate my other instruments against these references, I log this action.

Then, at any point I can extrapolate both any instrument's expected drift at the time a measurement was taken and also a certainty metric?

So in essence, keeping a well calibrated lab is mostly documentation/logging (and the ability to extrapolate from the collected data) ?
To be responsible, but never to let fear stop the imagination.
 

Offline saturation

  • Super Contributor
  • ***
  • Posts: 4788
  • Country: us
  • Doveryai, no proveryai
    • NIST
You're welcome, and yes, that's exactly right.  Also, the "c" part of the graph can be kept very narrow by controlling the climate; that's a key factor in variation. Temperature is the easiest, humidity somewhat, and pressure difficult.  Metrology labs keep very detailed, careful records, but today it's all automated.  Most engineers will look at the data and analyze it for quality assurance, as practically most labs will do annual comparisons against national lab standards anyway to meet ISO procedures, which you won't have to do, relying on statistics instead.  Frequent comparisons to a standard reduce uncertainty to the lowest possible value.

It's not difficult work, but one has to be 'anal' about collecting data.

For my lab, I stagger the calibration dates so they don't all expire at the same time, and so in theory the 'calibrated' instruments check the uncalibrated ones like a round-robin; eventually they all 'calibrate' themselves.
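The extrapolation step can be sketched as a simple least-squares fit of the logged values against days since calibration (hypothetical numbers; a real log would span years and carry the calibration events):

```python
# Fit value = m * day + b through the log; m is the drift rate, and the
# fitted line predicts the expected value at any later date.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [0, 30, 60, 90, 120]  # hypothetical log of a 10 V reference
volts = [10.00000, 10.00001, 10.00002, 10.00002, 10.00003]
m, b = fit_line(days, volts)
print(f"drift: {m * 1e6 * 365:.1f} uV/year")
print(f"expected at day 200: {m * 200 + b:.6f} V")
```

The residuals around the fitted line play the role of the certainty metric: a reading far outside that scatter band at a given date is a flag that something (the reference or the meter) has moved.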


Thanks saturation, that gives me a good idea of the general process.

Given that I'm unlikely to buy references with calibration history or will more likely roll my own (money), I take it I start with getting them calibrated often at first and establish and log their long-term drifting behaviour.  If I calibrate my other instruments against these references, I log this action.

Then, at any point I can extrapolate both any instrument's expected drift at the time a measurement was taken and also a certainty metric?

So in essence, keeping a well calibrated lab is mostly documentation/logging (and the ability to extrapolate from the collected data) ?

Best Wishes,

 Saturation
 

