The limiting factor is the resolution and that does give you a bit of a hit with regards to your imported ppm.
I wouldn't call it a "limiting factor" as much as an "invalidating factor".
In your example you calculate the standard deviation using formulas that assume a bell-shaped (Gaussian) noise process, but your lack of resolution relative to the noise in the signal gives you a step-noise process (i.e. "±1 count" noise).
When the resolution is insufficient for the noise, the stddev thus determined is useless, because it depends on the magnitude of the measured artifact.
Assume a perfect digital voltmeter that measures volts with 100 mV resolution: it can show 4.8, 4.9, 5.0, 5.1, 5.2 and so on, and it rounds perfectly.
If you measure a 5.0000… V reference, it will constantly show "5.0"; it will do that all the way from 4.9500…1 V to 5.0499…9 V input.
Congratulations: You have a meter with zero stddev!
If instead you measure a 5.0500… V reference, the meter will show "5.0" half the time and "5.1" the other half, and your stddev is now 0.05 V.
...snip
In practice it is probably not much of a problem, because nobody really cares about the actual stddev of low-resolution instruments.
I reacted to this because the example you used is almost a textbook example of the problem: when your measurements have a ±1 nature (i.e. a digital presentation), changing any single measurement one step up or down should have no effect on the resulting uncertainty, and your example fails that test.
As a general rule of thumb: If you only have one, two or three different sequential numbers in your data, stddev is not what you want.
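The quantisation effect described above is easy to reproduce. Here is a minimal simulation, assuming (purely for illustration) Gaussian noise of a few mV, far smaller than the 100 mV step:

```python
import random

def meter_reading(true_v, noise_sd=0.005, step=0.1):
    """Ideal 100 mV resolution meter: add Gaussian noise, round to the nearest step."""
    return round((true_v + random.gauss(0.0, noise_sd)) / step) * step

def stddev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

random.seed(1)
mid  = [meter_reading(5.000) for _ in range(1000)]  # reference sits mid-step
edge = [meter_reading(5.050) for _ in range(1000)]  # reference sits on a step boundary

print(stddev(mid))   # ~0 V: every reading rounds to "5.0"
print(stddev(edge))  # ~0.05 V: readings flip between "5.0" and "5.1"
```

This is the ±1-count behaviour in action: the computed stddev says more about where the true value happens to sit relative to the quantisation step than about the meter's actual noise.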
This is my problem with calibration services. Calibrating once a year, with the 1-year specs as the criterion, especially without hard numbers on the measured margin to those specs, leaves the UUT possibly outside the 1-year specs for the whole year. The next calibration, a year later, should then catch that the UUT is no longer within the 1-year specs, and thus indicate that the measurements made during the past year were less accurate than assumed. This makes calibration mostly useful for verifying past measurements, not for guaranteeing performance in the coming year. In contrast, naively reading the datasheet would have you believe that a 1-year calibration schedule awards you the 1-year accuracy specs. Anyone care to comment on whether this weakness is well known?
So what is the probability of finding the true value?
mendip_discovery thank you for this thread. I read it with great interest.
Can you give an example of a complete calculation, for example for calibrating the 10 V output of a Fluke 732A (1) using a 3458A and another Fluke 732A (2) for which there is data from a top-level calibration laboratory?
That is: suppose we took 100 measurements that show the difference between the two standards. We know the parameters of the laboratory's Fluke 732A, the temperature in the laboratory, and the 3458A datasheet data.
Can you give an example of how we get the data for the calibrated Fluke 732A (1)?
The Imported Unc is the bit that weights up the reading.
2. But if it did you could keep a similar Unc, just adjust the ppm stuff for the 10.0000472 V (rather than 10 V) that you will calibrate to; it's what we do with standard resistors.
Many thanks for the expanded answer. If you do not mind, I will ask more.

Quote: "The Imported Unc is the bit that weights up the reading."

1. Does the number 536.9 μV describe the measuring laboratory? That is, does the lab's combination of source standard + null meter have a 536.9 μV uncertainty?
Quote: "But if it did you could keep a similar Unc, just adjust the ppm stuff for the 10.0000472 V (rather than 10 V) that you will calibrate to; it's what we do with standard resistors."

2. Where did the number 10.0000472 V come from? I do not see this number in the calculations.
3. Why do we treat noise and drift as having a rectangular distribution?
That is the measured value from the certificate. If you have it adjusted to be within the specification, and it stays within the spec, then it's easier to use 10 V and apply the uncertainty above. But if it drifts to, let us say, 10.0000472 V and can't be adjusted back to 10.0 V, you can still use it: you use it at 10.0000472 V, but you would need to recalculate the ppm values relative to 10.0000472 V. It has 50 μV of adjustment, but over time it might drift so far that you can no longer adjust it.
I'm a little confused. We have a standard owned by the laboratory; let's call it Lab_standard. And there is a user's standard; let's call it Customer_standard.
Lab_standard = 10.00000472 V with uncertainty 536.9 μV (k = 2). This is data from the calibration certificate.
If I use this standard for a year, must I use the uncertainty calculated in the table, equal to 545.86 μV (k = 2)?
And do all the calculations made so far relate only to Lab_standard? Have I understood correctly?
You could also use it to calibrate the customer's source by nulling/zeroing with the lab's multimeter and lab standard, then measuring the customer's unit. That wouldn't be my preferred option, but it can be done, and you would possibly have to add some extra in for the lab multimeter.
Thanks for the answers. All clear. Which calibration method would you choose?
The new 17025 documents are putting more pressure on labs to have a decision rule, and that is a good thing, as there are lots of labs stating that stuff has passed when, given the barn door of an uncertainty they have, it could well be undetermined. It is quite funny looking at mechanical calibration certs where the Unc is wider than the spec, but even funnier is that labs had an Unc for length but never had one for the flatness or parallelism of the gauge; now they have to, and it is causing lots of kerfuffle. The alternative is that labs just don't say whether it conforms to a spec, but I get a feeling that won't slide for much longer, as customers assume compliance of a gauge that has been calibrated.
Around here it seems quite common to skip the specifications and pass/fail statements altogether and only provide the measurement results (and uncertainty) for the customer.
Certainly that way in our lab: Out of the few thousand temperature calibration certificates that I have signed maybe five! had any sort of comparison to specifications.
For anyone interested the ILAC G8 is the current "bible" on decision rules:
https://ilac.org/latest_ilac_news/revised-ilac-g8-published/
Requirements for decision rules vary quite a lot by field: a DMM calibration with 95 % coverage uncertainty is good enough for most uses, but you might want to consider a much larger guard band if you work in a drug lab or calibrate police radar guns.
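One of the simplest decision rules in the ILAC G8 style is a guard band of width w = U: the acceptance limit is pulled in from the tolerance limit by the expanded uncertainty. A sketch of that rule (the numbers are made up for illustration):

```python
def accept(measured_error, tolerance, expanded_unc, guard_factor=1.0):
    """Guard-banded acceptance: pass only if the error lies inside the
    tolerance shrunk by w = guard_factor * U (ILAC G8 style)."""
    acceptance_limit = tolerance - guard_factor * expanded_unc
    return abs(measured_error) <= acceptance_limit

# 50 ppm tolerance, 10 ppm expanded uncertainty (k=2), illustrative errors:
print(accept(35.0, 50.0, 10.0))  # True:  35 ppm is inside the 40 ppm acceptance zone
print(accept(45.0, 50.0, 10.0))  # False: 45 ppm is in the guard band
```

A larger `guard_factor` gives the stricter guard bands mentioned above for high-risk fields; `guard_factor=0` degenerates to "shared risk" simple acceptance.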
So attached is a crude budget for a 10.0000000 V source. I am not sure Load Regulation is a factor to include, but I am assuming it is the worst-case scenario when you hook up a load, a bit like when you load a DC power supply. The Imported Unc is the bit that weights up the reading. Without data to prove that the 10 V source is going to remain in spec, I can only assume it will. But if it did drift, you could keep a similar Unc and just adjust the ppm stuff for the 10.0000472 V (rather than 10 V) that you will calibrate to; it's what we do with standard resistors.
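For anyone following along, the usual mechanics of combining such a budget: divide each component by the divisor for its assumed distribution to get a standard uncertainty, root-sum-square them, then multiply by k = 2 for the expanded uncertainty. The component values below are illustrative placeholders, not the actual figures from the attached budget:

```python
import math

# (value in μV, divisor): a k=2 certificate value is divided by 2,
# a rectangular (±a) component by sqrt(3). Values here are made up.
components = {
    "imported (certificate, k=2)": (536.9, 2.0),
    "drift (rectangular)":         (30.0, math.sqrt(3)),
    "noise (rectangular)":         (5.0,  math.sqrt(3)),
}

u_c = math.sqrt(sum((value / divisor) ** 2 for value, divisor in components.values()))
U = 2.0 * u_c  # expanded uncertainty at k=2 (~95 % coverage)
print(f"u_c = {u_c:.2f} uV, U (k=2) = {U:.2f} uV")
```

Note how the imported uncertainty dominates the root-sum-square: this is what "the bit that weights up the reading" means in practice.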
Is this based on the certificate that I had posted or on your own lab?
One of the things that would concern me is the ad-hoc assembly of your uncertainty budget without an explicit, sanctioned guide to doing so. As a result, I think you've made some errors; please comment. In any case, unless all labs follow similar procedures, they won't get consistent results, and if calibration labs don't give consistent results, then what's the point?
1. There should be no need to analyze the various details of the stated 6 ppm uncertainty from Fluke if that is what you are going with. That is an all-inclusive k=2.58 (99 % CI) number (not a rectangular distribution) and includes such things as 10 °C worth of tempco, line regulation and so on. You shouldn't need to add anything else in there that is not related to your lab or your ability to do repeatable measurements.
2. If you really want to give an uncertainty at cal temp and use only the drift spec, that spec is 3.0 ppm/year, k=2.58 (AFAIK; the CI isn't stated explicitly in the document I have, but it is for later and other models).
3. Line regulation is already 0.05ppm.
4. Load regulation is a predictable characteristic, not an uncertainty. That has nothing to do with calibration, which for a voltage standard should always be done at high impedance.
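To illustrate point 1: if Fluke's 6 ppm figure is indeed stated at k = 2.58 (99 %), restating it at the k = 2 (~95 %) level used on most certificates is just a rescale. A quick worked example, on the assumption that the 2.58 coverage factor applies:

```python
spec_ppm, k_spec, k_report = 6.0, 2.58, 2.0

u_std = spec_ppm / k_spec   # standard uncertainty implied by the spec, ~2.33 ppm
U_95  = u_std * k_report    # restated at k=2, ~4.65 ppm
print(f"u = {u_std:.2f} ppm, U(k=2) = {U_95:.2f} ppm")
```

The same division by the stated coverage factor (rather than by sqrt(3) for a rectangular assumption) is what keeps the imported component from being over- or under-weighted in the budget.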
Now those are nits that have to do with your process, which is what is being discussed here. However, the real issue for me is that neither your lab nor the one in the certificate I posted has any business letting a voltage standard like this in the door--unless you are buying it for your own use.
You can be 'honest' and list your actual Unc numbers all you want, but what is happening is that somebody is getting a calibration certificate that lets them say 'calibration certificate' but is only slightly more useful than a 9 V battery. However, when the customer sees "10.0000472 V with uncertainty 536.9 μV (k = 2)" read straight off an old HP 3458A, they likely only care about the first number and consider the rest to be boilerplate CYA stuff they can ignore. I would like to see a policy, or even a regulation or law, that requires the sticker, not just the calibration certificate, to note any instrument that is calibrated to less than the manufacturer's specifications.
The new 17025 documents are putting more pressure on labs to have a decision rule and that is a good thing as there are lots of labs stating stuff is passed when actually given the barn door of an uncertainty they have it could well be undetermined.
Personally, I am in favour of this push for labs to consider compliance for most customers. In the pressure calibration world (at least in Australia) we have been required to assess instruments for compliance to a specification for many years, where the specification is provided by the customer (based on their usage, risk, etc) or a default specification is adopted based on the manufacturer's 12 month accuracy.
In my region, most instruments are sent for routine calibration to see if the instrument still performs within the original specification, and if not, to ask that it be adjusted to do so. In this case, assessment of the instrument for compliance to a specification is the most important aspect for the general industrial customer. It can be disappointing for us metrologists, who are usually focussed on minimising uncertainty, but this is a reality for many customers who may not understand uncertainty. Pass/fail is easily understood, and metrologists are well placed to assess this. For digital pressure instruments, the arithmetic addition of uncertainty and error/correction at each point must be equal to or less than the specification. We have carried over the same compliance rule in our electrical lab.
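The arithmetic-addition rule described above (uncertainty plus error/correction must not exceed the specification) is easy to state in code; the numbers below are hypothetical:

```python
def complies(error, expanded_unc, spec):
    """Arithmetic-addition compliance rule: |error| + U must not exceed the spec."""
    return abs(error) + expanded_unc <= spec

# Hypothetical: a 0.050 %FS specification, checked at one point
print(complies(error=0.030, expanded_unc=0.015, spec=0.050))  # True:  0.045 within spec
print(complies(error=0.040, expanded_unc=0.015, spec=0.050))  # False: 0.055 exceeds spec
```

Note that this rule is conservative: an instrument can fail purely because the lab's uncertainty is large, which is exactly the pressure on "barn door" uncertainties discussed in this thread.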
Unlike pressure labs, electrical labs in Australia have often historically not considered any compliance/decision rule and simply reported a measurement and uncertainty, which in some cases was disturbingly large and inadequate for the test instrument. Poor labs were able to do this with little recourse, relying on customer ignorance. With the emphasis on decision/compliance rules in 17025, these labs are now obliged to assess whether their uncertainty is adequate during contract negotiation. This is an excellent outcome. Of course, there are still labs that ignore this with the "barn door uncertainty" and give a 100 ppm uncertainty to assess a 50 ppm accuracy specification, and say that it "passes", but I hope these are picked up by our accreditation body over time.
Although it is ultimately the user's responsibility to contract a laboratory with a capability that suits their needs, it can be very hard to navigate or understand a scope of accreditation, and I think we also have a duty of care to make sure our capability is likely to be suitable.
Of course, metrologists seeking calibration of their reference equipment understand uncertainty, errors, corrections, drift, etc. very well, and can easily ask their reference lab to assess the instrument to the lowest possible uncertainties, without adjustment or consideration of compliance to an accuracy specification; this is permitted based on specific customer request within 17025.
I agree. Electrical has just got away with giving a result, and the mechanical side has been saying pass on elements they don't even have an Unc for, like the flatness of the anvils on a vernier. I know this as I am just going through the mech stuff and have to jump through the new hoops. Mechanical has also been saying Pass even when the Unc is greater than the specification. Part of this isn't helped by the fact that the specifications were written before uncertainty was even really thought about.
I remember doing pressure gauges to UKAS, having to take into account height above sea level, atmospheric pressure, gravity, and also how high the gauge sat on the deadweight tester. It was such a nightmare for a gauge you knew was just going to be used to tell them that there is pressure there. The worst pressure I had to do was 4000 bar; that was a day for wearing brown trousers.
I see no reason why customers and hobbyists shouldn't also look at these things. It's clear that many here have the mentality that because the meter is calibrated it's spot on and everything else is wrong, and even if they accept some error, it looks like they quote the spec and don't take into account the imported errors from the kit used to calibrate it.
[...] having to take into account Height Above Sea Level, Atmospheric Pressure, Gravity [...]