EEVblog Electronics Community Forum
Products => Test Equipment => Topic started by: openloop on August 02, 2020, 06:35:10 am
-
If you ever wondered how a multimeter's microcontroller sees the outside world, wonder no more.
I've tapped into the "ADC -> Micro" communication line of a K2001 and captured about 4 hours' worth of readings: 20V range, 10 NPLC, 60 Hz land, measuring a 10V reference.
Behold:
[attach=1][attachimg=1]
Simplified workflow: knowing the exact value of the LM399 reference and knowing where "zero" is, one can derive the slope and offset of the ADC. Knowing the value of the 10.5V trace, one can derive the input divider's ratio; thus, from the payload reading and the ratio, one can calculate the voltage on the input terminals.
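That workflow can be sketched numerically. A minimal sketch, assuming the raw counts (all invented) and treating the "10.5 V" ratio and the 20 V attenuation as the same ratio, which is a simplification:

```python
# Sketch of the calibration chain described above.
# All raw counts and the reference value are invented illustrations.
V_REF = 6.9999          # known (calibrated) LM399 voltage, example value

raw_zero = 15           # counts measured at analog ground ("zero")
raw_ref = 70015         # counts measured on the LM399 reference
raw_10v5 = 106500       # counts on the "10.5 V" trace
raw_payload = -101400   # counts on the input terminals (sign is inverted)

# ADC transfer function: volts = (counts - offset) * slope
slope = V_REF / (raw_ref - raw_zero)   # volts per count
offset = raw_zero                      # counts at 0 V

def volts(raw):
    return (raw - offset) * slope

# The "10.5 V" reading pins down the actual (drifting) divider ratio:
ratio = volts(raw_10v5) / V_REF

# Terminal voltage: undo the sign inversion and the input division.
v_in = -volts(raw_payload) * ratio
```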
Additional comments:
- Warmup lasts much longer than 1 hour. I'm not sure I ever reached that mythical, blissful state.
- Data comes in the form of two 16-bit signed integers (big-endian): one is the final slope count, the other the dual slope difference;
- The dual slope difference goes through integer overflow (at 10 NPLC). No doubt a design screw-up; :palm:
- Voltages have inverted sign, for some reason.
- At least in the A08 firmware, auto-cal is incomplete: apparently the final-slope to dual-slope ratio is hardcoded (to 8000, I think). That's just ugly. :-- One needs a chopper-like setup to catch the error (if using data coming out of the micro) >:( Raw data can be processed around this nonsense.
- My setup: an old Arduino Uno connected to opto-isolators on the digital board; the Arduino feeds real-time data to my (floating) laptop.
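As a small illustration of the format in the second bullet, here is how one might unpack a frame of two big-endian 16-bit signed words in Python. The 4-byte frame layout and the word order are my assumptions, not confirmed details:

```python
import struct

def parse_adc_frame(frame: bytes):
    """Split a 4-byte frame into (final_slope, dual_slope_diff).

    Both values are 16-bit signed big-endian integers, per the
    observations above. The byte order within the frame (final
    slope word first) is an assumption for this sketch.
    """
    final_slope, dual_slope_diff = struct.unpack(">hh", frame)
    return final_slope, dual_slope_diff

# Example: 0x1234 and a negative dual-slope value
fs, ds = parse_adc_frame(bytes([0x12, 0x34, 0xFF, 0x9C]))
# fs == 4660, ds == -100
```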
-
|O
-
....
- Data comes in the form of two 16-bit signed integers (big-endian): one is the final slope count, the other the dual slope difference;
- The dual slope difference goes through integer overflow (at 10 NPLC). No doubt a design screw-up; :palm:
- Voltages have inverted sign, for some reason.
- At least in the A08 firmware, auto-cal is incomplete: apparently the final-slope to dual-slope ratio is hardcoded (to 8000, I think). That's just ugly. :-- One needs a chopper-like setup to catch the error (if using data coming out of the micro) >:( Raw data can be processed around this nonsense.
If there is an overflow in the signed integer representation, this indicates that the number may be unsigned, possibly with an offset. That is the natural way to get the result during the integration / run-up ("dual slope") phase.
The coarse part of the final counts and the run-up part are naturally in a fixed integer ratio, as they use the same clock and reference switches. Only the fine part of the final rundown has 2 separate reference sources and thus separate scale factors for the fine slopes. This may be either a fixed value defined by precision resistors, or a measured value that is only close to a nominal number. I'm not sure how this is handled in the K2001. They use 0.1 % grade resistors, so it may be a hardware-defined ratio. Even then there may still be some numerical correction - though possibly a small one.
Judging from the graph, the K2001, a little like the old 19x series meters, measures the ADC gain frequently (for every reading?). So there is more than just zero and the input signal: there is also a reference for the scale factor. It is odd to have both 6.9 V and 10.5 V values. Are there really so many reference readings, or are some readings more frequent than others?
The sequence could give a hint on why the newer Keithley meters (K2000 ... DMM6500, DMM7510) have the ugly extra noise over the 1-30 seconds range.
-
If there is an overflow in the signed integer representation, this indicates that the number may be unsigned, possibly with an offset. That is the natural way to get the result during the integration / run-up ("dual slope") phase.
Signed or not does not matter - if two voltages map to the same value (e.g. 8 V and -6 V) then there's overflow. They use a software hack to distinguish the two. It is possible to fool it. [edit: no, it is not. Read below.]
They give 0.1 % grade resistors, so it may be a hardware defined ratio.
It is. The ratio of (R835 + R846) to R844, i.e. 20K to 4M. It drifts. So from time to time there's a step in the output when the final slope counter (blue) jumps (if the ratio is not exact, and it never is).
Judging from the graph, the K2001, a little like the old 19x series meters, measures the ADC gain frequently (for every reading?). So there is more than just zero and the input signal: there is also a reference for the scale factor. It is odd to have both 6.9 V and 10.5 V values. Are there really so many reference readings, or are some readings more frequent than others?
The sequence is this:
7 auto-zero (auto-cal) measurements, interrupted after every two to do another "payload" measurement.
The 7 are: 10.5V, 8.6V, 6.9V, 0V, 0V, 0V. In an endless loop.
The zeros are measured in different configurations: one is simple analog ground, another is effectively the voltage offset of the ADC buffer op-amp. Not sure where they use it, but respect.
10.5V is needed to determine the exact ratio of the input divider. The range is +-20V, but the ADC is (about) +-11V, so the user's input is divided by two. The exact ratio is determined by the 10.5V measurement.
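The loop above can be sketched as a generator. Note the post lists six values for the seven slots; I pad with a seventh "0V" purely so the sketch runs - that step is my guess:

```python
import itertools

# Sketch of the described autocal/payload interleaving: the autocal
# steps run in an endless loop, with a "payload" reading inserted
# after every two of them. The seventh step ("0V pad") is a guess -
# the post only lists six values for the seven slots.
ACAL_STEPS = ["10.5V", "8.6V", "6.9V", "0V", "0V", "0V", "0V pad"]

def measurement_sequence():
    while True:
        for i, step in enumerate(ACAL_STEPS):
            yield step
            if i % 2 == 1:          # after every second autocal step...
                yield "payload"     # ...a payload measurement slips in

first = list(itertools.islice(measurement_sequence(), 10))
# e.g. ['10.5V', '8.6V', 'payload', '6.9V', '0V', 'payload', ...]
```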
-
The sequence could give a hint on why the newer Keithley meters (K2000 ... DMM6500, DMM7510) have the ugly extra noise over the 1-30 seconds range.
In the K2001 I was able to trace that kind of noise to the way they process the data: they average the auto-cal readings for a minute (or half a minute?), then recalculate all the coefficients, creating the characteristic saw pattern.
No fancy Kalman filters...
-
With 4 M and 20 K, the slope ratio for the smallest slope is 1/200. Ideally the finest slope should only be used for 1 step of the coarse slope. There is likely some added constant part, but this would be there for the zero reading too, so it would be subtracted anyway.
So it would be only a small contribution, and an accuracy better than 0.5 % for the slope ratio would be sufficient to keep the error below 1 LSB. It is a little borderline, but the resistors may be just good enough.
One would normally not measure the slope ratio very often, as the stability of the resistors should be good enough even over longer times. So one may not see that measurement when sniffing the communication.
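The 0.5 % figure above can be checked with a two-line calculation. The worst-case model (the fine slope spanning a full coarse step) is my simplification:

```python
# Worked check of the claim above: with a 4 M / 20 K slope ratio of
# 1/200, the fine slope ideally covers only one coarse step, i.e. up
# to ~200 fine counts. A relative error eps in the assumed ratio then
# shifts the result by up to eps * 200 fine LSB.
slope_ratio = 20e3 / 4e6           # = 1/200
max_fine_counts = 1 / slope_ratio  # ~200 fine counts per coarse step

eps = 0.005                        # 0.5 % ratio error (worst-case stack-up)
worst_case_error_lsb = eps * max_fine_counts
# ~1 fine LSB: exactly the borderline described above
```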
Getting the same result for two inputs is really odd, so a real overflow :palm:. There may be a hidden bit in the other result word.
-
So it would be only a small contribution, and an accuracy better than 0.5 % for the slope ratio would be sufficient to keep the error below 1 LSB. It is a little borderline, but the resistors may be just good enough.
It's not just the resistors. The resistors set the charge/discharge currents, but the problem is that they connect to different voltages (naturally) - one positive, one negative - and those voltages drift too. :palm:
Thus I wouldn't bet that this keeps things in spec... Plus, seeing a sharp step in the readings is very confusing to the user.
One would normally not measure the slope ratio very often, as the stability of the resistors should be good enough even over longer times. So one may not see that measurement when sniffing the communication.
It's not good enough (one can see, for example, that the ceiling of the blues drifts too).
And this is all the communication there is, after roughly 2 seconds of setup.
I did an experiment - I changed the ratio a little bit, and those steps became horrible. Thus - they do not auto-calibrate it at all.
Getting the same result for two inputs is really odd, so a real overflow :palm:. There may be a hidden bit in the other result word.
There is no hidden bit [edit: kinda is, see below]. In A08 at least. I was able to confuse the instrument by switching the input very fast (in the overflowed range). [edit: now I have doubts about what happened]
Although this result was not reproduced on newer hardware/firmware versions (I asked people to do the experiment a few days ago).
-
They may very well trust the resistors enough and not do the numerical correction for a non-ideal slope ratio. Anyway, it looks a little odd that there are 2 smaller slopes, but both from the negative side. So it looks a little like they use either the 4 M or the 920 K resistor, but likely not both steps. It may be something like the 4 M only for slower conversions.
I know there is also the reference voltage that can drift. So the 0.1% resistors may not absolutely guarantee less than 1 LSB of DNL error. They may sort out the rare cases where the resistors all land on the wrong side. Using a measured slope ratio has a strange effect on the result: the steps are no longer all equal, and the result is no longer a simple integer with all values possible. This could be a little confusing and could also add errors if the quantization error is large compared to the white noise background.
For the result there is the question whether the small slope part is already scaled or still in raw steps (1/200 or 1/46). If still raw, the ground-side part could still apply the corrections if it has measured scale factors. The extra measurement may happen only during calibration or so. The stability of the resistors and voltages may be good enough for years.
That the limiting values for the blue slopes are drifting is likely not a problem: it happens the same way to the zero reading, and one will always use differences, not a single reading. This only reflects some drift in the point where (or how fast) the fast-slope comparator triggers, and thereby the constant part from the small slope.
-
It may be something like the 4 M only for slower conversions.
Yes, this is correct. The 4M is used in high-precision, long integrations.
For the result there is the question whether the small slope part is already scaled or still in raw steps (1/200 or 1/46). If still raw, the ground-side part could still apply the corrections if it has measured scale factors. The extra measurement may happen only during calibration or so. The stability of the resistors and voltages may be good enough for years.
Those are clocks (you can see the actual values in the graph), although for the dual slope it's divided by something. So - raw. There are no brains on the ADC board. That makes it ideal for modding. >:D
About setting it during calibration and the "good for years" stuff: that was my first thought too, but after combing through the actual calibration constants, and doing a (non-factory) calibration - no luck.
And then I realized that the whole thing is referenced to a fairly regular zener (not an LM) and there are just waay too many resistors involved (with, like, 10ppm or worse) in the references and elsewhere, plus the fact that I see steps when blue jumps (and red changes by 1). I did more experiments with better references and good dividing networks, and I came to the conclusion that the ratio they use is exactly 8000 counts of blue = 1 count of red. That settles it - it is hardcoded.
-
The regular zener as an additional local reference for the ADC is indeed odd, as this can add some noise and needs the scale factor measured (e.g. via the 10.5 V ref.) even for fast measurements. Chances are the zener is reasonably low noise, possibly better than the LM399. In theory the scale factor adjustment could be slow, effectively adding digital filtering to the LM399.
So if the extra zener is really low noise, it can even help to reduce at least the shorter-time-scale noise.
However the extra ref. does not affect the slope ratio part. Here it is R843, R845, R847 and the offsets of U811 and U813 (at the integrator) for the reference voltages. The other resistors are the 4 M and 19K+1K at the normal reference. So it is mainly 2 resistor ratios (with some resistors being the sum of 2, which helps more than it hurts). In addition there are leakage + bias currents that can have an effect. However, even the slow slope is still some 10 V / 4 M = 2.5 µA, so it would take a lot of bias current to have a significant effect. Still, U813 can have some effect from its bias.
The ratio of 8000 for the "red" and "blue" steps is odd, as the slope ratio would be 4M/20K = 200.
So some of the blue part should still be due to the fast slope. So there would have to be some math (like adding in steps of 5). 16 + 13 bits would be a rather high resolution, so chances are the blue curve does not use all the bits, more like only steps in multiples of 8 or so.
It is still strange to have only a little more than 16 bits from the fast slope: 200 ms (for 10 PLC) divided by some 100000 counts would be 2 µs, which is a rather coarse time step; so the slow slope would take quite some time and also have a larger effect than needed.
-
A similar approach is also used in the K2182A, where the LM399 ref is used only for the autocal network, and the actual ADC (K2000 type) works off a 6.2V zener.
-
My guess is they used a local reference because it makes the ADC completely independent.
The problem with it is that it adds more to the ADC's drift, and, as I see it, drift is the instrument's main source of noise (at least in the 20V range).
Mostly because of the way they handle it (averaging for a while, then dropping new constants in).
The regular zener as an additional local reference for the ADC is indeed odd, as this can add some noise and needs the scale factor measured (e.g. via the 10.5 V ref.) even for fast measurements.
They need to do the scale measurement anyway, because the compound ratio of R835+R846 to R840 to R841 is unknown.
Additionally, the 6.9V measurement is the scale one (the LM399's voltage is calibrated in). 10.5V is to determine the R327 to R334 ratio, i.e. the input divider.
However the extra ref. does not affect the slope ratio part.
OK, that's true. By "zener reference" I meant the entire +-10V local reference voltages.
But you can already see that the ratio of the final slope to the "zero crossing" slope depends on two dividers that are not ratio-locked, which in turn are made of resistors that are not ratio-locked.
In an instrument that heats up to 40+ degrees under the shield.
It (the ratio) drifts. And even when it finally reaches a stable point, it will never be exactly 8000; that point is also room-temperature dependent...
Back to my original point: not auto-calibrating this ratio makes the K2001 not a fully auto-calibrated instrument, and it shows up in the readings. What, they didn't want it to compete with the 2002? ::)
The ratio of 8000 for the "red" and "blue" steps is odd, as the slope ratio would be 4M/20K = 200.
You're right. :-+ I just crunched some numbers. Here's what I've got.
The granularity of the "fast" slope is 10 clocks (at 7.68MHz).
That gets stretched by 200 times, i.e. it translates into 2000 clocks of the "slow" final slope.
But I get 8000...
Simply stuffing 2 bits on the right (i.e. multiplying by 4) would result in even numbers, but I've looked into the data and that's not the case.
Looking closer, I found this: all measurements except the payload indeed end with binary 00 (but they're all positive or close to zero).
All payload readings (for the final slope count) end with binary 01 for positive (on the panel; negative at the ADC) voltages (left/first half of the graph) and 00 for negative (right half).
Thus they indeed shifted the count left by two bits and used the two lower bits for flags.
This explains why my overflow experiment didn't produce any results: they hide the polarity flag in there... Damn...
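Putting the two observations together (2000 clocks expected vs. 8000 observed, plus the trailing-bit patterns), a decoder sketch; the meaning of bit 1 is unknown, and the names are mine:

```python
# Sketch of the flag decoding inferred above: the final-slope word is
# the count shifted left by two bits, with the low bits used as flags
# (bit 0 appears to be a polarity flag on payload readings). The
# meaning of bit 1, and the names here, are assumptions.
def decode_final_slope(word: int):
    count = word >> 2           # actual final-slope count
    polarity_flag = word & 0x1  # set for (panel-)positive payload readings
    other_flag = (word >> 1) & 0x1
    return count, polarity_flag, other_flag

# A payload reading with count 1234 and the polarity flag set would
# appear on the wire as 1234*4 + 1 = 4937:
count, pol, other = decode_final_slope(4937)
# count == 1234, pol == 1, other == 0
```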
-
By design, the K2001 is relatively old - more like the K19x series in some aspects, older than the K2000. The K2002 looks more modern, and in many aspects like an improved version of the 2001, with the ADC a slightly improved version of the 2000's, and the input stage more from the 2001.
Doing the scale calibration in the background, or even for every reading (like the K19x series does), is the old style. This can compensate for quite some drift (and even excess noise) of the resistors setting the ADC gain. With good resistors setting the ADC gain, one can get away with much less frequent adjustment, like a separate ACAL call. The extra gain measurement step takes up time and will add some noise to the reference on long time scales (e.g. > 10s of seconds); on short time scales one can use a second, lower-noise reference and this way reduce the shorter-time-scale noise. The 1N4579 zener is not just a simple one, but more like a good-grade reference zener, a little like the more common 1N829. Chances are high it is considerably lower noise than the LM399. The added noise from the additional scale factor readings would be more like white noise with a low bandwidth. At a long enough time scale the added noise would be less than the 1/f noise of the LM399 - so it can be worth it. The real-time gain check to compensate resistor drift / TC is then more like an extra benefit.
There is however a more hidden downside: from dielectric absorption there is some delayed effect at the ADC. So it makes a small difference whether the reading before (e.g. the payload measurement) was 10.5 V or 0 V. This can result in some extra LF background (e.g. every 14 readings or so) and also some INL error from the input reading affecting the reference scaling.
The separate reference could also help in avoiding a possible ground problem, as the extra ref. is on the ADC board.
At least at today's prices I would have preferred a 2nd LM399 to reduce the noise instead of the extra 1N4579. In the old days the zener may have been cheaper, though.
The 10.5 V uses the same resistors (R334+R327) that normally set the attenuation, to create some gain for the 7 V. So they need both readings, and especially the difference. So there can be quite some extra noise from including the R334/R327 ratio from the background readings.
Chances are the 2 V range would directly use the 1.75 V reference. So the 2 V range could be quite stable!
-
The zener on the ADC board is a mistake. One can rationalize its presence till kingdom come, but the simple mathematical truth is that if you add an additional independent noise source, you add its variance to whatever you already have - thus invariably increasing your noise. Additionally, that zener has a really bad temp co (compared to the LM399), inducing even more drift.
The Keithley guys realized the futility, and in later designs they used a nice onboard reference. The ground issue was solved by a brilliant move - bringing the outside reference in differentially (which also sidesteps the thermal EMF thing). This trick is used in the 2002 and DMM7510 (maybe others too).
There is however a more hidden downside: from dielectric absorption there is some delayed effect at the ADC.
You found it too! :-DD
Although I wouldn't necessarily subscribe to it being caused by "dielectric absorption", there is indeed a "memory" - or, as I like to call it, "non-Markovian shit" - effect.
Surprisingly, the problem is not on the ADC board. I was able to trace it to the U322 buffer acting a little bit like a flip-flop. With little hesitation I replaced it with a modern chopper, assuming (correctly, it seems) that any additional HF noise would be suppressed by the ADC's integrator acting as a low-pass filter. The chopper made this "memory" phenomenon pretty much disappear.
The 10.5 V uses the same resistors (R334+R327) that normally set the attenuation, to create some gain for the 7 V.
That is incorrect. The gain for the 7V reference is explicitly set to one; (R334+R327) are excluded from the feedback by the open U319 "//2" switch.
The (R334+R327) divider is used exclusively to divide the 20V range input by two. And because its ratio drifts (and is not part of the calibration) - hence the 10.5V autocal.
Chances are the 2 V range would directly use the 1.75 V reference. So the 2 V range could be quite stable!
Maybe. The problem with the 1.75V (a.k.a. 6.9/4) reference is that it comes from a fairly noisy PWM. Thus, while very accurate, it is not very precise.
And considering how badly the firmware handles drift, I wouldn't be surprised if it has problems with a noisy reference too.
-
The reference reading with the 10.5 V has some nice features. If I did not make a mistake in my short calculation, the relevant quantity is the 10.5 V reading minus the 6.9 V reading. So they may not even need a zero reading if they get the ground right (same effective ground for the 6.9 V as for the 10.5 V, which should be possible as the 10.5 V ground is free to choose). Given the sensitivity, they could just as well have read the 6.9 V reference directly in the 20 V range, getting around switching the gain.
Getting rid of the memory by changing U322 is a surprise to me. The main reason for memory inside U322 would be a thermal effect, tricky to estimate. An AZ OP may still make sense here as it can offer good linearity - my best guess would be something like OPA189.
A memory of the last conversion can be quite annoying - I also see this in my ADC implementation, and the main effect I have in mind is DA. If linear, one could compensate numerically, but it is of course better to not have it in the first place.
The K2002 and DMM7510 use a very low noise main reference to start with (LTZ and LTFLU), so there is no need for extra noise filtering. The K2002 uses a separate current source for the ADC and this way can tolerate some common-mode shift between the reference and the input amplifier, possibly all the way to the low side in 4-wire ohms mode. I don't know how the DMM7510 does it, but from the pictures it looks different from the K2002.
If the extra reference is really low noise it can help, at least in theory, by a kind of digital filtering of the LM399: the noise of the LM399 could be digitally filtered for something like the > 0.01 Hz range. In this range it would be replaced by the 2nd reference's noise plus some additional white noise from the gain measurement. If the crossover frequency is low enough (long averaging), the extra white noise would be small. I somewhat doubt they added the 2nd ref for noise filtering; it is more likely because of gain drift (the possibility to use cheaper resistors) and tradition. Possible noise filtering for the reference is more of an additional benefit they may not even use. With low-grade resistors the averaging is limited by excess noise of the resistors.
For noise filtering it would help if the ADC itself were low noise. In the ADC of the K2001 there are a few extra noise sources. A resistor at the input (59 K) much larger than the other resistors (20 K + 40 K) results in a noise gain for the integrator OP (U813) of nearly 5.5. There is also quite some extra noise from these resistors. The attenuation by a factor of 2 for the 20 V range also adds to the noise. The LT1007 with some 20 K of source impedance is not really low noise anymore at some 2 Hz.
AFAIK the K2001 is also relatively noisy when reading a short. The effect of the extra reference would mainly show when a 10 V or similar voltage is read. In this case one could have an interesting noise spectrum, depending on how good the reference filtering (averaging of the gain readings) is.
So I would not blame the relatively high noise on the extra reference.
The more modern K2010 and K2182 also use the extra reference and can get reasonably low noise.
The 1.75 V is not from a PWM divider, but a charge pump / switched capacitor type. This can be reasonably low noise - the trouble is more like producing EMI problems via RF crap on the supply, which may or may not affect some circuit parts.
-
The ground issue was solved by a brilliant move - bringing the outside reference in differentially (which also sidesteps the thermal EMF thing). This trick is used in the 2002 and DMM7510 (maybe others too).
What's the trick?
-
Thanks openloop and Kleinstein for your immense input.
I have restored a K2001 (see here (https://www.eevblog.com/forum/repair/this-keithley-2001-need-to-be-saved!/))
and I want to learn from it.
I still have to check some basics, since I upgraded (probably with no real improvement) all the analog switches (i.e. DG411/DG404/DG211) to Maxim devices. Of course I lost the calibration, but on such old devices I did not lose too much.
Anyway, interesting reading:
https://www.eevblog.com/forum/testgear/do-i-have-exceptional-keithley-2000-or-poor-keithley-2001/ (https://www.eevblog.com/forum/testgear/do-i-have-exceptional-keithley-2000-or-poor-keithley-2001/)
and I still did not forget this (https://www.eevblog.com/forum/testgear/do-i-have-exceptional-keithley-2000-or-poor-keithley-2001/msg1063600/#msg1063600) Kleinstein's idea.
But I need to set up a proper measurement system with some data logging, and other projects kicked in... still work in progress...
PS: Metrology thread already?
-
The reference reading with the 10.5 V has some nice features.
That's not really 10.5V. It is "10.5V" :D
It's 6.9V * (R327+R334)/R327, and you don't know what the value of (R327+R334)/R327 is.
So you have 3 unknowns: gain, offset, and (R327+R334)/R327.
Thus you need at least 3 measurements. Hence the "zero".
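Made concrete, the three measurements determine the three unknowns directly (the raw counts below are invented; the system is triangular, so no matrix solve is needed):

```python
# Solving the 3 unknowns (gain, offset, divider ratio) from the 3
# measurements named above. Raw counts are invented illustrations.
V_REF = 6.9999                 # calibrated LM399 value (example)

m_zero = 21.0                  # raw counts at analog ground
m_ref = 70021.0                # raw counts at the 6.9 V reference
m_10v5 = 106421.0              # raw counts at the "10.5 V" point

offset = m_zero                                  # counts at 0 V
gain = (m_ref - m_zero) / V_REF                  # counts per volt
ratio = (m_10v5 - m_zero) / (m_ref - m_zero)     # (R327+R334)/R327

# The zero reading is what decouples the other two unknowns:
# without it, gain and ratio could not be separated.
```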
Given the sensitivity, they could just as well have read the 6.9 V reference directly in the 20 V range, getting around switching the gain.
The whole 20V range signal path looks like a last-minute hack to me. The voltage does not go through the main "dual JFET" path; instead it's picked up off the guard rail, of all places :wtf:, and it goes downhill from there. I suspect the instrument was designed to have a 10V range, but somebody (marketing?) decided they wanted "20V!". Although, I must admit, an unbroken span of +-20V is useful.
my best guess would be something like OPA189.
Indeed it is. ;D
When I was replacing the opamp, I also shorted out L100. I reckon it's not needed anymore - more of a liability.
So I would not blame the relatively high noise on the extra reference.
As I see it, most of the noise out of the K2001 comes from the way the firmware handles the drift. Looking inside the instrument - everything and their grandma is drifting (including the extra reference). I kind of gave up on looking for other noise sources simply because they are of lower order. It looks like the designers forsook all drift mitigation attempts, thinking that the clever architecture would allow them to mitigate it.
The 1.75 V is not from a PWM divider, but a charge pump / switched capacitor type.
While the chip used is indeed designed to switch capacitors, in this case it is used as a simple switch. It connects the output (pins 2, 3) either to the 6.9V reference or to analog ground, and the duty cycle of the REFCLK signal on the Cosc pin is 25%. Hence - PWM! :P
-
The ground issue was solved by a brilliant move - bringing the outside reference in differentially (which also sidesteps the thermal EMF thing). This trick is used in the 2002 and DMM7510 (maybe others too).
What's the trick?
Differential?
Instead of giving you one signal and saying "use the voltage between it and your ground (whatever that may be) as a measuring stick", it gives you two wires and says "use the voltage between these two as a measuring stick".
Thus you no longer need to worry about your ground being at exactly the same potential.
Plus, voltage references usually live in hothouses, so bringing the reference out on two wires made of the same material avoids the thermocouple effect.
-
The 1.75 V ref. indeed is PWM :palm:. Still, if the PWM clock is in sync with the ADC, it is a relatively low noise signal. However, the LTC1050 adds a little to the noise. Directly loading the 6.9 V ref. with the PWM switches is also not that great a practice. However, chances are the ADC noise is still higher.
For the 6.9 V part there is no filtering of the LM399 noise, at least over the time of the AZ cycle. So the gain adjustment includes quite some of the higher-frequency noise of the LM399. In this respect the 2 V signal, with PWM and filtering, may actually help, though they still get aliasing of the ref frequency from the input side.
The 2001 is not made for low noise; it is more about high accuracy and good stability. It is an old design, too. The noise may have been acceptable at the time.
The 20 V range is a nice option, though it would be nice to also have direct access to the 10 V range of the ADC. The HW is there; the SW just does not seem to support it - except in some Ohms modes, I would guess.
The 20 V full-scale range is indeed odd; I'm not sure if there is really a need for it. At least the Datron 1281 also has a 20 V range, with the downsides of the extra divider.
There would even be an alternative (and in many aspects better) way to get a 20 V range, even with the ADC only supporting a +-10 V range, without relying on a stable divider. However, this would require a slightly different input amplifier - though not necessarily more complicated; in some aspects even simpler.
For the inverting gain of 0.5 one does not need to know the individual resistor values, just the gain factor.
The "10.5 V" reading gives this gain plus 1, times the 6.9 V, if one ignores the OP offsets and switch resistance. One would need the zero reading(s) because of the LT1007 offset, though I'm not sure why so many.
Using only the BSCOM signal from the AD548 is odd: the main discrete JFET amplifier should be lower noise, though it could add more switching spikes at the input and more delay for the extra precharge phase.
-
After reading all of this and understanding only 50% of it, I think changing the diode reference on the ADC board for a better one will not improve anything. This is what my spider senses are telling me.
-
I agree that changing the zener on the ADC board would not help. I expect it to be low noise already.
The problem may be more with the software: the filtering / averaging of all the readings of the AZ cycle is likely not the best. Many of the Keithley meters show avoidable noise on the 10-second time scale that is likely due to poor averaging. Chances are the 2001 is affected too, though I don't know for sure. The other SW issue is the 10 V range.
With the AD548 the noise is quite high at the 1-2 Hz that is relevant for 10 PLC AZ. With 1 PLC one can start to see quantization noise.
-
I still can't believe the prices of this unit right now:
https://www.ebay-kleinanzeigen.de/s-anzeige/keithley-2001-multimeter/1476348108-168-4806 (https://www.ebay-kleinanzeigen.de/s-anzeige/keithley-2001-multimeter/1476348108-168-4806)
4200€? :horse: Last week there was one for 3000€ (but in good condition...)
https://www.ebay.de/itm/Keithley-2001-Multimeter-with-leadset/383290498030?epid=1801719172&hash=item593de4dbee:g:dNcAAOSw7RteJpMG (https://www.ebay.de/itm/Keithley-2001-Multimeter-with-leadset/383290498030?epid=1801719172&hash=item593de4dbee:g:dNcAAOSw7RteJpMG)
Only 2699 USD.
https://de.tek.com/tektronix-and-keithley-digital-multimeter/2001-series-7%C2%BD-digit-multimeter-scanning (https://de.tek.com/tektronix-and-keithley-digital-multimeter/2001-series-7%C2%BD-digit-multimeter-scanning)
6,930 € - 7,740 € new?
eh??
-
That one has been on there for ages. I don't think this guy realises that that kind of money buys a DMM7510.
-
Many of the Keithley meters show avoidable noise on the 10-second time scale that is likely due to poor averaging. Chances are the 2001 is affected too, though I don't know for sure.
Here's how I figured out what K2001 is doing with the averaging:
I ran the instrument for several hours while collecting both raw readings and unfiltered, regular voltage readings over GPIB.
Then I've mapped every raw reading (blue) to a corresponding "postprocessed" GPIB sourced voltage reading.
Then I made a 3D plot with one axis for raw, one for regular and one axis for time.
When I looked at the resulting scatter-plot their algorithm has become clear as day.
Basically the plot consisted of short, about 1 minute long intervals, where readings aligned themselves into short but exact straight lines.
That means that for every minute of measurement the gain and offset coefficients used to convert raw readings into voltage remained constant and as it's a linear calculation the dots on the plot fell into straight lines.
So here's what's happening: for one minute they collect acal readings, about 35 of each (6.9 V, 0 V etc.). Then they average them and derive new gain and offset coefficients.
These new gain and offset are then immediately put into service, and they remain active for the next minute while the instrument collects the next batch of acal data. Rinse, repeat.
As a consequence of keeping gain and offset constant for an entire minute while the ADC drifts every which way, readings start to fall into a characteristic ramp-like pattern (because the gain and offset coefficients in use no longer correspond to the reality of the drifted-off ADC). The longer it goes, the further off base the readings become. At the end of the minute there's another jump (snapping back to a somewhat correct value) and a new ramp begins...
And you know what the stupidest thing is? Their digital filter, even at the maximum setting (100 readings at 2 per second = 50 seconds), does not cover their own averaging/AZ cycle (1 minute or so)! :palm:
I don't know, maybe to get quiet readings one needs to trigger readings by timer, once per few seconds - that would allow stretching the aperture of the digital filter to cover at least a couple of AZ cycles...
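For what it's worth, the ramp-and-snap behaviour described above is easy to reproduce in a toy simulation (all numbers are illustrative, not the K2001's actual constants): the ADC gain drifts smoothly, but the correction coefficient is only refreshed once per 60-sample "minute" from the previous block's acal data.

```python
import numpy as np

t = np.arange(600)                      # one reading per second for 10 minutes
adc_gain = 1.0 + 1e-5 * t               # slow, smooth gain drift (made-up rate)
raw = 10.0 * adc_gain                   # raw ADC result for a stable 10 V input

block = 60                              # coefficients refreshed once per "minute"
reported = np.empty(600)
for b in range(600 // block):
    # gain estimate comes from acal data averaged over the *previous* block
    prev = slice(max(b - 1, 0) * block, max(b, 1) * block)
    est_gain = adc_gain[prev].mean()
    cur = slice(b * block, (b + 1) * block)
    reported[cur] = raw[cur] / est_gain

# Within each block the error ramps up; at the block edge it snaps back down.
print(reported[119] - reported[60], reported[120] - reported[119])
```

The stale coefficient makes each block a little ramp, and the refresh at the block boundary produces the jump - exactly the sawtooth pattern visible in the captured data.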
-
Thanks for the effort of finding out about the averaging method used. :clap:
Averaging over about 1 minute makes sense. Normally, if everything is done right, this should give relatively low noise values for the gain. I would not expect much faster drift for the gain, though with noisy resistors (e.g. the NOMCA arrays, or worse, thick-film resistors) there can be multiplicative 1/f noise too. Normal thermal drift would be rather slow, so 1-minute steps sound OK for this.
However, for the offset, averaging over 1 minute is kind of a disaster, as it would add lots of 1/f noise from the amplifier and ADC. It would set the relevant frequency for the 1/f noise to some 0.01 Hz instead of the possible 1.5 or 15 Hz for 10 or 1 PLC. The added 1/f noise would be especially bad when using the BS amplifier (AD548), which has lots of 1/f noise.
The natural way would be to use the zero readings directly, 1:1. This way one can also avoid the delay between the zero and signal readings. Good averaging would be the zero reading before and after the signal, no more. Later digital filtering of the result (signal - zero) would also filter the zero reading - so no need for extra zero filtering. Filtering reduces the white noise a little but adds 1/f noise, so there is an optimum time window, which may well be in the 10-50 ms range - so maybe a thing for faster readings < 1 PLC.
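A quick toy model of that point (purely illustrative numbers, with the drifting offset modelled as a random walk standing in for 1/f-like drift): subtracting a block-averaged zero leaves far more residual error than subtracting the zero reading taken right next to each signal reading.

```python
import numpy as np

rng = np.random.default_rng(1)

# Drifting ADC offset modelled as a random walk (stand-in for 1/f drift)
offset = np.cumsum(rng.normal(0.0, 1e-6, 6000))

sig_idx = np.arange(1, 6000, 2)         # signal readings at odd samples
zero_idx = sig_idx - 1                  # a zero reading just before each signal
readings = 10.0 + offset[sig_idx]       # stable 10 V input plus offset drift

# Strategy A (block average): one zero value per 60-reading block, held constant
zeros = offset[zero_idx]
block_zero = zeros.reshape(-1, 60).mean(axis=1).repeat(60)
err_block = readings - block_zero - 10.0

# Strategy B (bracketing): each reading corrected by its own adjacent zero
err_adjacent = readings - zeros - 10.0

print(err_block.std(), err_adjacent.std())
```

With the bracketing zero, only one step of the random walk survives between zero and signal; with the held block average, the whole within-block wander of the offset leaks into the readings.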
For some reason the early plot shows quite some drift in the voltage readings, possibly the ADC gain. So there may also be some source of gain drift, like high-TC resistors, in the hope that the real-time gain measurement can correct it. With so much drift one may have to shorten the interval and use something more like a running average instead of fixed blocks.
With the captured data one could in theory use a more sensible calculation to get a lower noise result.
-
So here's what's happening: for one minute they collect acal readings, about 35 of each (6.9 V, 0 V etc.). Then they average them and derive new gain and offset coefficients.
These new gain and offset are then immediately put into service, and they remain active for the next minute while the instrument collects the next batch of acal data. Rinse, repeat.
As a consequence of keeping gain and offset constant for an entire minute while the ADC drifts every which way, readings start to fall into a characteristic ramp-like pattern (because the gain and offset coefficients in use no longer correspond to the reality of the drifted-off ADC). The longer it goes, the further off base the readings become. At the end of the minute there's another jump (snapping back to a somewhat correct value) and a new ramp begins...
This is in agreement with the output data I measured during the warmup phase (when drifts are strongest). The input signal was a stable 10 V reference. See attachments for details.
-
The DMM7510 has something similar when warming up.
-
I would not expect much faster drift for the gain
Actually, there is much more drift in gain than in offset. You can even see it in the graph - I marked the "blue" zeros on the right - they've stabilized fairly well. Zero does not depend on the drift of the LM399, and it does not depend on drift in the divider. Moreover, the zero reading does not depend on drift of the ADC board's +Vref, simply because it's mirrored by -Vref, and when they drift together the middle stays the same.
But the gain does depend on +-Vref, and the LM399, and the divider...
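The mirrored-references argument can be illustrated with an idealized transfer function (a sketch, not the actual K2001 maths): a reading depends on the input's position between -Vref and +Vref, so a symmetric reference drift moves the span (gain) but not the midpoint (zero).

```python
def reading(vin, vplus, vminus):
    """Idealized ADC: result is the input's position between the
    two references (illustrative model only)."""
    vmid = 0.5 * (vplus + vminus)       # where "zero" sits
    half_span = 0.5 * (vplus - vminus)  # sets the gain
    return (vin - vmid) / half_span

nominal_zero = reading(0.0, 10.0, -10.0)
nominal_full = reading(10.0, 10.0, -10.0)

# Both references drift symmetrically by 100 ppm ("mirrored" drift)
d = 10.0 * 100e-6
drift_zero = reading(0.0, 10.0 + d, -10.0 - d)
drift_full = reading(10.0, 10.0 + d, -10.0 - d)

print(drift_zero - nominal_zero, drift_full - nominal_full)
```

The zero reading is completely unaffected, while the full-scale reading shifts by the full 100 ppm - which is exactly why the blue zeros settle while the gain keeps wandering.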
However for the offset averaging over 1 minute is kind of a disaster as it would add lots of 1/f noise from the amplifier and ADC. It would set the relevant frequency for the 1/f noise to some 0.01 Hz instead of the possible 1.5 or 15 Hz for 10 or 1 PLC. The added 1/f noise would be especially bad when using the BS-amplifier (AD548), that has lots of 1/f noise.
Yes, it adds some low-frequency noise. Although when I looked at the numbers I had a hard time (more like "wasn't able") separating the noise contributions of gain and offset. Little bastards are like yin and yang.
With the captured data one could in theory use a more sensible calculation to get a lower noise result.
Sure - there's no need to deal with the firmware's extra noise BS. Looking at the data, one can see that the instrument's drift is very smooth. In such a situation, for real-time work, a Kalman filter framework should give good results. Of course, if done in post-processing, one can use non-causal filters to get even more accurate readings.
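As a sketch of that Kalman idea (a minimal scalar filter with made-up noise figures, not a tuned model of this instrument): the true voltage is modelled as a slow random walk with process variance q, observed through white reading noise with variance r.

```python
import numpy as np

def kalman_1d(readings, q, r):
    """Minimal scalar Kalman filter: random-walk state, white measurement noise."""
    x = readings[0]                 # state estimate, seeded from the first reading
    p = r                           # estimate variance
    out = np.empty(len(readings))
    for i, z in enumerate(readings):
        p += q                      # predict: drift adds uncertainty
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new reading
        p *= (1.0 - k)
        out[i] = x
    return out

rng = np.random.default_rng(2)
truth = 10.0 + np.cumsum(rng.normal(0.0, 1e-7, 2000))   # smooth instrument drift
noisy = truth + rng.normal(0.0, 1e-4, 2000)             # white reading noise
smooth = kalman_1d(noisy, q=1e-14, r=1e-8)              # q, r matched to the toy noise
```

Because the drift really is smooth, a tiny process variance q lets the filter average hard without losing track of the slow wander - the same trade-off the firmware's fixed one-minute blocks handle so clumsily.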
Wish high-end instruments, just like high-end photo cameras, allowed access to raw readings without the need to solder taps in...
-
Chances are much of the offset drift is due to drift in the resistors: one pair sets the + to - 10 V ratio on the ADC board; the other pair is the 19+1 K and 40 K resistors at the integrator input.
The drift of the LM399 should not be very large; it is more like noise here. The ref. on the ADC board may show some drift, but this should not be so bad if things are done right - the zener used is supposed to be reference grade and low TC (< 50 ppm/K). With the right current it could even be tweaked to near-zero TC. For the gain, the resistors involved are one pair for the attenuator at the amplifier, the resistors at the input (19+1 K and 59 K) and the +-10 V amplification. So slightly more resistors than are involved in the offset - maybe twice as sensitive.
For separating the noise into an offset and a gain part, it may help to look at a zero-volt reading. This should have essentially no noise from the gain factor and the reference. The LM399 (and the 1N4877 too, though likely lower) noise would be a third contribution.
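In numbers, that separation could look like this (synthetic data with assumed noise levels, just to show the arithmetic): the zero readings give the offset-noise variance directly, and subtracting it from the full-scale reading variance leaves the gain-noise contribution.

```python
import numpy as np

rng = np.random.default_rng(3)

v_in = 10.0
sigma_offset = 2e-6          # assumed offset noise, volts (illustrative)
sigma_gain = 3e-7            # assumed relative gain noise (illustrative)

# Synthetic stand-ins for zero-input and full-scale readings
zero_rd = rng.normal(0.0, sigma_offset, 5000)
full_rd = v_in * (1.0 + rng.normal(0.0, sigma_gain, 5000)) \
          + rng.normal(0.0, sigma_offset, 5000)

var_offset = zero_rd.var()
var_total = full_rd.var()
# Gain noise is whatever variance the offset alone can't explain, scaled to the input
gain_noise = np.sqrt(max(var_total - var_offset, 0.0)) / v_in

print(gain_noise)
```

This only works because the gain and reference noise scale with the input while the offset noise doesn't - at zero volts the gain part drops out, which is exactly the point made above.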
I agree that it would be nice to have access to the raw data for post-processing, or just for diagnostics. However, the GPIB bandwidth is limited and memory was more expensive back then.
-
Sounds like a good opportunity for custom / modified firmware. : )
-
Given all this detailed digging into how the Keithley 2001 works, does anybody have a view on the best settings and tweaks to get measurements as stable as possible, voltnut style?
I just got one. I was well aware that it is considered noisy, but thought I could deal with that with long integration times and long averaging times. But it seems quite a bit of the "noise" is very low frequency, i.e. short-to-medium-term drift, so those do not solve the problem.
I get much more stable and useful measurements with my Fluke 8845A using statistics mode and watching the average, which gives a 7.5-digit reading, and with my 34401A.