Electronics > Metrology

DIY high resolution multi-slope converter


I did a more basic INL test. The idea is to check whether the sum of 2 voltages actually reads as the sum. As an example, take a 4 V and a 5 V source in series and check if 4 + 5 really reads as 9. In my case the sum is fixed at some 9.4 V and a few different points in between are used.

For the connections this uses the 2 inputs of the DVM board (switched in software) and 1 external mechanical switch. The 2 reference voltages are generated from a single reference source with a divider - the same as used with the AVR based board before. In addition to the 3 readings there is 1 more zero reading. This way each DVM input and each setting of the switch is used twice, so offset errors at the inputs or at the switch should cancel out. With everything stable this test is relatively easy. However there is the complication that the LM399 references used at the DVM and for the external reference circuit are not that stable, but show popcorn noise. So there can be jumps of some 0.5 ppm on either side. As known, the external ref. is a bit noisy.
When trying to resolve an error in the 0.1 to 1 ppm range such a jump can cause a significant error. The idea is to get cycles of the 4 measurements with a stable reference.
With a more manual process (switching and selecting data by hand) brute force averaging is not so easy, but it would be a theoretical option with a little more automation.

Attached is the result and a curve showing some raw readings. To get the data all on one screen the data are reduced to a little more than the last 2 digits (the digits further up do sum up correctly). So one can do the math by looking only at the end (100 nV resolution, 20 µV wrap around). The right scale / green symbols give the coarse voltage to show which step it is.

The known mechanism to cause such an error (a voltage contribution proportional to U³) is self heating of the resistor network from the input current. For comparison I did a short test of the TC of the ADC gain: heat up the board to some 40 C and then, on cool down, record the board temperature and the ADC gain (reading its own ref.). The resulting TC is surprisingly low: ~0.4 ppm drop in gain for some 4 K of temperature drop, i.e. about 0.1 ppm/K. I think I got lucky with the resistors this time (the AVR version was more like 0.5 ppm/K). This is still the combination of the 2 resistor networks, so no direct comparison to the nonlinear effect. Testing the networks separately was a bit tricky: the 50 K network showed the same positive sign, fitting the sign of the nonlinear effect (more gain with a higher voltage). The 10 K network for the reference reacted to temperature gradients too and could show positive and negative effects. There is also some mechanical effect on the resistors: bending the board can change the ADC gain. So the relative TC of the 50 K network may be a little larger than the total 0.1 ppm/K gain TC.

First quick turn-over tests showed pretty low errors - I may have to test a little more patiently.
However the results so far do not really add up with the comparison of the 10 V range (more normal 0 and signal AZ mode) and the 20 V range (differential U/2 and -U/2 signal to the ADC). This comparison does show a higher error. So there still seems to be some additional error I don't understand so far.

Changed slope amp C15 from 22pF to 44pF - no difference in noise with short
Operation from battery - no difference in noise with short

Seems I found the magic denoise button  :-/O
Noise is now down to 100 / 110 nVacrms (short / ref @ 1 PLC & 5 min) and the values are now very close to where they should be:

Magic parameter in Pascal program:

--- Code: ---xd = 1000*3;                 // extra  delay 3x xdel in ASM code
--- End code ---

I went a bit crazy with 1000*3; formerly it was set to 0*3 (Kleinstein seems to use 12*3).

In most of the later data files the raw ADC readings look good, so there is little residual noise for the 2 consecutive readings (e.g. columns 10 and 17) of the µC internal ADC. The version with the slower ADC clock still seems to have some problem (more noise and the last column is always 0). So one can probably go back to the faster ADC clock.

The comparison of the noise for the different modulation speeds shows quite some difference: for the cases P, Q and W (double, normal and half speed) I get noise of 2.9 µV, 2 µV and 1.4 µV respectively. So there is quite some noise related to the switching / jitter.

One could try to slow down the modulation, e.g. to make the normal case about like the slower W mode, by increasing the xdelay constant in the ASM program (e.g. from 12 to something like 40). The slightly higher clock (16 vs. 12 MHz) makes it run a bit faster anyway. The integration cap is still large enough for this.
The other point would be trying to find the actual jitter source. The main candidates are the oscillator, the HC74 and the LV4053. This could be the chip itself, or its supply / decoupling. For finding the weak point on the HW side the faster modulation would be an advantage. Normally the HC74 and LV4053 should not be so bad, unless their supply is unstable. Trouble with the clock decoupling would likely also be visible in the INL test via the difference test (B).

A very short xdelay (in the ASM program) could explain a little, though I still think there is more jitter than there needs to be. A value of 1000 is likely way too high and may drive parts into saturation. The upper limit is likely around 63, as there may be some 4*xdelay that has to fit in 1 byte. The numbers in the ASM and Pascal programs also have to match!

A noise of only 100 nV would be too good to be true. The Johnson noise of the resistors alone should contribute about 300 nV. The best noise I got with the AVR version was with slow modulation, at some 420 nV. With faster modulation the noise is more like 500 nV. With the ARM version and a slightly slower modulation I get down to 360 nV, which I would consider good enough and better than hoped for.

I looked at the raw data: there is still some scatter in the raw results; it is just that the math for the 7 V ref reading is way off, and this then divides down the result so much. With the slower ADC clock the readings also show more scatter. The values for the K1 and K2 factors should normally be relatively stable. So no real change is needed unless there is some HW change or a very different temperature. If K2 changes with the setting of the trimmer (no need to be strictly in the center of the ADC range, just avoid hitting the bounds) this would indicate going too high in the residual voltage. I don't exactly know how the 2 transistors behave.

2nd edit:
If the ASM code still had xdelay=12 and the Pascal code had 0, then the scaling was wrong (and there were massive INL/DNL errors) before too. So the actual noise may already be better by about a factor of 2, and the apparent 1.4 µV noise would be more like 700 nV for the slow mode. Still not very good, but already useful. One can get a quick check of the linearity by watching a capacitor discharge; an error in the size of the runup steps is quite obvious.

You always need to make compromises, so it's the button to adjust noise vs. INL  :-DD

Meanwhile the board got a different XO (cheap Reichelt) and the noise got a lot worse - around 2x (4 µVacrms vs. 2 µVacrms with the prior XO).
Seems the XO has quite a high influence on the noise - @Kleinstein, which one do you use?

The FW always had delay 12, but I missed changing it in the Pascal program :palm:
Did a comparison of delay 12 & 0 in the program with the cheap XO, and as proposed the noise with delay 0 is around 2x (4 µVacrms vs. 8.1 µVacrms) - this applies to the difference to the expected value as well (5.3 µV vs. 10.3 µV for short).

One thing I want to try is to have more delay after the DG408 input mux switching, but I have no idea where or what to change.

--- Quote from: Kleinstein on September 25, 2021, 03:52:21 pm ---The Q2 and Q8 part at the NE5534 is a bit odd. I did not have great experience with transistors as diodes, and for the ARM version I use 3 diodes that work quite well. The BAS21 seems to be a good compromise between leakage and recovery time for the 1 critical diode. The other 2 can be simple fast ones (e.g. 1N4148 / BAV99). With just 3x 1N4148 the weak point is drift of the gain (likely temperature dependent leakage). With a transistor instead of 1 diode the problem is a relatively large drift of the DC level / trimmer position, likely from slow recovery. With a relatively low gain for the final amplifier the drift of the DC level may still be acceptable.

--- End quote ---

BAS21 diodes are in the pipeline (along with some other components) to try out.

