Author Topic: Remote controlled DMM DCV INL tester based on voltage divider idea  (Read 9141 times)


Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #50 on: April 19, 2024, 04:13:26 pm »
I finished layout and assembled a board for the DAC + single ratio version. I wanted to get my money's worth for the LTZ1000A and DAC11001B I used, so I added a Howland-type current source with four ranges and a fourth order Butterworth for an AWG on separate outputs. Those are on switched rails to minimize power consumption and heat dissipation. I am hoping to get the firmware to a point where I can test it with my 3458 in a week or so, but I expect I'll need to run it for a month or so to get the LTZ1000 stable enough for optimal results.
 
The following users thanked this post: Echo88

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14246
  • Country: de
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #51 on: April 20, 2024, 08:36:00 am »
For just an INL test it should not take so long for the LTZ1000 to stabilize. The trickier part can be drift at other parts (op-amps, resistors and the DAC) from board stress and humidity changes. Ideally the INL test sequence would allow for at least some constant drift. The drift can be from the ref. source, but also the DUT. As an example, have a sequence like full voltage, upper half, lower half, maybe the upper half again and then full voltage again. One may want to repeat the tests multiple times anyway, especially if there can be popcorn-type noise, as this can cause outliers.
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #52 on: April 20, 2024, 04:06:17 pm »
Fair enough. I baked the board at 125 C for 60 minutes after reflow (but before populating the TH components) per the DAC11001B datasheet, but in my experience, it takes some days of on time for things to settle out.

It will be important to characterize the settling behavior of the source and DUT to determine the best way to order the data points while keeping the overall run time reasonable. If there is a non-linear component to settling or drift, that will not be cancelled effectively with an ABCCBA sequence. Also, ideally the sequence of DAC codes would be random (if there were no parasitic effects for the source to consider) because running everything in order would make time dependent gain drift of the DUT appear as INL. However, there will be some inevitable thermal tailing due to power dissipation in the 10k/10k divider that will increase the settling time for large code steps. As such, it will probably be necessary to check for settling or group codes into blocks that would be randomized in both placement and direction if settling time is excessive. Also, to the extent that a power dependent error term exists in the DUT, incomplete settling for the readings taken at a given code will lead to second and/or third order INL terms. The weight of these error terms will depend on whether the DUT input range is bipolar and to what extent the effect is on offset through e.g. parasitic thermocouples or gain through resistor ratio drift.

With those considerations in mind, it would be hubris to predict which run order would be best a priori without system feedback. So, to start off, I will run some tests to compare the results of different ways of structuring the run with my 3458. My gut feeling is the best algorithm will capture a number of data points for each code + switch position at, say, 10 NPLC and take a standard deviation of the x most recent of these. Once the standard deviation stops dropping, take the average of those x most recent and record that average and move to the next point. Some degree of randomization will almost certainly be necessary as will replicates for either the entire run or for outliers. I think that it will be wise to collect temperature data for both the DUT and the source at each point as well to be able to run a diagnostic fit of residuals against temperature (as well as the usual fit against order and checking for normality).
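The stopping rule described above (keep taking samples, watch the standard deviation of the most recent x readings, stop once it no longer drops, then record the average) might be sketched like this; `read_dmm` is a hypothetical stand-in for the instrument query, and the window size and sample cap are arbitrary placeholders:

```python
import statistics
from collections import deque

def settled_reading(read_dmm, window=8, max_samples=64):
    """Sample until the standard deviation of the last `window` readings
    stops dropping, then return (mean, stdev) of that window."""
    buf = deque(maxlen=window)
    best_sd = float("inf")
    for _ in range(max_samples):
        buf.append(read_dmm())
        if len(buf) < window:
            continue
        sd = statistics.stdev(buf)
        if sd >= best_sd:                 # scatter no longer improving
            return statistics.mean(buf), sd
        best_sd = sd
    return statistics.mean(buf), best_sd  # bail out after max_samples
```

A settling tail followed by a stable value will terminate the loop as soon as the rolling scatter bottoms out, without needing a fixed settling delay per point.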

For the polynomial fit of the sum and reversal errors, my plan is to work these up as one would a data set for multiple regression or design of experiments. Before the run, the number of points would need to be chosen to be sufficiently large to avoid over fitting the data based on the highest order INL term I plan to fit. Once I have the data, I'll run multiple regression and prune the least significant terms from the model until this starts to increase the standard error or adjusted r^2. I will probably use reversal error for the even-order terms and sum error for odd-order to limit the degrees of freedom on the sum error fit. Best practice (at least in process chemistry, which is where I would do this stuff and get paid for it) would be to run replicates for points above a certain threshold standard deviation from the model then generate a sample of random data points not included in the initial run to check their fit to the model to validate its predictive power. None of these can necessarily pick up systematic errors, which is why it will be important to handle that aspect separately through robust experimental design.
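As a rough illustration of the pruning workflow, here is a sketch of backward elimination over polynomial terms. This is not the actual workup: it uses residual standard error alone as the drop criterion (a simplification of the significance-based pruning described above), and the function name, data, and orders are placeholders:

```python
import numpy as np

def prune_poly_fit(x, y, max_order=7):
    """Fit y = sum(c_k * x^k) and backward-eliminate terms: repeatedly
    drop the single term whose removal lowers the residual standard
    error, stopping when no removal helps."""
    def fit(powers):
        A = np.vander(x, N=max_order + 1, increasing=True)[:, powers]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        dof = max(len(x) - len(powers), 1)
        return coef, np.sqrt(resid @ resid / dof)
    powers = list(range(max_order + 1))
    coef, best = fit(powers)
    while len(powers) > 1:
        # try dropping each remaining term; keep the best reduced model
        trials = [(fit([p for p in powers if p != d])[1], d) for d in powers]
        rse, drop = min(trials)
        if rse >= best:                  # pruning no longer helps
            break
        powers.remove(drop)
        coef, best = fit(powers)
    return powers, coef
```

With synthetic data dominated by first- and third-order terms, the significant powers survive the pruning while near-zero coefficients tend to get dropped.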

I will post the data I get once everything is up and running.
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #53 on: April 25, 2024, 02:47:48 pm »
I've got the core functionality of the design up and running, and everything seems reasonable at this point. The current draw at 16V is 140 mA, so battery life with 21700 cells should be 24h with margin. I am waiting on a trim resistor to get the +10V reference voltage to 10.48576 V to give a 10 uV LSB for the 10V range. The low voltage part is very quiet and has a range of +/- 100 mV (or less, depending on the reference SW position), so I should be able to also get INL tests for the low range of the 3458 and my NVM in the future.

I've included the results from the only real test I have run so far, including the raw data if anyone wants to play around with it. The DUT is my 3458A. I used 50 DAC codes evenly spaced in a random order. For each code, I measured voltage at each of the eight possible switch settings (DAC,GND; CT,GND; DAC,CT; CT,CT + the reverse). For each setting, I took 20 samples at 100 NPLC, kept the last 16, and recorded the average and standard deviation of those 16. I ran ACAL right before the run, but it's been a while since I did a short calibration, so null is a bit off. The total run time was about 7 hours.
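The per-code loop described here (eight switch settings, 20 samples, keep the last 16, record mean and standard deviation) might look roughly like this; `set_switches` and `read_dmm` are hypothetical instrument-control stand-ins:

```python
import statistics as st

# the four settings plus their reversals, as described above
SETTINGS = [("DAC", "GND"), ("CT", "GND"), ("DAC", "CT"), ("CT", "CT"),
            ("GND", "DAC"), ("GND", "CT"), ("CT", "DAC"), ("CT", "CT")]

def measure_code(set_switches, read_dmm, n=20, keep=16):
    """Visit each switch setting, take n samples, and record the mean
    and standard deviation of the last `keep` of them."""
    results = []
    for hi, lo in SETTINGS:
        set_switches(hi, lo)
        kept = [read_dmm() for _ in range(n)][-keep:]
        results.append(((hi, lo), st.mean(kept), st.stdev(kept)))
    return results
```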

For future tests, I will see if it is actually important to randomize the order of the codes. I suspect it is not because gain drift between codes would be second order effects that would get lost in the noise with any reasonably accurate DUT. I am also going to try random SW position order but maintaining the CT,CT settings as bookends. I do think this will be more consequential.

Standard deviations were reasonably good, averaging about 40 ppb of full scale at the ends and 11 ppb of full scale in the middle. With 16 samples, the total width of the 95% confidence interval is equal to the standard deviation, so getting sufficiently low scatter for moderate order polynomial fits at sub-ppm level should be feasible in 24 hours or less. There are reversal errors at the center tap voltages that are in excess of those measured between DAC and GND by several-fold (average is 3-4 uV). I don't know why these should exist other than that they may come from parasitic thermocouples between the leads of the switch. I will try compensating those for future runs by switching the DAC to bipolar references, running codes on either side of zero sequentially, and averaging the error terms for each. I think this will help to avoid confounding non-idealities in the source with DUT nonlinearity.
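The confidence-interval claim is just t-distribution arithmetic: the total width of a two-sided 95% CI on the mean of n = 16 samples is 2·t(0.975, 15)·s/√16 ≈ 1.07·s, i.e. roughly one sample standard deviation. A quick check, with the t critical value taken from standard tables:

```python
from math import sqrt

t_crit = 2.131     # two-sided 95% t critical value, 15 degrees of freedom
n = 16
width = 2 * t_crit / sqrt(n)   # total CI width in units of the sample stdev s
print(width)                   # ~1.07, i.e. about one standard deviation
```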

Anyways, with the sum errors (corrected for offset at CT,CT), there are a couple points that I would probably discard or rerun for a fit, but there is a clear shape to the curve. I subtracted out reversal errors for each voltage in the sum to get "corrected" sum errors. This makes the plot symmetric about the origin, but it's not far off from that uncorrected (less the offset, of course). This correction should cancel out the even-order errors in both source and DUT, which is fine because we can determine the even order terms in the transfer function from the reversal error fit. Just by looking at the corrected sum error plot, it seems the dominant term is fifth order with nonlinearity errors topping out around 50 ppb. It will be interesting to see if this shape remains with the bipolar compensation scheme I mentioned. The magnitude of the reversal errors is smaller, and without doing the analysis, my gut feeling is that the experimental power is probably not sufficient to put any even order error terms in an INL model. Overall, I am optimistic that this source could be used to characterize INL down to at least the 0.1 ppm level if the transfer function is well-described by a polynomial fit without too many terms.

I also modeled the DAC linearity error, but this isn't really a good way of measuring it because of the amount of time spent measuring other things and the greater opportunity for drift. Here I saw maximum deviations from linearity of about 22 uV, so 2.2 ppm. I did a quicker run before this, and it was within the DAC11001B's +/- 1 ppm spec.
 
The following users thanked this post: splin

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14246
  • Country: de
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #54 on: April 25, 2024, 04:35:34 pm »
For the DAC error the drift of the reference / gain at both the DAC and meter can definitely be an issue. This may especially affect the first point, when the system is possibly not yet fully settled. It still does not look that bad, except for a few outliers.

For the sum tests the drift is far less critical. It is mainly about the drift during the 8 readings. One may also just run the test with only the 4 readings that belong together, and maybe then repeat the same 4 values 2 or 3 times. The other polarity is more like a different test. The combination gets 3 tests from 8 readings instead of 1 test from 4 readings. So it could save some time.

Some of the turn-over error can come from an offset at the meter, e.g. from thermal EMF at the terminals / relays used for the reversal. One gets 2x the meter's offset as turn-over error.
One may get around some of this if the turn-over test includes a zero reading from the same switches as used for the source reversal.
How is the reversal done exactly? Is it just the sum of 2 readings, or is it 4 readings including 2 zeros? Including the zero readings would compensate for the meter's offset.

I have not yet looked at the Excel file - maybe it tells the missing details on what is actually shown.

edit:
I looked at the Excel file: it shows that the result for the sum error is from 4 readings. It is really surprising to get that much of an error. This makes me question whether a different range is used for the zero readings, or something else is going wrong. I would not expect that much of an error in the sum, and also not an essentially constant value.

The turn-over error also uses 2 zero readings, but these are zero readings at CT,CT. This would only correct the offset at the DMM, but add the errors from the relays at the tester. The right type of zero would be GND,GND and DAC,DAC. The factor of 0.5 is already included.

The corrected sum error looks odd, and for some odd reason it adds up to near zero. I don't understand the idea behind the correction - the compensation is suspiciously good and also super symmetric (not even noise there) for positive and negative, like it is really removing most of the real data.
« Last Edit: April 25, 2024, 06:15:34 pm by Kleinstein »
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #55 on: April 25, 2024, 05:24:12 pm »
The reversal error is calculated as (V(DAC,GND) - V(OS1) + V(GND,DAC) - V(OS2))/2 to give an equivalence between the magnitude of it and that of offset.

The 8 readings are taken together so that the inverse polarity datapoints are done as near in time to the normal polarity ones as possible. This should minimize the error contribution of various drift terms in calculating reversal error.

I should note for anyone looking at the spreadsheet that the headings for the reversal error columns correspond to the binary code of the individual SPDT SW positions. They are in the same order as the first three columns of readings.

The issue I noted with the reversal error for the center tap voltages is that they are about 7x higher on average than the reversal errors for the DAC voltage itself. The averages for the CT reversals are -4.16 and -3.99 uV while those for the DAC are -0.58 uV.  The center tap is buffered, so the output impedance should be the same. I cannot think of a way that this would be from the DUT, but I may be missing something. I am not worried about reversal error for the center tap voltage per se, but if it's indicative of an error with the source that is code dependent, that could lead to spurious conclusions about the higher order coefficients. The good news is that what I would consider the most likely code dependent effect - power dissipation of the nearby divider resistor leading to parasitic thermocouple voltages proportional to P(divider) would be second order. The shape of the CT reversal errors vs. DAC voltage looks very slightly parabolic, but you might have trouble proving that in a court of law.

As for the overall structure of the test, I am trying the option of averaging the results from multiple runs with the individual points taken at 10 NPLC rather than 100. This puts measurements of the individual components for calculating the error terms closer in time. We will see if it helps.
 
The following users thanked this post: Kleinstein

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #56 on: April 29, 2024, 03:43:57 pm »
First off, I got to the bottom of the anomalously high turnover error for the center tap voltages. I had unwittingly connected circuit GND to the shield of the USB cable via the SMA jack and the case. Things are much better behaved after using some polyimide tape to insulate the jack and connecting the USB cable through an isolator.

To address Kleinstein's comments, the "corrected sum error" values came out to near zero seemingly by coincidence for that test. It is an interesting sum because it ends up cancelling out the even-order errors from the data. However, with the source actually floating now, it is noisier than the regular sum error. There is no option to short the meter at a common mode potential relative to circuit ground other than that of the center tap because the output switching is handled by four SPDT switches (a TMUX7234). The relays on the board are actually range switching for the Howland current source. The short nulls out the TEMF-related offsets back to the common terminal of the first switches. It is not clear to me that anything would be gained by taking short readings at multiple bias voltages unless there is considerable noise from the center tap buffer. The meter would be seeing the exact same impedance between its terminals for a short at any tap, and there is no effect from bias voltage on the short voltage in the data I have captured. The range for the tests was set manually, and it is the 10V range the whole way through.

I have been collecting slopes from the data points for each measurement in the test, and I noticed that with a randomized code order, there is a relationship between slope and code. Moreover, when I use bipolar references for the DAC and take readings on either side of zero sequentially, the second of these has a smaller standard deviation. To try to keep the settling better behaved, I started running the codes in sequence. This does help reduce the standard deviation a bit, but it is now impossible to separate drift in time from code-dependent drift, so I may look at some alternatives.

The weakest point in this strategy is probably the reliance on polynomial regression. With a dataset I gathered at 10 NPLC with 50 points and 8 replicates each, going above fifth order fits was dicey. Seventh order was possibly OK except at the edges, but ninth was not well behaved. I have been experimenting with fitting cubic splines, and this is definitely better near the edges.

I don't know if it is possible to analytically derive the transfer function from such a fit, but I have been testing some algorithmic approaches to converge on a transfer function that minimizes deviation from the turnover error fit spline and the sum error fit spline. This is doable, but it tends to be poorly behaved around zero, which has significant effects on the other points in the curve. I may also try fitting a polynomial to the data near zero, as high order terms will not be important, and calculating out from the fitted curve.

The sum error formula is agnostic about what happens on the other side of the origin, and the turnover error data do not constrain the difference between two points on opposite sides of the y axis. That means just trying to derive the transfer function through recursion is subject to compounding errors. The effect of fitting errors on the slope of the transfer function is greatest near zero because the sum error is effectively the slope of the error of the transfer function between the total voltage and the center tap voltage (assuming a ratio of 1/2). As the function approaches zero, the divisor defining this slope gets small, so the impact of the errors gets large.

This is a work in progress, and the results would need to be validated against simulated data with various polynomial and non-polynomial transfer functions to gain confidence in the technique. I expect that this general approach will ultimately yield the best way of processing the data.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14246
  • Country: de
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #57 on: April 29, 2024, 04:11:12 pm »
The idea of doing the zero readings with 2 different switch settings is to also include a possible offset error (thermal EMF) from the switches. With a careful layout this offset may be small. So only the zero reading at the center would not include the offset from the switches. For testing one ±10 or ±5 V range this system can be OK.

It would be mainly for the low voltage test that a slightly different switch configuration would have been better. This could still be the SPDT switches, just in a different configuration, so that the first switches are used to select the voltages for the test and the 2nd set of switches is used for the 4 steps in a sequence.
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #58 on: April 30, 2024, 06:41:33 pm »
There is actually a much better method of using the data to generate a transfer function than I initially thought. It's pretty simple. You generate the corrected sum error by subtracting the turnover error for each tap. This removes the even order errors and gives you a locus of points that is symmetric about the origin. You can prove this algebraically very straightforwardly. Similarly, each value you get for turnover error is equally applicable to the negative and positive voltages used in the sum, so this locus of points is symmetric about the y axis. So now, for each point in the transfer function you have two orthogonal constraints:

(1) E_t(x) = (E(x)+E(-x))/2
(2) E_s(x) = E(x) - E(-x)

Where E_t(x) is the turnover error at x, E_s(x) is the corrected sum error at x, and E(x) is the transfer function error. Thus:

(3) E(x) = (2*E_t(x) + E_s(x))/2

At this point you can fit the transfer function however you like if the point is to correct for nonlinearity. There are some real advantages to this. First, there is no need for the transfer function or either error function to be well behaved, because the eight measurements taken at each DAC code are sufficient to define a point on the transfer function. Second, zero and first order terms can drift as much as they like between codes; as long as they are stable within the time needed to take the measurements for a given code, that is fine. Any drift term would be multiplied by the linearity error, which would be an imperceptible error with any reasonably linear and stable DUT. Third, this really simplifies uncertainty analysis. In principle, you could just run multiple sweeps and directly calculate confidence intervals at any point.
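Since the relations (1)-(3) above are purely algebraic, they can be sanity-checked numerically with an arbitrary error function; a minimal sketch:

```python
def E_t(E, x):            # turnover error: (E(x) + E(-x)) / 2
    return (E(x) + E(-x)) / 2

def E_s(E, x):            # corrected sum error: E(x) - E(-x)
    return E(x) - E(-x)

def E_rec(E, x):          # reconstruction: (2*E_t(x) + E_s(x)) / 2
    return (2 * E_t(E, x) + E_s(E, x)) / 2

# arbitrary error function containing both even- and odd-order terms
E = lambda v: 3e-7 * v ** 2 - 5e-8 * v ** 3 + 2e-8 * v ** 5
for v in (0.1, 1.0, 7.5):
    assert abs(E_rec(E, v) - E(v)) < 1e-15
```

The identity holds for any E, since the turnover error isolates the even part and the corrected sum error twice the odd part.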

I attached a plot of the linearity error for the 3458A calculated by this method as an example. Some of the data collection was suboptimal, but the spread is generally pretty tight either way. All the data were taken by averaging 16 points taken at 100 NPLC. I took the initially calculated points and removed the slope and offset with a linear regression to make it easier to interpret. I am looking at ways of making sure the turnover errors are consistent because that actually has the most scatter at this point. I included the plots for each. The vertical axis is microvolts. The blue points are directly calculated and the orange are reflected. Each point is derived by averaging the data from two DAC codes on opposite sides of 0 V (approximately) at opposite turnover switch positions. The turnover error is still susceptible to offsets coming from even-order harmonic distortion or rectification of AC components of the signal, but any DC error in the source is pretty well compensated.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14246
  • Country: de
Re: Remote controlled DMM DCV INL tester based on voltage divider idea
« Reply #59 on: April 30, 2024, 08:18:20 pm »
I am afraid things are not that simple. The turn-over part is simple, but the sum test works with 2 different voltages and thus 2 points of the INL curve.
The normal sum test uses 2 readings of about the same voltage, not a positive and a negative reading. The version with a negative and a positive reading at about half the voltage is also possible (I get that with my DVM comparing the more normal 10 V range with the special 20 V range that splits a voltage to +U/2 and -U/2).
The point is that the sum test also has the reading of the full voltage. It is only at full scale that one could argue that the full scale is used as the reference for the INL curve. So one could get the INL at half the full scale, but not at other voltages.
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
Ah, you are right. I made some simplifications when I was doing the math, and then I forgot that I had made them. The error of the transfer function at any given point is actually given by this sum:

E(v) = E_t(v) + sum(i = 0 to inf)( (2^i) * Esc(v / (2^i)) )

Where E_t(v) is the turnover error at v, and Esc(v) is the corrected sum error (as defined previously) at v. This can generate a linear component to the fitted error, which I just compensated by adding an appropriate linear term to ensure the slope between the endpoints was zero. How many terms one needs to calculate to get a good approximation depends upon the shape of the corrected sum function near zero. I did some trials with smoothed cubic spline fits to sequences of random numbers to make random  transfer functions, and four terms was good for simple transfer functions, while six to ten terms were necessary if things were all over the place (i.e., things that looked like 20+ order polynomials).
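A minimal sketch of the truncated sum, assuming E_t and Esc are available as fitted callables (hypothetical names); for a pure cubic corrected sum error the series converges to (4/3)·Esc, which gives a built-in self-check:

```python
def reconstruct(E_t, E_sc, v, terms=8):
    """E(v) ~ E_t(v) + sum_{i=0..terms-1} 2^i * E_sc(v / 2^i)."""
    return E_t(v) + sum((2 ** i) * E_sc(v / 2 ** i) for i in range(terms))

# self-check: for E_sc(x) = x^3 the series is x^3 * sum(4^-i) -> (4/3) x^3
val = reconstruct(lambda v: 0.0, lambda x: x ** 3, 1.0, terms=40)
assert abs(val - 4 / 3) < 1e-12
```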

I tried this procedure with the real data I had from the 3458. When I back calculated the sum error from the transfer function, this gave a linear residual. This should not be possible with a polynomial transfer function, though it could be possible with terms on the order of x log x. Because the area around zero was leading to a large portion of the calculated non-linearity, I tried multiplying the interpolating spline by ( (2/pi) * arctan(a*v) )^2, where a is a horizontal compression factor. I used a=20 so that the effect would be localized close to zero. I attached a plot showing the effect of this. Most of the deviation at the points closest to zero is actually due to the smoothing factor used in calculating the spline. This gave a residual sum error function with a slope of zero. That said, I don't know that this is a good correction to make because the steep slope near zero is a very repeatable observation, so it may be a real thing. Both the turnover error and the sum error indicate some large swings in the transfer function near zero that would be tough to pin on the source. Because the shape of the curve near zero is so influential, it is probably best to get more points in that region.

Anyways, it is possible to use the source, some minimal fitting, and a handful of iterations to derive an INL curve from the measurements, and it is looking good well below the ppm level.

 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14246
  • Country: de
The region near zero is a special point for the 3458. There are some more localized errors that can happen here, like the output cross-over error of the amplifier, and it is the region most affected by the capacitor DA. The 3458 ADC has an odd correction function (zero glitch jump with U181) that I don't really understand. I don't even know if it is active or disabled by the software anyway.

The infinite sum with a 2^i term looks suspicious and is likely not behaving well. It would hardly converge, or at least cause problems with diverging errors or numerical problems.
The sum test is likely not the best method to use near zero.
 

Online miro123

  • Regular Contributor
  • *
  • Posts: 207
  • Country: nl
E(v) = E_t(v) + sum(i = 0 to inf)( (2^i) * Esc(v / (2^i)) )
Sorry for my late response. I came to the same equation at the beginning of exploring the R-R divider idea.
The equation itself speaks to a fundamental limitation of the algorithm, even if it is quite easy to implement as a recursive software algorithm.
I have suspended any HW development until I'm satisfied with sim results.
I was playing with the idea of combining multiple Sum=0 dividers based on 1*R/1*R    n*R/m*R.
I have tried applying a Kalman filter to mitigate the disadvantages of the different resistor divider configurations. I'm still not satisfied with the results. There are many parameters, just to mention a few of them: error sensitivity, calibration point coverage, measurement time, sensitivity to temperature and time drifts.
I'm almost at the point of giving up on the idea of multiple divider ratios.
What is next - I'm considering exploring the initial idea from Echo88 and using a DAC + resistive string, or no DAC at all.

« Last Edit: Today at 10:07:22 am by miro123 »
 

Offline CurtisSeizert

  • Regular Contributor
  • *
  • Posts: 143
  • Country: us
This sum looks bad on the surface, but it is actually convergent for all polynomial transfer functions. It is pretty easy to convince yourself of why this would be. For a polynomial transfer function, the corrected sum error function has coefficients of 0 for the constant, linear, and quadratic terms. The kth term of the sum is (2^k) * E_sc(x / (2^k)), so if E_sc(x) = x^3, the term reduces to x^3 / (2^(2k)). If, however, there is a linear term, then the kth term just reduces to x, and the sum obviously diverges. This gives a simple test for whether the sum diverges: if you can approximate the corrected sum error function as a Maclaurin series for an arbitrarily small domain centered around zero, then the coefficient of the linear term of the Maclaurin expansion is the first derivative at zero. So, if the first derivative of the corrected sum error function at zero is zero, the sum will converge.

The thing is, if over some domain centered on zero you can approximate the transfer function error as a Maclaurin series, the corrected sum error will have a first derivative of zero at zero because it always has coefficients of zero for the constant, linear, and quadratic terms. Also, as you approach zero, the cubic term in the corrected sum error is dominant, and for E_sc(x) = x^3, this sum equals (4/3)*E_sc(x). In fact, for any polynomial term Ax^n in E_sc(x), the sum will be equal to A*(2^(n-1))/(2^(n-1) - 1)*x^n. So you can do a polynomial fit over part of the domain of the sum error function and use that to avoid infinite recursion.
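That closed form is easy to check numerically: a sufficiently deep partial sum of sum(2^k * A * (x/2^k)^n) should match A*(2^(n-1))/(2^(n-1) - 1)*x^n for odd n >= 3 (a quick sketch, not a proof):

```python
def partial_sum(A, n, x, terms=60):
    # partial sum of sum_{k>=0} 2^k * A * (x / 2^k)^n
    return sum((2 ** k) * A * (x / 2 ** k) ** n for k in range(terms))

# compare against the claimed closed form A * 2^(n-1)/(2^(n-1) - 1) * x^n
for n in (3, 5, 7):
    closed = 2 ** (n - 1) / (2 ** (n - 1) - 1) * 0.8 ** n
    assert abs(partial_sum(1.0, n, 0.8) - closed) < 1e-12
```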

I am looking at using a bipolar sum error to marginally reduce the sensitivity to error near zero. The error term is (V(DAC,GND) - V(GND,DAC)) - (V(DAC,CT) - V(CT,DAC) + V(CT,GND) - V(GND,CT)). I'll also take a closer look at the topology near zero. I should add that even with my spline-fitted sum error function without the correction factor, the shape of the transfer function converged around five terms in the sum, and I could not differentiate the plots between six and 20 terms by eye, which is good enough when the limits of the vertical axes are less than 200 ppb fs.

Quote from: miro123
E(v) = E_t(v) + sum(i = 0 to inf)( (2^i) * Esc(v / (2^i)) )
Sorry for my late response. I came to the same equation at the beginning of exploring the R-R divider idea.
The equation itself speaks to a fundamental limitation of the algorithm, even if it is quite easy to implement as a recursive software algorithm.
I have suspended any HW development until I'm satisfied with sim results.
I was playing with the idea of combining multiple Sum=0 dividers based on 1*R/1*R    n*R/m*R.
I have tried applying a Kalman filter to mitigate the disadvantages of the different resistor divider configurations. I'm still not satisfied with the results. There are many parameters, just to mention a few of them: error sensitivity, calibration point coverage, measurement time, sensitivity to temperature and time drifts.
I'm almost at the point of giving up on the idea of multiple divider ratios.
What is next - I'm considering exploring the initial idea from Echo88 and using a DAC + resistive string, or no DAC at all.


With regards to the ideal topology for a source like this, I haven't compared the math between this and the string DAC, but I can say for noise, the limiting factor is definitely the DUT with the design I am using. Rod White at New Zealand's NMI has published some work about using a similar principle but with various series and parallel combinations of four resistors to measure linearity errors of bridges for resistance thermometry down to the 100 ppb level, and this might be worth checking out for ideas.

Developing the HW for this was really not that much work; I think the total time I spent was around a week. I believe that having a prototype, potentially with the ability to implement different methods of testing linearity, is probably going to yield more productive results at a certain point. Given the simplicity of the actual schemes, having tested this board, I would say the best way of approaching the design is to implement all of them on one board. Actually, if you just feed the string DAC with an IC DAC, that gives you everything you need, and you can test all three of those possibilities. Just add a handful of muxes, an MCU with isolated UART to USB, and a reference, and that's it. If I were going to make this again, I would probably also include something where the DAC voltage bootstraps a, say, 2V048 or 2V5 reference, which feeds a divider between itself and the DAC voltage. Then you could calculate the sum error by measuring the bias voltage and the other components of the sum, so you would be getting something like 8, 9, 1, and 10 V for bias, the two center-tap readings, and the total, respectively. This would be useful to be able to probe the average concavity around zero without needing to use points in the sum that are spaced very close together. It would give similar information to that available with the string DAC but with better resolution. Imperfect CMRR for the bootstrapped reference would just give horizontal scale compression. Whatever your design, I would recommend being able to bias with bipolar references to cancel out residual thermocouple errors. Also, the DAC11001 is overkill, but that's not really news.
 

