Author Topic: Software simulation of a voltage reference-- noise, tempco, long term drift?  (Read 5528 times)


Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
Hi folks,
I am trying to make a simple java software simulation of a group of voltage references from different manufacturers. This is just for fun, btw. The long-term goal is to make sure that software I eventually program into an Arduino will actually handle the cases that might occur. Since we are talking about over 1000 hours of continuous operation, a simulation seems wise  ;)

So, ideally, I would like to create a class like this:
VoltageReference max6350( 5.00055, /* measured voltage at, say, 23C */
                          1ppm,    /* temperature coefficient in parts per million */
                          1.5uV);  /* low-frequency voltage noise in microvolts p-p, 0.1 Hz ≤ f ≤ 10 Hz */
Then something like
  double value = max6350.next(temperature); /* expected value given temperature */

The returned value should take the temperature into account and include a pink-noise contribution of the magnitude specified at object creation.
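To make that concrete, here is a rough first-cut Java sketch of what I have in mind. The names are placeholders, plain white Gaussian noise stands in for the 0.1-10 Hz pink noise until I write a proper 1/f generator, and the p-p spec is converted to a sigma assuming roughly 6 sigma per p-p:

import java.util.Random;

// Rough sketch only: white Gaussian noise stands in for the pink noise,
// and long-term drift is not modeled yet.
class VoltageReference {
    private final double nominalVolts;    // measured output at the reference temperature
    private final double tempcoPpmPerC;   // temperature coefficient, ppm/degC
    private final double noiseSigmaVolts; // 1-sigma noise derived from the p-p spec
    private final double refTempC = 23.0;
    private final Random rng = new Random();

    VoltageReference(double nominalVolts, double tempcoPpmPerC, double noisePkPkVolts) {
        this.nominalVolts = nominalVolts;
        this.tempcoPpmPerC = tempcoPpmPerC;
        this.noiseSigmaVolts = noisePkPkVolts / 6.0;  // p-p of Gaussian noise is roughly 6 sigma
    }

    /** Expected output at the given temperature, plus one noise sample. */
    double next(double temperatureC) {
        double tempError = nominalVolts * tempcoPpmPerC * 1e-6 * (temperatureC - refTempC);
        return nominalVolts + tempError + noiseSigmaVolts * rng.nextGaussian();
    }
}

// Usage:
//   VoltageReference max6350 = new VoltageReference(5.00055, 1.0, 1.5e-6);
//   double value = max6350.next(25.0);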

 I'm not sure what to do about long term drift. How is this modeled, typically?

Anyone know of software already out there? C, C++ also acceptable.

If anyone is interested I can open source whatever I come up with, if nothing is available.

Thanks,
Randy
 

Offline TiN

  • Super Contributor
  • ***
  • Posts: 4543
  • Country: ua
    • xDevs.com
The general rule is that long-term stability can't be simulated, due to the many different factors that are hard to predict (temperature effects, aging effects, stress relaxation, seasonal effects, humidity variations, orientation and the RFI/EMI environment). You can estimate it from previously collected history, but that would still be a rough estimate.

As for tempco and noise - that would be interesting to see and play with. I have tens of megabytes of saved data from various meters and references, which I can provide for your data crunching and testing, if you like.
YouTube | Metrology IRC Chat room | Let's share T&M documentation? Upload! No upload limits for firmwares, photos, files.
 
The following users thanked this post: MisterDiodes, RandallMcRee

Offline MisterDiodes

  • Frequent Contributor
  • **
  • Posts: 457
  • Country: us
Just a thought from someone who's attempted simulating precision analog circuits before:

Simulating a precision Vref in a way that yields meaningful data would be very difficult, at least in the ppm world.  For instance, -every- LTZ / LM399 (and most other Vrefs) has its own drift personality depending on where it was cleaved out of the mother wafer - and on top of that it is really a power-in vs. power-out device.  A steady voltage output is just a by-product of everything else going on, power-flow wise, when it reaches thermal equilibrium.  So you'd have to take in an endless array of thermal flow parameters, most of which wouldn't really be measurable for a sim.  Then add resistors, and how they behave thermally on a particular board, power cycle effects, thermal cycle effects, duty cycle, board stresses and on and on.

And then add the enclosure, ambient surroundings, board insulation, coatings, etc.

And for instance: what output voltage are you going to model for LTZ1000?  No two are the same.

In short:  Everything around the Vref is going to influence the final outcome and 1/f noise effects, and every Vref chip is going to age a different way.  A board TiN builds won't act at all like a board I build, for instance.

By the time you've tried to model all that, you're much better off building the circuit your way and running / studying it for a few years... any simulation probably wouldn't tell you much across board designs.

That's why you'll never see a true simulation of precision Vrefs like the LTZ in Spice / LTSpice.  In fact, when you get down to it, LTSpice itself is only good for circuit 'estimates' - it doesn't really take into account any true switching effects, true noise on power rails or op-amp inputs, etc.  For instance, in order to finish a simulation quickly, any switcher power supplies, AZ op-amps, etc. are all just distilled down to estimated short-cuts of the real system.  It's also not going to tell you the difference in noise between PWW, bulk film, or thin / thick film resistors, etc.  And lots of other short-cuts to get a final answer.

You'll learn quickly when you build your first real Vref circuit: "Where Theory Ends and Reality Begins".  Then you build the second circuit, or ten or twenty, and let me know if any digital simulation predicted any of them for long-term drift rate, noise, etc.  With experience and patience you WILL have an -idea- of the final ballpark result just by looking at Vref spec sheet parameters, but everything else is application-specific to -your- board / enclosure / circuit architecture / temperature span / spec requirements etc.

There really isn't a good substitute for lab time on precision circuits.  Good Vref's are pretty much a "Roll up your sleeves and get your wallet out" project at any level - especially when you get to the < 5ppm area.
« Last Edit: May 02, 2017, 05:34:41 pm by MisterDiodes »
 
The following users thanked this post: TiN, Edwin G. Pettis

Offline chris_11

  • Contributor
  • Posts: 48
  • Country: de
That boils down to quantum physics versus classical physics. You can simulate the behaviour, e.g. in Spice, of even an LTZ1000 with good results. What you cannot predict or simulate is the behaviour of an individual instance of that circuit, even if all starting conditions are known. A roulette ball follows Newton's laws, yet you cannot predict the number that comes up on each spin. The same goes for noise, which is the result of the "statistical" thermal movement of matter. Drift is the DC end of noise: 1/f, shot, popcorn and whatever else fun physics has in store.

So you can predict (simulate) the interference pattern of a multitude of electrons or photons. When it comes down to the individual electron or photon, the uncertainty is down to Heisenberg('s cat, SCNR).
The same goes for the long-term drift/stability of an individual part or circuit.

There is a reason you have to cool the noise, i.e. the thermal energy, out of the system to make a JJ (Josephson junction). So I don't think we will see that room-temperature JJ-stabilised desktop 3458B with a few ppb of drift and noise in our lifetime.

Christian
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21720
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Okay fine sure, but what's the point?

Why do you think you need a "simulation" of this, and, what are you really going to get from it, anyway?  A bunch of colorful squiggles?  What is the ultimate result?

It seems like a very inexperienced approach (and, well, mentioning Arduino isn't helping on that matter, either).  One with experience will realize: yes, all these poorly-defined things will happen over time, yadda yadda yadda, but in the end, the datasheet must be right: the parameter is within specified bounds (whether statistical or worst-case) after the specified conditions.  That, or the manufacturer is full of shit, and isn't upholding their end of the purchase agreement.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: MisterDiodes

Offline MisterDiodes

  • Frequent Contributor
  • **
  • Posts: 457
  • Country: us
You can simulate the behaviour, e.g. in Spice, of even an LTZ1000 with good results.

Really??  For tempco, noise and drift??  You should let Linear Technology / Fluke / Keysight know - maybe it could be added to LTSpice... :)  That would be a trick, seeing how no two Vrefs are alike in the PPM world, except within the tolerance limits you already know.

For this good Spice sim, out of curiosity - How do you specify thermal flow at dozens of nodes?  Copper weight on PC board effects? Power Cycle / Duty cycle and history effects? Temperature variation history and aging?  PWW vs Film resistors? Which resistors?  Air draft and ambient effects? How are components mounted? External stress effects, etc.??

Here's a good one:  Does the sim take into account which way the Vref is oriented to Earth?  How does gravity affect the thermal situation? 

In the PPM world, you learn to think about and anticipate these effects after you build some real references.

You just have to go with the correct ballpark boundaries as given in the datasheets, as pointed out in other posts.  The circuits are well known, tested and verified (some over decades of time and millions of hours of run time and measurement data history, especially in 3458A's and similar).  A purchased device will run within the boundaries spec'd on the datasheet - otherwise the company won't be selling them very long.

Your board design / real situation will probably bring the Vref performance down a bit from there.

And then you build circuits and test them in -your application- to get the best test data and adjustment for -your- system.

I really doubt any Vref simulation is going to bring anything useful to the table.
 
The following users thanked this post: Edwin G. Pettis

Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
Well, yes I admit to being inexperienced. I *do* have a background in software so turning to simulation to answer questions seems natural to me. Fortunately, I do actually understand what a simulation can and cannot do.

For example, I simulated two voltage references, averaged them together and "saw" that the noise of both together is 0.707 times that of each separately. So the simulation works.
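For anyone curious, that check was essentially the throwaway sketch below; the 1.5 uV RMS figure per reference is made up, not from any datasheet:

import java.util.Random;

// Averaging two uncorrelated noise sources should give ~0.707x the RMS of one alone.
public class AverageNoiseCheck {
    public static void main(String[] args) {
        Random rng = new Random();
        int n = 1_000_000;
        double sigma = 1.5e-6;            // made-up 1.5 uV RMS per reference
        double sumSqSingle = 0, sumSqAvg = 0;
        for (int i = 0; i < n; i++) {
            double a = sigma * rng.nextGaussian();
            double b = sigma * rng.nextGaussian();
            sumSqSingle += a * a;
            sumSqAvg += ((a + b) / 2.0) * ((a + b) / 2.0);
        }
        double rmsSingle = Math.sqrt(sumSqSingle / n);
        double rmsAvg = Math.sqrt(sumSqAvg / n);
        System.out.printf("single: %.3e  averaged: %.3e  ratio: %.3f%n",
                rmsSingle, rmsAvg, rmsAvg / rmsSingle);   // ratio comes out near 0.707
    }
}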

I'm not trying to predict behaviour. I'm trying to make sure that a circuit I am working on has the correct software to deal with noise, temperature and long-term drift of the references. For example, one question I think the simulation should be able to answer is: what kind of averaging should I use when measuring the difference between two references with an instrumentation amplifier? I should be able to evaluate different averaging options (weighted averages, svg, etc.) without "building" each.

Maybe I'm smoking something, but it seems to me that my simulation should, in the long run, pretty accurately tell me, for example, how many bits I can usefully get if the averaged references are used as the reference in a 24-bit A/D. I know it's less than 24 bits (these are not LTZ1000s), but how many? If the temperature range is restricted, what then?

Perhaps the questions I'm trying to answer are just obvious to you folks? Don't know.

But, hey, thanks for taking the time to read the post!

Note about long term drift. I decided that temperature is my main concern so I'm going to model that only. But from the datasheet it looks like long term drift can be modeled using the Arrhenius equation:
https://en.wikipedia.org/wiki/Arrhenius_equation  This isn't useful for a *particular* reference but you can make sure your software can handle future ranges of long term drift by simply simulating for a 'long' time and using thousands or millions of virtual references (each different!). So, yeah, it can be useful. It seems that my useful is not your useful! I would like to know that my software can handle a situation *before* it arises rather than testing in reality for 1000 hours of actual time. That's reasonable, right?
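What I have in mind for the drift model is roughly the sketch below: each virtual reference draws its own random drift rate, the drift goes as sqrt(time), and an Arrhenius factor scales it with temperature. The activation energy and the +/-20 ppm/sqrt(kHr) spread are placeholders I made up, not datasheet numbers:

import java.util.Random;

// Placeholder drift model: random per-device drift rate, sqrt(time) shape,
// Arrhenius temperature acceleration. All numbers below are made up.
class DriftSketch {
    static final double BOLTZMANN_EV = 8.617e-5;  // eV/K

    // Acceleration factor relative to a reference junction temperature.
    static double arrheniusFactor(double tempC, double refTempC, double activationEnergyEv) {
        double t = tempC + 273.15, tRef = refTempC + 273.15;
        return Math.exp((activationEnergyEv / BOLTZMANN_EV) * (1.0 / tRef - 1.0 / t));
    }

    public static void main(String[] args) {
        Random rng = new Random();
        double ratePpmPerRootKhr = 20.0 * (2.0 * rng.nextDouble() - 1.0); // each device gets its own rate
        double hours = 1000.0;
        double af = arrheniusFactor(55.0, 25.0, 0.7);                     // 0.7 eV is a guess
        double driftPpm = ratePpmPerRootKhr * Math.sqrt(hours / 1000.0) * af;
        System.out.printf("drift after %.0f h at 55 C: %.2f ppm (acceleration factor %.2f)%n",
                hours, driftPpm, af);
    }
}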

I attached a pic showing the simulation of averaged noise. I did this since posting the first time a few days ago.
 

Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
TiN, Thanks for the kind offer. Actual data would be cool in making sure that my models match reality. We all know that "typical" values in a datasheet are not to be relied on.
 

Offline MisterDiodes

  • Frequent Contributor
  • **
  • Posts: 457
  • Country: us
RandallM,

OK, you have a goal in mind of what simulation can and can't do, and that's a start - but you'll still run into Theory vs. Reality issues once you get into the low PPM area - so heads up.

If I understand what you're doing: you're trying to maybe simulate how your software will act when it processes some noise and drift signal that "looks like" it came from a precision Vref? 

And you are trying to simulate a typical noise signal from a particular Vref??  That will still depend a lot on the real hardware, and on how the Vref chip reacts to the rest of the parts on the board.  TiN's generous offer of data might be a starting point, but not the complete answer.

A fer'-instance gotcha is "tempco".  What is that? Is it a fast temp change... did it suddenly get hot fast and cool down slowly? A slow change up and down?  Did it get cold and then warm up?  So it's not just tempco... it's how the temperature changed, and the history of changes, that gives you the real-world noise + drift signal of a precision Vref.

Your measuring software depends on many hardware-related factors - and on how long you need to hold a repeatable measurement: drift rates over 1 minute, 1 hr, 10 hr, 24 hrs, 30 days, 1000 hrs, 1 yr, etc.

You -can- average out that 1/f noise over multiple uncorrelated systems.  Sorta. Maybe.  Within practical limits.  Once you build some real systems you'll realize many more factors creep in that are hard / impossible to predict ahead of time.  After you get to about 4 parallel Vrefs, you start to see the other problems creep in, like the noise from your averaging technique, etc.  You get into Theory vs. Reality problems again, very quickly.

The point is - there are lots of factors that go into the accuracy you need for an absolute measurement system (to whatever low ppm you need); it's probably nearly impossible to predict / simulate with any high confidence what's going to happen without building and testing Vref hardware for -your- application.

But you can always try - it might be a starting point.  Once you get into it you'll see whether a simulation really helps the end game and saves time over just building and testing hardware.  Low PPM is hard, and next to impossible to simulate, even if you're only trying to predict what the real noise & drift might look like.  Different hardware will have different rates and responses to temp changes etc.

Now if you're talking about a RATIOMETRIC measurement, where you have something like a weigh scale or pressure sensor bridge connected to a high-resolution ADC, and the same Vref is the excitation source for your sensor - that makes the system more forgiving in terms of Vref tempco and drift rates at high bit counts, and you're more concerned with short-term noise reduction. That task has already been tackled for you on these chips.  Those high-resolution ADCs (fast 24- and 32-bit ADCs) are primarily aimed at that ratiometric input signal market, and you'll see that much of the sinc and averaging filtering is already done for you on the chip.  All you need to do is read the data out - and that's a lot easier at high bit depths.

So in this "Ratiometric" application, a simulation might just be a reinvention of the wheel - and probably isn't needed.



« Last Edit: May 03, 2017, 04:56:53 am by MisterDiodes »
 

Offline chris_11

  • Contributor
  • Posts: 48
  • Country: de
@misterDiodes

What you can simulate is noise behaviour; however, extracting decent models is not easy. You can, to a certain degree, simulate tempco, given decent characterisation of the devices, package, etc. You cannot simulate drift.
The real device is the best "simulator" since it runs in real time. Even if you had all starting parameters and perfect models, it would still be useless for drift, because the simulation run would take much longer than the real circuit. Typical simulations take seconds to compute microseconds, so you are easily a thousand to a million times slower than the real world.
 
The following users thanked this post: MisterDiodes

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21720
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Maybe I'm smoking something, but it seems to me that my simulation should, in the long run, pretty accurately tell me, for example, how many bits I can usefully get if the averaged references are used as the reference in a 24-bit A/D. I know it's less than 24 bits (these are not LTZ1000s), but how many? If the temperature range is restricted, what then?

Perhaps the questions I'm trying to answer are just obvious to you folks? Don't know.

Yes-- it is.  There are theoretical bases for all of this, which shortcut a lot of prodding and guessing :)

The central part is the Fourier Transform: take a signal as a function of time, and convert it to a function of frequency.  Now it's upside-down: the average-for-all-time component is at frequency 0 (DC), while the variations over short periods of time lie at high frequencies.

The most useful part of the FT is, per-element multiplication in the frequency domain is equivalent to convolution in the time domain.

You should be familiar with convolution, because it's responsible for many useful things in software: numerical multiplication (if you've ever written out a multiply algorithm), DSP (FIR filters), lots of simulated dynamics (game engine physics?), etc.  In any case, it's taking two arrays, and combining them into a new 2*N sized array, where one array is read in reverse order and slid past the other, taking the sum of products of the two arrays.  The "slide" position is the output index, and arrays of length N have 2*N overlapping positions, so you get that many output points (though in practice, it's usually arranged so that half of those are zeroes, so you can truncate it).

For numerical multiplication, this gives maximum 2*N digits in the output, from N length input numbers, so there you go.
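In code, plain (non-FFT) convolution is nothing more than a couple of nested loops - a quick sketch:

// Direct convolution: output length is a.length + b.length - 1 (2N-1 for equal-length inputs).
static double[] convolve(double[] a, double[] b) {
    double[] out = new double[a.length + b.length - 1];
    for (int i = 0; i < a.length; i++)
        for (int j = 0; j < b.length; j++)
            out[i + j] += a[i] * b[j];   // sum of products at each slide position
    return out;
}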

But in the Fourier domain, the sliding has already been done, in a sense: one definition of the FT is convolving the input with all possible sine and cosine waves.  This selects the components of the input which are most similar to those sine waves.  In other words, the frequency components, the spectrum.  If you have the FT of a frequency response (a filter), you simply multiply that (per-element multiplication, no sliding or whatever) by your transformed input, and you've filtered your signal.  Of course, to get a time-domain signal back, you need to inverse transform it.  (A nice property of the Fourier transform is that it is essentially self-inverse, so you don't need to duplicate code: just take FT(filter*FT(signal)).)

So having introduced that: what's the FT of an averaging function?  It's a lowpass filter!

There are several kinds of simple average: arithmetic average, weighted average (given some arbitrary array of weights), geometric average, sliding average, etc.

Note that a sliding average is a special case of a weighted average, where the weights are all 1 in a certain span, and zero elsewhere (i.e., we ignore any other samples).  Likewise, the simple arithmetic average is equally weighted for all samples, so we don't need to make that special.

We do discard the geometric average, however: it is nonlinear.  FT only works when the functions obey linearity: FT(a*X+Y) = a*FT(X) + FT(Y) (for vectors X and Y and scalar a), and associativity and all that.  The geometric average takes the Nth root of the result, so it cannot, in general, share this property.  (It will be very close when X and Y are nearly constant; but then, why not save the computation and use the arithmetic average?).

So what's the FT of a general weighted average?  Well, the weights are just a vector, and we can FT that.  Indeed, if we do FT(FT(weights) * FT(signal)), we get the filtered signal just as if we did convolution(signal, weights) -- which remember, is the definition of the weighted average (minus the sliding part).

A weighted average is just an FIR filter (finite impulse response: when the input signal is a single point surrounded by zeroes -- an impulse, the output is simply the series of weights, and when that vector is finite, as it is in real DSP systems, the response is also finite).
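Concretely, a weighted (sliding) average in code is exactly an FIR filter - rough sketch:

// FIR filter / sliding weighted average: out[n] = sum over k of weights[k] * input[n-k].
static double[] firFilter(double[] input, double[] weights) {
    double[] out = new double[input.length];
    for (int n = 0; n < input.length; n++) {
        double acc = 0.0;
        for (int k = 0; k < weights.length && k <= n; k++)
            acc += weights[k] * input[n - k];
        out[n] = acc;
    }
    return out;
}
// An N-point sliding average is just weights[] filled with 1.0/N.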

There's also the IIR filter, which works more like a capacitor charging and discharging.  Add up successive input terms in an accumulator, "leak" some away by removing a percentage (10% lost each step == multiplying by 0.9, is roughly equivalent to taking the average over 1/(0.1) = 10 samples), then adjust the output gain by that effective number (divide by 10 --> output).  It's not a strict sliding average, because the weights decrease exponentially over time, never quite reaching zero.
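In code, that leaky accumulator is one line per sample (alpha = 0.1 matches the 10%-leak example above; the output scaling is folded into alpha):

// One-pole IIR ("leaky integrator"), roughly equivalent to averaging the last 1/alpha samples.
static double[] iirAverage(double[] input, double alpha) {
    double[] out = new double[input.length];
    double state = input.length > 0 ? input[0] : 0.0;      // seed with the first sample
    for (int i = 0; i < input.length; i++) {
        state = (1.0 - alpha) * state + alpha * input[i];  // keep (1-alpha), add alpha of new sample
        out[i] = state;
    }
    return out;
}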

But what can signals teach us?  Well, EEs have been doing much, much better filters for over a century.  If we take the transform of one of those filters, we will find we can get much sharper attenuation of high frequency components.

And with FT in hand, we can synthesize them from scratch, too!

(IIR filters require more math -- you're solving for polynomial roots, just as network synthesis methods need to.  Downside: they can be unstable, so you need more bits of accuracy in the computation.)


Back to the matter of references: suppose your error goes as sqrt(t) -- a diffusion mechanism.  As t --> infty, error --> infty, so you can't hope for anything guaranteed.  That's simply flat out impossible, given that statistic.  What about probability?  What's the error bound that you need?  What if there's a 0.1% chance it's outside your min/max bounds after, say, a decade?  A spec like that gives us a finite time, and therefore a finite frequency response, we can use to reason about the system.

Suppose your noise density goes as 1/f.  The integral of 1/f, over all frequencies (we must integrate, to find the total noise -- a density is per-frequency only), is log(f).  And if we take log(0Hz), i.e., DC, we find it diverges -- so again one cannot say anything about the long term stability of the system.
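Spelled out, with K the 1/f noise coefficient and f_L the lowest frequency your observation time actually reaches:

\int_{f_L}^{f_H} \frac{K}{f}\,df = K \ln\frac{f_H}{f_L}

which grows without bound as f_L goes to 0, i.e. as the observation time goes to infinity.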

How many bits to expect, and how "good" are they?  Of course if you keep adding up samples, as in the IIR or total-average case, your accumulator grows as Lg(N) for N samples.  To double the ENOB, you need 2^ENOB samples.

Can you do better?  Is this a lower or upper bound, and do you need more to be sure?

You can do better, but not in the general case (uncorrelated noise).  If the noise comes from a known source (usually an interference source with known frequency range -- like SMPS noise), it can be low-pass filtered, or notched out.  If it comes from a known process, like ADC quantization noise, you can take some steps to average it out (dithering) or subtract it (subtractive dithering).

This is an upper bound, because systematic errors tend to dominate.  That is, how do you know your ADC is perfectly linear at this level?  If it has a small offset or gain error, that varies as a function of input value, you'd have to measure that transfer function and calibrate it out.  Same goes for any other circuitry in the way.

All the rest: signal noise, ADC noise, reference noise, we can treat as normal uncorrelated Gaussian noise, which adds vectorially (i.e., RMS), and averages out the same.

At the very least, we don't have any motivation to suspect the noise sources are anything other than Gaussian, and if they do obey different statistics, they'll probably be close enough to be fine (i.e., the central limit theorem applies), but in any case if we want to obtain any better result, we need to really dig into and discover the statistics of those sources, which may not be a useful gain in the end (say the modified statistical approach only yields 1-2dB of SNR; who cares?).

Most likely, your filtering (and therefore ENOB gain) will depend on how long a user is willing to wait.  A DMM must read in a useful time frame (under a second, say), and that limits how far down the spectrum the filter can operate.  (A gate period of 1Hz can't filter anything below 1Hz, and so on.)

Sure, you could set the gate duration obscenely high (months?), but what good is it?  If you're just wanking to the reference, why not just wire the display to a constant value? ;D

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7401
  • Country: nl
  • Current job: ATEX product design
You want to simulate what? It does not work that way.
You take the tempco, multiply it by the operating temperature range, and pray to baby Jesus that the box method they used is relevant for your application.
You take the load regulation, multiply it by your load-current change.
You take the aging, multiply it by the time to the next calibration, minding the square root of time.
You take the noise, multiply it by your aperture time or integration time or whatever is relevant.
You take every other parameter that is in the datasheet and multiply it by something.

And then you add all these together and compare the result with your specification. Not root-sum-square adding, just adding - you are calculating worst-case scenarios. Usually the datasheet values will tell you that you need a much better reference. There is no simulation for this. Try simulating the random number generator in your computer, see if that works.
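To put entirely made-up numbers on it: 3 ppm/°C tempco over a 10 °C range = 30 ppm; 20 ppm/√kHr aging over a one-year (~8.8 kHr) calibration interval = 20 x √8.8 ≈ 59 ppm; 2 ppm p-p noise. Straight sum: about 91 ppm worst case. Compare that against, say, a 50 ppm target and you already know you need a better reference - no simulation required. (None of those figures come from a real datasheet.)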
 

Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
So, yes, I did come up with some simple simulation software. It certainly tells me stuff that I did not know previously.

I open-sourced it here:
https://github.com/RMcRee/ModelVoltageReference

This simulates four voltage references, labeled v1, v2, r1 and r2. (The idea is that v1 and v2 are averaged to make a primary reference, and r1 and r2 are there for check purposes.) Now, if we have a circuit that measures *only* the differences, e.g. v1-v2, v1-r1, v1-r2, v2-r1, v2-r2 and r1-r2, can we then impute the actual voltages of v1, v2, r1, r2?  I found a thread from 2013 started by branadic that suggests the answer is NO.
 https://www.eevblog.com/forum/projects/thought-experiment-self-controlled-voltage-reference/

My simulation says the answer is YES--with caveats.

I solve the system of equations using least squares via QR decomposition. And it turns out that least squares has been used for this purpose before. Here is a 1974 NIST paper discussing measuring four gage block differences and yielding the same set of equations: http://emtoolbox.nist.gov/Publications/NBSIR74-587.asp
It's a dry read, if you don't care about least squares and such...sorry! I think it shows that the overall idea is well-founded.
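Stripped way down, the fit looks like the sketch below. This is not the code in the repo - it uses the normal equations instead of QR just to stay self-contained, and the four "true" values are invented. The last row is a "linear restraint" in the spirit of the NIST paper (here, the sum of the initially measured values); that restraint is the main caveat:

// Stripped-down sketch, not the repo code: six pairwise differences plus one restraint
// row, solved via the normal equations (a QR solve would do the same job).
public class DifferenceFit {
    public static void main(String[] args) {
        double[] truth = {5.00055, 4.99987, 5.00012, 4.99968};   // invented "true" v1, v2, r1, r2
        int[][] pairs = {{0,1},{0,2},{0,3},{1,2},{1,3},{2,3}};   // v1-v2, v1-r1, ... r1-r2

        double[][] a = new double[7][4];                         // 7 equations, 4 unknowns
        double[] b = new double[7];
        for (int k = 0; k < 6; k++) {
            a[k][pairs[k][0]] = 1;
            a[k][pairs[k][1]] = -1;
            b[k] = truth[pairs[k][0]] - truth[pairs[k][1]];      // the measured differences
        }
        for (int j = 0; j < 4; j++) a[6][j] = 1;                 // restraint: sum of all four,
        b[6] = truth[0] + truth[1] + truth[2] + truth[3];        // taken from the initial calibration

        // Normal equations (A^T A) x = A^T b, then Gaussian elimination on the 4x4 system.
        double[][] m = new double[4][5];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 7; k++) {
                for (int j = 0; j < 4; j++) m[i][j] += a[k][i] * a[k][j];
                m[i][4] += a[k][i] * b[k];
            }
        for (int p = 0; p < 4; p++)
            for (int i = p + 1; i < 4; i++) {
                double f = m[i][p] / m[p][p];
                for (int j = p; j <= 4; j++) m[i][j] -= f * m[p][j];
            }
        double[] x = new double[4];
        for (int i = 3; i >= 0; i--) {
            double s = m[i][4];
            for (int j = i + 1; j < 4; j++) s -= m[i][j] * x[j];
            x[i] = s / m[i][i];
        }
        for (int i = 0; i < 4; i++)
            System.out.printf("ref %d: %.6f  (true %.6f)%n", i, x[i], truth[i]);
    }
}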

Sorry about a bit of cross-posting. I thought it was important to acknowledge branadic's original idea.

 

Offline try

  • Regular Contributor
  • *
  • Posts: 112
  • Country: de
  • Metrology from waste
So, yes, I did come up with some simple simulation software. It certainly tells me stuff that I did not know previously.

I open-sourced it here:
https://github.com/RMcRee/ModelVoltageReference

This simulates four voltage references, labeled v1, v2, r1 and r2. (The idea is that v1 and v2 are averaged to make a primary reference, and r1 and r2 are there for check purposes.) Now, if we have a circuit that measures *only* the differences, e.g. v1-v2, v1-r1, v1-r2, v2-r1, v2-r2 and r1-r2, can we then impute the actual voltages of v1, v2, r1, r2?  I found a thread from 2013 started by branadic that suggests the answer is NO.
 https://www.eevblog.com/forum/projects/thought-experiment-self-controlled-voltage-reference/

My simulation says the answer is YES--with caveats.

No. Take any least-squares approach and any constant c, and set
v1'=v1+c
v2'=v2+c
r1'=r1+c
r2'=r2+c

The primed family of values will show exactly the same quality of fit for the set of references.

If the quality of fit stays the same regardless of the constant c selected, that proves you cannot attribute absolute values to your references.
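You can see this directly: add any constant c to all four values and the six differences do not change (to within floating-point rounding), so no fit on the differences alone can recover c. A tiny check with invented values:

// The differences are blind to a common offset c: (v_i + c) - (v_j + c) == v_i - v_j.
public class OffsetBlindness {
    public static void main(String[] args) {
        double[] v = {5.00055, 4.99987, 5.00012, 4.99968};   // invented values
        double c = 0.123;                                     // any constant shift
        for (int i = 0; i < 4; i++)
            for (int j = i + 1; j < 4; j++) {
                double d1 = v[i] - v[j];
                double d2 = (v[i] + c) - (v[j] + c);
                System.out.printf("d(%d,%d): %.9f vs %.9f%n", i, j, d1, d2);
            }
    }
}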

There is a reason for the following sentence in your link to NIST:

[... and uses the values assigned to the standards as the linear restraint to bring the system up to full rank..]





« Last Edit: May 09, 2017, 11:01:01 am by try »
 

Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
I believe you are correct, but also wrong.
Yes, it is correct that differential measurements cannot determine the reference values. It seems to me (and I think others, but I could be wrong) that the LSQ solution can be used to *track* reference changes.  For the LSQ solution we need to start off with the initial reference values in some form and they obviously come from somewhere--they are initially measured.

In the gage block paper what is the utility of measuring the gage block differences if nothing "real" is determined?

Thanks,
Randy

 

Offline try

  • Regular Contributor
  • *
  • Posts: 112
  • Country: de
  • Metrology from waste
In the gage block paper what is the utility of measuring the gage block differences if nothing "real" is determined?

That is not true.
As they say:
[... and uses the values assigned to the standards as the linear restraint to bring the system up to full rank..]

They apply their calculations to the measured initial values. That is what you would consider "real".

What is "gage" by the way? English is not my native language.
 

Online RandallMcReeTopic starter

  • Frequent Contributor
  • **
  • Posts: 541
  • Country: us
A gage block ("gauge block" in British spelling) is a block of metal machined to a high standard of accuracy on one or more faces. For example, a 1-2-3 block is a rectangular block of specified parallelism with sides of 1, 2 and 3 inches.

https://en.wikipedia.org/wiki/Gauge_block

 
The following users thanked this post: try

