Curve fitting question
AlcidePiR2:
@Jester: You need to do at least one additional run of the full curve. Even better, a few.

This will tell you the noise level in your experiment.
Berni:
Curve fitting on noisy data will also try to reproduce the noise as faithfully as it can. It has no way of knowing what is noise and what is signal. In practice, by limiting the number of terms in the polynomial you smooth things out somewhat, because you limit its ability to reproduce fine detail.

If you want the same smoothing effect with a lookup table, you just run the whole thing through a filter that smooths out the points. It is basically like blurring an image, but in 1D instead of 2D. The end result is the same, just a different process to get the smoothing effect.
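For example, something like this (the table values are made up, just to show the idea):

--- Code: ---
# Minimal sketch: smooth a 1-D calibration lookup table with a 3-point
# moving average, the 1-D analogue of blurring an image.
# The table values here are invented for illustration.
import numpy as np

raw_lut = np.array([0.0, 9.8, 20.3, 29.7, 40.4, 49.9, 60.6, 69.8, 80.2, 90.1])

kernel = np.ones(3) / 3.0                 # 3-point moving-average filter
padded = np.pad(raw_lut, 1, mode='edge')  # repeat the endpoints so the edges aren't pulled toward zero
smooth_lut = np.convolve(padded, kernel, mode='valid')
--- End code ---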

I'm not saying polynomial curve fitting is bad, just that it's much better suited to very smooth graphs. If the graph only has a 2nd-order curve to it over its whole span and you have 100 points, then yes, a polynomial curve fit is an excellent way to clean it up: a low-order polynomial averages out the noise because it is unable to take sharp turns, and it is computationally nice for lookup because it is short. But the graph that the OP is showing is not a smooth graph at all, and I assume he wants his curve fit to look similar to the interpolated curve that Excel draws (Excel likely did it with cubic interpolation or similar for efficiency reasons).
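To make the contrast concrete, here is a small sketch on made-up noisy data (not the OP's measurements): the low-order polynomial averages the noise out, while a cubic spline (roughly what Excel's smooth line does) passes through every noisy point.

--- Code: ---
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x = np.arange(0, 101, 10, dtype=float)                    # invented test points
y = 0.002 * x**2 + 0.5 * x + rng.normal(0, 0.5, x.size)   # smooth curve plus noise

coeffs = np.polyfit(x, y, 2)    # low-order fit: cannot take sharp turns, so it averages the noise
spline = CubicSpline(x, y)      # interpolation: hits every noisy point exactly

x_fine = np.linspace(0, 100, 501)
y_poly = np.polyval(coeffs, x_fine)
y_spline = spline(x_fine)
--- End code ---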

But if we are talking about the actual data, the graph in the original post does look like it should have more points in it. Some points have significant jumps, so you probably want some extra points in there. As for noise, it's the OP's job to make sure his test setup produces a graph that is sufficiently accurate to begin with. No amount of math will magically remove noise, not even polynomials; you can only mask it a bit by smoothing. The proper way to 'remove noise' is to take many passes at the measurement and then average the results. That is provided your DUT doesn't drift between the test runs, but if it does drift then correcting it using calibration data won't work anyway.
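The mechanics of that averaging look roughly like this (hypothetical numbers, not the OP's data):

--- Code: ---
import numpy as np

rng = np.random.default_rng(1)
true_curve = np.linspace(0, 250, 26)             # hypothetical true values at 26 test points
runs = true_curve + rng.normal(0, 0.3, (5, 26))  # five noisy sweeps over the same points

averaged = runs.mean(axis=0)                     # noise drops roughly as 1/sqrt(number of runs)
per_point_spread = runs.std(axis=0)              # a large spread hints at drift between runs
--- End code ---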
Jester:
This is such a great forum from the sharing of expertise perspective. Big thanks to all of you.

Noise was certainly a factor, as well as the relative error. I broke the curve up into a few segments and then remeasured the outliers, and found that on average they were actually closer to the trend. The function finder in ZunZun is great in that it lets you see the error for a multitude of solutions; polynomial solutions were usually not great.
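Roughly what I mean by segments, as a sketch with invented breakpoints and data rather than my real measurements:

--- Code: ---
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(0, 251, 10, dtype=float)                        # invented test voltages
y = np.where(x < 120, 0.98 * x, 0.95 * x + 3.6) + rng.normal(0, 0.2, x.size)

segments = [(0.0, 120.0), (120.0, 250.0)]                     # hypothetical breakpoints
fits = [(lo, hi, np.polyfit(x[(x >= lo) & (x <= hi)],
                            y[(x >= lo) & (x <= hi)], 2))
        for lo, hi in segments]

def correct(reading):
    # evaluate the low-order fit for whichever segment the reading falls in
    for lo, hi, c in fits:
        if lo <= reading <= hi:
            return np.polyval(c, reading)
    return np.polyval(fits[-1][2], reading)
--- End code ---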

Fortunately I'm using a 32-bit uC, so some number crunching is not much of an impediment. I'm getting great corrected results now for 0-250 V AC and DC, and will be moving on to current measurement later today. I'm using a LEM DCCT, and I'm anticipating a fair bit of drift; hopefully I can correct based on the temperature in the box.
 
mrflibble:
I did some linear regression for the error term, and the results look suspiciously like an ADC non-linearity error function, complete with excursions within 1 LSB and some semi-random walking of the residue. That residue will contain a good bit of noise, but I would not be at all surprised if there's some zig-zag-zag-ziggy-zaggy pattern in there at multiples of 16 or 32 or something similar.
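The mechanics look roughly like this; the data here is synthetic, and the period-32 zig-zag is purely an assumption for the demo:

--- Code: ---
import numpy as np

rng = np.random.default_rng(3)
code = np.arange(0, 256, dtype=float)                     # hypothetical ADC codes
inl = 0.003 * np.sign(np.sin(2 * np.pi * code / 32))      # synthetic zig-zag non-linearity, period 32 codes
reading = 0.01 * code + inl + rng.normal(0, 0.001, code.size)

slope, offset = np.polyfit(code, reading, 1)              # linear regression for the error term
residue = reading - (slope * code + offset)

# quick check for periodicity: a peak in the residue spectrum near 256/32 = 8 cycles
spectrum = np.abs(np.fft.rfft(residue - residue.mean()))
--- End code ---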

Below is the result when using the data points from 10 to 200, so excluding that big jump at 210. Not so much for numerical reasons, but more to make it a bit easier to try to spot any potential periodicity in the residue.


And here the full range is used, so including the fairly big jump at 210. The curve fit is still fairly similar to the previous one thanks to the use of l1-minimization, which is far less sensitive to outliers than ye olde l2-norm least squares.
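A minimal sketch of the difference, using made-up data with one deliberate outlier and a straight-line fit for simplicity:

--- Code: ---
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = np.arange(10, 211, 10, dtype=float)
y = 0.01 * x + rng.normal(0, 0.02, x.size)
y[-1] += 1.0                                      # one big outlier, like the jump at 210

def l1_cost(c):
    return np.sum(np.abs(y - (c[0] * x + c[1])))  # sum of absolute residuals

l1_fit = minimize(l1_cost, x0=[0.0, 0.0], method='Nelder-Mead').x  # l1: barely moved by the outlier
l2_fit = np.polyfit(x, y, 1)                                       # l2 least squares: dragged toward the outlier
--- End code ---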
mrflibble:

--- Quote from: Jester on March 09, 2019, 02:56:07 pm ---Noise was certainly a factor, as well as the relative error. I broke the curve up into a few segments and then remeasured the outliers, and found that on average they were actually closer to the trend. The function finder in ZunZun is great in that it lets you see the error for a multitude of solutions; polynomial solutions were usually not great.

--- End quote ---
If you are serious about spending more time and effort on making a predictor for the error, then you might want to consider getting more data. Those few samples are okay for doing some quick checks, but IMO not nearly enough to get a result with decent statistical significance. Also, if you have a maximum budget of measurement points due to time constraints or whatever, you are better off generating them at random intervals, as opposed to 0, 10, 20, 30, etc. For the same number of samples, the non-uniform intervals will typically give you more information for this kind of thing. And as a bonus feature, by using random instead of fixed intervals, certain systematic errors are less likely to occur. And at no extra cost to the user. :)
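For example, with the same measurement budget either way (numbers invented for illustration):

--- Code: ---
import numpy as np

rng = np.random.default_rng(5)
budget = 26                                        # same number of measurements either way

uniform_pts = np.arange(0, 251, 10)                # the fixed 0, 10, 20, ... 250 grid
random_pts = np.sort(rng.uniform(0, 250, budget))  # same budget, random test points
# random spacing is less likely to line up with any periodic systematic error in the DUT or ADC
--- End code ---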