Curve fitting question
Jester:
I'm working on the calibration aspect of a project and would like to correct for some non-linearity by applying a correction factor.
The uncorrected data and graph can be seen here (correction only at zero and full-scale):
I tried plugging the data into an online 3rd-order polynomial regression tool and it helps, but is far from ideal. We covered curve fitting in school decades ago; I don't recall much at this stage, except that a polynomial fit is likely a poor choice. Perhaps a cubic spline or some other fit method?
Also, can you suggest an online tool that will accept preferably 10-15 data pairs?
Thanks
magic:
--- Quote from: Jester on March 06, 2019, 12:34:03 pm ---online tool
--- End quote ---
Octave/Matlab have polyfit, splinefit, etc.
A polynomial may suffice, and it has the advantage of being trivial to implement in embedded firmware, but you will need a much higher degree than 3 for all that curvy waviness.
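In firmware it boils down to a few multiply-adds per sample once the coefficients have been fitted offline (e.g. with polyfit). A minimal sketch - the degree and the coefficient values below are placeholders, not a fit of your data:

--- Code: ---
#include <stddef.h>

/* Evaluate the correction polynomial with Horner's method.
 * coeffs[] is ordered highest power first, as polyfit returns it. */
static float poly_eval(const float *coeffs, size_t n, float x)
{
    float y = 0.0f;
    for (size_t i = 0; i < n; i++)
        y = y * x + coeffs[i];
    return y;
}

/* Placeholder 5th-order coefficients - substitute your own fit. */
static const float cal_coeffs[6] = {
    1.2e-9f, -3.4e-7f, 2.1e-5f, -6.0e-4f, 1.01f, 2.0e-3f
};

float corrected_reading(float raw)
{
    return poly_eval(cal_coeffs, 6, raw);
}
--- End code ---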
Siwastaja:
A strange curve. It may be difficult to fit a function to it. Of course you can, but I would consider using a lookup table instead.
Curve fitting works great when your error function is some easy, quick-to-calculate function with only one or two parameters - first-order linear with offset and gain is the most typical example, and a second-order parabolic function is often the most complexity you want to deal with, especially in embedded. This tends to be the case when one physical error mechanism dominates all the other causes.
Lookup tables are the most flexible, if you can afford the space for storage. Piecewise linear interpolation is simple to implement if you can't fit the LUT at full resolution. The best thing is, the lookup table adapts automatically even if a particular unit behaves very differently for any reason. With curve fitting, once you have selected which function to use, you are limited to what you can achieve by changing its parameters (which you typically try to keep few) - this is harder to analyze.
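A minimal sketch of the piecewise-linear LUT idea - the table length, breakpoint spacing and contents are made-up placeholders to be filled from your own calibration data:

--- Code: ---
#include <stdint.h>

#define LUT_SIZE  17
#define LUT_STEP  256   /* raw counts between breakpoints (12-bit ADC assumed) */

/* Correction (in counts) at each breakpoint - placeholder values only. */
static const int16_t corr_lut[LUT_SIZE] = {
    0, 3, 8, 15, 20, 22, 21, 18, 14, 10, 7, 5, 4, 3, 2, 1, 0
};

/* Linear interpolation between the two nearest table entries. */
int16_t lut_correction(uint16_t raw)
{
    uint16_t idx  = raw / LUT_STEP;
    uint16_t frac = raw % LUT_STEP;

    if (idx >= LUT_SIZE - 1)          /* clamp at the top of the table */
        return corr_lut[LUT_SIZE - 1];

    int32_t a = corr_lut[idx];
    int32_t b = corr_lut[idx + 1];
    return (int16_t)(a + ((b - a) * (int32_t)frac) / LUT_STEP);
}
--- End code ---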
golden_labels:
You are calculating a relative error for 0, which is already an indication of a problem. 0 can have only three relative error values, none useful for calculations: 0, -∞ and +∞.
Using absolute error provides datapoints that reveal a nice curve with two separate, tiny subgroups on the sides. Therefore you can use a piecewise approach. The first few values, having no error, get a correction of +0. Then there is the huge blue part, which fits a logarithmic curve with R²≈0.93. Then there is the problem with the final values (red): there are not enough of them to actually calculate any curve. The two options, 2nd- and 3rd-order polynomials, give some reasonable correction, but I feel they are going to fail as soon as you collect more data in that region. The blue part gets a better fit (R²≥0.96, not shown on the chart) if the values are moved to a logarithmic scale and a higher-order polynomial (degree ≥4) is used, but I also expect this to be overfitting for that particular data set.
But all of the above makes sense only for that single, particular sample. If you need a general approach for your system, those fixed values will not work. While the middle, blue part seems like it may work with a logarithmic curve for any sample, the green and the red parts will probably not fit the models I have chosen for them if you collect samples from a different specimen or under different conditions.
Also, if you are implementing this in a microcontroller with limited resources, consider approximating the logarithmic curve with straight sections. Calculating logarithms is expensive.
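Roughly like this - all the region boundaries and fit constants below are hypothetical placeholders standing in for whatever your own absolute-error plot gives you, and on a small MCU the logf() call would be replaced by a few straight-line segments as noted above:

--- Code: ---
#include <math.h>

/* Hypothetical region boundaries (normalized input) - placeholders. */
#define X_LOG_START   0.05f   /* below this: no correction needed */
#define X_TAIL_START  0.90f   /* above this: polynomial tail      */

static const float LOG_A = 0.012f;   /* correction = A*ln(x) + B   */
static const float LOG_B = 0.031f;
static const float TAIL[3] = { -0.8f, 1.5f, -0.7f };  /* 2nd-order tail */

float piecewise_correction(float x)
{
    if (x < X_LOG_START)
        return 0.0f;                                   /* green region */
    if (x < X_TAIL_START)
        return LOG_A * logf(x) + LOG_B;                /* blue region  */
    return (TAIL[0] * x + TAIL[1]) * x + TAIL[2];      /* red region   */
}
--- End code ---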
SiliconWizard:
I suggest considering Lagrange polynomials: https://en.wikipedia.org/wiki/Lagrange_polynomial
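A bare-bones sketch of evaluating the Lagrange form directly from calibration pairs supplied by the caller; note that with 10-15 points a single high-order interpolating polynomial can oscillate badly between nodes, so in practice you would interpolate over only a few neighbouring points at a time:

--- Code: ---
#include <stddef.h>

/* Lagrange interpolation through n points (xs[i], ys[i]).
 * Keep n small, or pick the few points nearest to x, to avoid
 * oscillation between nodes. */
float lagrange(const float *xs, const float *ys, size_t n, float x)
{
    float result = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float term = ys[i];
        for (size_t j = 0; j < n; j++) {
            if (j != i)
                term *= (x - xs[j]) / (xs[i] - xs[j]);
        }
        result += term;
    }
    return result;
}
--- End code ---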