Author Topic: 4th order polynomial coefficients for pressure at temperature readings, Help!!!  (Read 10118 times)


Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5441
  • Country: us
Those are the coefficients, Nominal Animal. Exactly. Now I need to figure out how to calculate those myself from any dataset using Excel.  I tried to download Gnuplot but had some issues installing it.

As I mentioned in an earlier post, this is easily done using the Excel Solver add-in (it is included with every version of Excel, but it does not show up on the ribbon by default; you have to enable it).

With the solver add-in you can say "vary these cells to minimize the value of that cell". It becomes quite easy to do all sorts of regressions that way.

I heartily endorse this.  There are closed-form solutions to many regression problems, but the math gets difficult quickly for more complex forms.  This method is easy to apply and works for a very broad class of problems.  You do occasionally get convergence problems, which can often be solved by changing the initial values or selecting other options in the solver setup, like bounding some of the coefficients or changing the form of the model.

The key point is that, for a variety of reasons, the lowest sum of squared errors is one definition of a "best" answer.  Do be aware that other definitions of best exist and may be appropriate to specific cases: lowest maximum error, for example.
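If you would rather script it than use Solver, here is a minimal sketch of the same "vary these cells to minimize that cell" idea in Python; the data and the quadratic model are made up purely for illustration, and it shows both definitions of "best" mentioned above:

Code: [Select]
# "Vary these coefficients to minimize that objective", scripted.
# x, y and the quadratic model are purely illustrative.
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1])

def model(a, x):
    return a[0] + a[1] * x + a[2] * x**2

def sum_of_squares(a):          # the usual "best": least squares
    return np.sum((model(a, x) - y) ** 2)

def max_abs_error(a):           # an alternative "best": lowest maximum error
    return np.max(np.abs(model(a, x) - y))

fit_ls = minimize(sum_of_squares, np.zeros(3))
# The minimax objective is non-smooth, so use a simplex search,
# starting from the least squares answer.
fit_mm = minimize(max_abs_error, fit_ls.x, method="Nelder-Mead")
print(fit_ls.x, fit_mm.x)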
 

Online Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3857
  • Country: nl
Why not just keep it simple?
Measure some reference points, and then do linear interpolation between those points.
You may need a few more points (10 or so is usually plenty), but it makes the software much easier to understand, write, maintain, and modify.
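A minimal sketch of that lookup-table approach, assuming hypothetical reference temperatures and measured corrections:

Code: [Select]
# Table lookup with linear interpolation between calibration points.
# The reference points below are hypothetical.
import numpy as np

ref_temp       = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
ref_correction = np.array([0.12, 0.05, 0.00, -0.04, -0.11])

def corrected(reading, ambient_temp):
    # np.interp clamps to the end values outside the table range
    return reading + np.interp(ambient_temp, ref_temp, ref_correction)

print(corrected(101.3, 25.0))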
 
The following users thanked this post: b_force

Offline The Electrician

  • Frequent Contributor
  • **
  • Posts: 747
  • Country: us

The author of the instrument stated the following:
Enter Temperature Coefficients
This command provides for the entry of temperature coefficients that will compensate the sensor for ambient temperature conditions. The coefficients are determined during the factory calibration process.
tcomp <a0> <a1> <a2> <a3> <a4>  (THESE ARE WHAT I AM TRYING TO IDENTIFY) :)
The operator specifies five coefficients, which are used in a fourth order polynomial that corrects temperature readings for the ambient temperature at the sensor.
The currently effective temperature compensation coefficients can be viewed using the coef command, which is described in the user’s manual.
default is 0 0 1 0 0.

Does this make sense?  I feel like this is close to what you were describing.  I'm attaching another spreadsheet here, and I'm also downloading the program you mentioned.  Thank you so much.

What is the independent variable (and its powers) that the temperature compensation coefficients multiply?

If you use the "coef" command, what are the currently effective coefficients?

Edit:

I see that the answers to my questions can be found in replies #39 and #41.

Here is how I derived a least squares solution using some matrix methods:
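(The derivation itself was an attached image that does not reproduce here. For readers following along, here is a sketch of the same matrix approach in Python, assuming the model \$P(p, c) = a_0 + a_1 c + a_2 p + a_3 p c + a_4 p c^2\$ quoted elsewhere in the thread, with hypothetical calibration columns:)

Code: [Select]
# Least squares fit of P(p, c) = a0 + a1*c + a2*p + a3*p*c + a4*p*c^2
# via the design-matrix formulation. All data below is hypothetical.
import numpy as np

p     = np.array([100., 100., 100., 200., 200., 200., 300., 300., 300.])
c     = np.array([4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3])
P_ref = np.array([99.7, 100.1, 100.4, 199.5, 200.0, 200.6, 299.2, 299.9, 300.8])

# One column per basis function of the model.
A = np.column_stack([np.ones_like(p), c, p, p * c, p * c**2])

# Solve A @ a ~= P_ref in the least squares sense.
a, residual, rank, sv = np.linalg.lstsq(A, P_ref, rcond=None)
print("a0..a4 =", a)
print("condition number =", sv[0] / sv[-1])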


I solved this with an HP50G calculator and got 9 correct digits for a1, and 12 correct digits for a2.
« Last Edit: August 27, 2020, 08:21:39 am by The Electrician »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6911
  • Country: fi
    • My home page and email address
Why not just keep it simple?
Because it is the device itself that internally uses \$P(p, c) = a_0 + a_1 c + a_2 p + a_3 p c + a_4 p c^2\$, with pressure \$p\$ and "temperature count" \$c\$, to compensate the pressure reading it provides.
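In code form (a direct transcription of the formula above; note that the factory default "0 0 1 0 0" makes it the identity on \$p\$):

Code: [Select]
# The device's internal compensation, exactly as stated above.
def compensate(p, c, a):
    a0, a1, a2, a3, a4 = a
    return a0 + a1*c + a2*p + a3*p*c + a4*p*c**2

# With the factory default coefficients the reading passes through unchanged:
assert compensate(123.4, 5678, (0, 0, 1, 0, 0)) == 123.4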

I solved this with an HP50G calculator and got 9 correct digits for a1, and 12 correct digits for a2.
Except that the least squares fit (which is most commonly used) gives you a maximum error an order of magnitude larger than gradient descent optimization (which yields \$a_0 = 0.007697997673805127\$, \$a_1 = -9.893590127282349\cdot 10^{-7}\$, \$a_2 = 0.9937816855224063\$, \$a_3 = 1.353571043339379 \cdot 10^{-6}\$, and \$a_4 = -6.784307221818112\cdot 10^{-11}\$).

Why do I care?  Because if OP is doing recalibration before giving the devices to actual users, I want those users to have the best readings they can.  Least squares regression is most commonly used to fit device parameters to calibration data, but popularity does not mean it is the best method.  In this instance, I have already shown that, using the exact same calibration data, you can reduce the maximum error by a factor of 10.  Why so many members here think that is irrelevant or not worth the effort boggles my mind.
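For the curious, a sketch of minimizing the maximum error directly (a general-purpose simplex search stands in for gradient descent here, and the calibration columns are hypothetical):

Code: [Select]
# Minimize the worst-case calibration error instead of the sum of squares.
# Nelder-Mead stands in for gradient descent; the data is hypothetical.
import numpy as np
from scipy.optimize import minimize

p     = np.array([100., 100., 100., 200., 200., 200., 300., 300., 300.])
c     = np.array([4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3])
P_ref = np.array([99.7, 100.1, 100.4, 199.5, 200.0, 200.6, 299.2, 299.9, 300.8])

def model(a):
    return a[0] + a[1]*c + a[2]*p + a[3]*p*c + a[4]*p*c**2

def max_error(a):
    return np.max(np.abs(model(a) - P_ref))

# Warm-start from the least squares solution, then polish the minimax objective.
A = np.column_stack([np.ones_like(p), c, p, p*c, p*c**2])
a_ls = np.linalg.lstsq(A, P_ref, rcond=None)[0]
res = minimize(max_error, a_ls, method="Nelder-Mead",
               options={"xatol": 1e-15, "fatol": 1e-15, "maxiter": 200000})
print("coefficients:", res.x)
print("max error:", res.fun)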
« Last Edit: August 27, 2020, 11:57:52 am by Nominal Animal »
 

Offline The Electrician

  • Frequent Contributor
  • **
  • Posts: 747
  • Country: us
I solved this with an HP50G calculator and got 9 correct digits for a1, and 12 correct digits for a2.
Except that the least squares fit (which is most commonly used) gives you a maximum error an order of magnitude larger than gradient descent optimization (which yields \$a_0 = 0.007697997673805127\$, \$a_1 = -9.893590127282349\cdot 10^{-7}\$, \$a_2 = 0.9937816855224063\$, \$a_3 = 1.353571043339379 \cdot 10^{-6}\$, and \$a_4 = -6.784307221818112\cdot 10^{-11}\$).

Using these coefficients I get the same result you reported in reply #49:

"No need to believe me, though.  Just use the curve fitting package you think does better or even just as well, and see if it discovers the minimum I reported a couple of messages back, that minimizes the maximum error among the calibration samples to 0.002421 pressure units.  In comparison, least squares fitting only reaches a minimum error a magnitude larger (around 0.02, depending on details)."

But you pay for the smaller maximum (absolute) error with a larger residual for the GDO method: 0.009256, compared with 0.007665 for least squares.

I don't find the maximum absolute error for a least squares solution to be "around 0.02"; rather, I get 0.005115, which is only about twice the GDO error, not 10 times.  This problem has a condition number of about 2.646E9, which is why I used rational arithmetic to solve it rather than floating point; otherwise the errors in the results may be larger than expected at first glance.
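To see the conditioning problem for yourself, a small sketch (hypothetical data; sympy's exact rationals stand in for the calculator's rational arithmetic):

Code: [Select]
# Check the conditioning of the fit, then solve the normal equations
# in exact rational arithmetic. All data below is hypothetical.
import numpy as np
import sympy as sp

p     = np.array([100., 100., 100., 200., 200., 200., 300., 300., 300.])
c     = np.array([4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3])
P_ref = np.array([99.7, 100.1, 100.4, 199.5, 200.0, 200.6, 299.2, 299.9, 300.8])

A = np.column_stack([np.ones_like(p), c, p, p*c, p*c**2])
# Each power of ten here eats a digit of double precision, and forming
# the normal equations below squares the condition number.
print("cond(A) =", np.linalg.cond(A))

# Exact solution of the normal equations A^T A a = A^T b:
M = sp.Matrix([[sp.Rational(v) for v in row] for row in A.tolist()])
b = sp.Matrix([sp.Rational(v) for v in P_ref.tolist()])
a_exact = (M.T * M).solve(M.T * b)
print([float(x) for x in a_exact])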



There is often a choice to be made as to the "best" method: minimize the maximum absolute error, or minimize the residual.
« Last Edit: August 27, 2020, 12:58:49 pm by The Electrician »
 
The following users thanked this post: Nominal Animal

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6911
  • Country: fi
    • My home page and email address
I don't find the maximum absolute error for a least squares solution to be "around 0.02"; rather, I get 0.005115, which is only about twice the GDO error, not 10 times.  This problem has a condition number of about 2.646E9, which is why I used rational arithmetic to solve it rather than floating point; otherwise the errors in the results may be larger than expected at first glance.
Right; I apologize.  I'm on a different machine (about 1Mm away from my normal machine), and remembered the difference wrong.  In any case, halving the error is in my opinion still significant enough to consider.

Of course, it all depends on the calibration method.  For the GDO to make sense, you need 40-50 calibration samples at minimum, preferably more; I'd prefer around a hundred, with more unique temperatures.  This is why I suggested research into continuous calibration, i.e. measuring a test environment whose pressure and temperature can be varied, but not controlled, with at least two other measurement units, and doing GDO dynamically during measurement.  (There is no need for the temperature counts and actual pressures to form a regular grid; the only critical thing is the sample density in the parameter space.  I'm thinking of a small pressure vessel in a thermal bath, with more than one set of sensors, so that both pressure and temperature can be varied (though not precisely controlled), and measured very precisely.)

This is often a choice to be made as to the "best" method; minimize absolute error, or residual.
Yes, exactly: choosing what to minimize.  Least squares fitting is reliable and easy to document; gradient descent optimization to minimize error at calibration points is harder, but potentially more useful for end users.  I mean, if you can calibrate a device to higher precision than guaranteed, should you or should you not?  What is the physical accuracy of the sensors themselves, and how fast do they drift?  Would the "extra precision" lead users astray?

We do not know how repeatable the calibration samples are, nor how much physical error there is in the pressure and temperature readings.

For example, if you had error bars for each calibration sample, you could treat both the calibration measurements and the device coefficients as a statistical problem instead.  (Then you wouldn't have anything exact to fit to, but would instead treat the five coefficients as parameters and choose the most likely approximation.)
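As a sketch of that statistical treatment: if each sample had a known uncertainty \$\sigma_i\$, weighting each row by \$1/\sigma_i\$ makes the least squares solution the maximum likelihood estimate under independent Gaussian errors (data hypothetical):

Code: [Select]
# Weighted least squares: one error bar per calibration sample.
# With independent Gaussian errors this is the maximum likelihood fit.
# All data below is hypothetical.
import numpy as np

p     = np.array([100., 100., 100., 200., 200., 200., 300., 300., 300.])
c     = np.array([4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3, 4e3, 6.5e3, 9e3])
P_ref = np.array([99.7, 100.1, 100.4, 199.5, 200.0, 200.6, 299.2, 299.9, 300.8])
sigma = np.array([0.05, 0.05, 0.10, 0.05, 0.05, 0.10, 0.10, 0.10, 0.20])

A = np.column_stack([np.ones_like(p), c, p, p*c, p*c**2])
w = 1.0 / sigma
# Scaling each row by its weight turns ordinary least squares
# into the chi-squared (maximum likelihood) fit.
a, *_ = np.linalg.lstsq(A * w[:, None], P_ref * w, rcond=None)
print(a)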

I do know that calibrating PT100 and PT1000 temperature sensors can give very, very precise results, because they are inherently quite stable devices at around room temperature.  (Self-heating due to the measurement current passing through the device can be a problem, but in my experience it only tends to be an issue with small amounts of material to measure, and/or at much lower temperatures.)

As to pressure sensors, there are many different types, and we don't know which kind it is.  Most of them are quite stable (strain gauge based ones for example), so I would guess that their time-dependent variation is small, and precise calibration can actually give better readings in the long term (as opposed to just at the moment of calibration).

It is definitely an interesting problem in my opinion, and something I'd like to see some thought put into, as an end user myself.  (Not OP's device, but pressure and PT100 temperature sensors in general, in a fusor project I'm helping with, for example.)
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
Calibrating a sensor is one thing, but have you ever thought about the practical variables that influence the accuracy?
Measuring temperature very accurately in particular is extremely difficult, and that's before the sensor itself is even considered.

Move the sensor a couple of millimetres and your readings WILL be off.
Change the airflow a teeny tiny bit and your readings WILL vary.
The same goes for humidity, type of mounting, mounting torque, use of thermal paste, and type of thermal paste.
The list goes on and on.

You will easily be off by ±0.5 degrees, and that is pretty generous.
I have seen differences of over ±2 degrees in a professional mixing device that had been stabilizing for 2 hours.

When working with samples, you should use the standard deviation with the number of sigmas you need (for accurate measurements, that's usually either 3 or 4 sigma), in combination with the standard error, depending on whether you're interested in the spread or in how accurate your expected average will be.
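A small sketch of that difference, with made-up repeated readings:

Code: [Select]
# Spread of the readings vs accuracy of their average. Data is made up.
import numpy as np

readings = np.array([20.1, 20.3, 19.9, 20.2, 20.0, 20.4, 20.1, 19.8])
mean = readings.mean()
sd   = readings.std(ddof=1)           # sample standard deviation: the spread
sem  = sd / np.sqrt(readings.size)    # standard error: accuracy of the mean
print(f"mean {mean:.3f}, 3-sigma spread ±{3*sd:.3f}, "
      f"3-sigma on the mean ±{3*sem:.3f}")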

I also agree to some extent with @Doctorandus_P .
You should focus on what you would expect, not on whether your mathematical equation fits to a million decimals.
The default curve of a PT100 and the like isn't complicated at all, so it's not difficult to fit a regression to it.

You then need to calculate the rest of the system's accuracy with the relative error formula.
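(If by "the relative error formula" we mean combining independent stage errors in quadrature, a tiny sketch with hypothetical numbers:)

Code: [Select]
# One common way to combine independent relative errors: in quadrature.
# The per-stage numbers are hypothetical (sensor, ADC, reference, say).
import math

stage_rel_errors = [0.005, 0.002, 0.010]
total = math.sqrt(sum(e * e for e in stage_rel_errors))
print(f"combined relative error ~ {total:.4f}")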
« Last Edit: August 28, 2020, 02:18:49 pm by b_force »
 

