I can't seem to find good information on how to rank temperature sensor types by long-term stability.
What good is a volt-nut if he can't tell if 23.17C from three years ago is still 23.17C today?
This comment from splin seems to be the best info we have on the forum so far: https://www.eevblog.com/forum/testgear/temperature-test-box-for-component-characterization/msg617199/#msg617199
My assumption was that the long-term stability quality ordering went something like this (best to worst):
- Platinum RTD
- Thermistor in glass DO35
- Thermistor in epoxy bead
Also, practically, for this application I probably only need about 3.5 digits of long-term stable temperature comparison, so maybe that eases the requirements?
Thoughts?
You've clearly run into the same issue as I did, viz. the almost complete lack of long-term stability specifications in temperature sensor datasheets. Not that it's much worse than resistor specifications, whose stability specs are almost useless - if indeed there are any at all.
[Rant mode] (bypass unless you are particularly bored)
As you might have guessed, this is one of my pet peeves - stability is one of the most important specs for sensors (and resistors). Who wants a .001% accurate part that might drift outside that tolerance within a few weeks and become a .1% part within a year? I am heartily sick of datasheets which tout 'excellent', 'outstanding', or 'very good' long-term stability but give zero information, or even hints, as to what that means. A Murata thermistor datasheet states:
"1. Excellent solderability and high stability in the
application's environment
2. Excellent long-term stability"
It hardly matters that I can't parse that into any sensible differentiation between the two; since neither is quantified, both statements are totally meaningless. Maybe it's a translation issue or an industry convention of which I'm unaware, but I'm guessing it's just marketing BS (with some element of truth).
It's as if the industry expects its customers either to have a good understanding (through experience) of the stability characteristics of the various types of devices, or to be prepared to evaluate and characterise, themselves, any parts they may consider using. Why the hell should you have to spend countless hours and dollars doing this when the manufacturers already have the data? Obviously the big customers will get that information, and quite likely will insist on doing their own testing and qualification anyway, but that leaves an awful lot of small to medium companies wasting a great deal of money doing mostly unnecessary evaluation work.
No doubt many don't bother, and the (vast?) majority get away with it, because on the whole the devices actually are very stable provided they operate in relatively benign environments. For example, the abstract of the paper linked by mycroft cites a jellybean $0.15 NTC:
The results show that the SMD type sensor from Murata manufacturing (NCP15XH103D03RC), intriguingly, is the most stable sensor among the sensors tested with a drift rate of 0.492 mK/year peak-to-peak.
Anybody expect that outstanding result? Anyone got access to the paper to see if it was a single sample, or tested in unrealistic or highly specific operating conditions?
To be fair to the manufacturers, it is impossible to fully specify drift rates, as they are very much a function of operating conditions, including high temperatures and thermal shocks; but that doesn't excuse providing absolutely no data - whether in application notes or white papers, if not in the datasheets themselves.
[/Rant mode]
Back to the original question. Stability is not an end in itself; it mainly determines how frequently you need to recalibrate. So there are several options:
1) Have your equipment calibrated professionally. Whilst appropriate for some, this is very expensive and clearly hard to justify for enthusiasts reducing their measurement uncertainties by a relatively small amount.
2) Do your own calibration. Lars suggested using the TPW, but I think that is unnecessarily difficult and expensive. Using the ice point should be more than adequate. Take a look at this, which suggests that an ice bath can get you within 3 or 4 mK of the TPW (method 3):
http://www.burnsengineering.com/local/uploads/files/Approximating_the_TPW_Presentation.pdf
You should also take a look at this website, which has some very good information on DIY calibration using the ice point and the melting point of gallium (actually quite inexpensive):
http://www.kandrsmith.org/RJS/Misc/Thermometers/absolute_ds18b20.html and
http://www.kandrsmith.org/RJS/Misc/Hygrometers/absolutetemperature.html
Single-point calibration of NTCs raises the question of exactly how they drift - if the shape of the transfer curve drifts, a single point wouldn't be sufficient. Fortunately, it seems that NTC drift is typically thermometric - i.e. a constant temperature offset across the range. This was from a paper found on the Measurement Specialties website, but since they were taken over by TE I can't find an online copy. I have thus attached a copy I made earlier. (Which is why I always try to take copies of anything I find interesting.)
The implication of the above is that a single point calibration should be enough to restore the calibration across the whole range.
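To make that concrete, here's a minimal sketch of a single-point ice-point correction in Python. The Steinhart-Hart coefficients and the measured resistance are made-up placeholders for a generic 10k NTC, not values from any part mentioned here:
[code]
import math

# Made-up Steinhart-Hart coefficients for a generic 10k NTC, standing in
# for the original factory calibration. Placeholders only.
A, B, C = 1.125e-3, 2.347e-4, 8.566e-8

def ntc_temp_c(r_ohms):
    """Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3, with T in kelvin."""
    ln_r = math.log(r_ohms)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3) - 273.15

# One measurement in the ice bath (nominally 0 C, within a few mK per the
# Burns presentation). If the drift really is thermometric - a pure
# offset - this single point corrects the whole curve.
r_at_ice = 32650.0                      # ohms, example reading
offset = 0.0 - ntc_temp_c(r_at_ice)    # drift recovered at the ice point

def ntc_temp_corrected_c(r_ohms):
    """Drifted factory curve plus the offset found at the ice point."""
    return ntc_temp_c(r_ohms) + offset

print(f"Offset found at ice point: {offset * 1000:+.0f} mK")
print(f"Corrected reading at 10k:  {ntc_temp_corrected_c(10000.0):.3f} C")
[/code]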
3) Purchasing sensors with sufficient accuracy. This is where it gets difficult, and the drift specs become relevant, given that high-precision sensors are relatively expensive. The US Sensor PR series NTCs are attractive, costing $7 to $10 with a +/-50mK tolerance between 0 and 50C. They are claimed to be highly stable, but that isn't quantified. However, I found this:
http://www.ussensor.com/accelerated-life-testing
which states:
Thermal Shock 200 cycles - Thermistors are subject to one half hour at 80°C and, within ten seconds transition, are subject to one half hour at -40°C. This test is typically followed by the 100,000 cycle Thermal Shock Test.
Thermal Shock 100,000 cycles - This test consists of 45 seconds at 20°C and 45 seconds at -20°C. The transition between these temperatures is minimized to less than ten seconds creating a Maximum Transition State. At the completion of testing the thermistors *must remain in tolerance*.
(my emphasis)
This strongly implies that the 50mK tolerance PR series devices are expected to remain in tolerance after harsh testing, as do other associated 'quality' pages. This may actually be a well understood industry convention, in which case much of my rant is misplaced, but I haven't seen it stated explicitly anywhere - I'd love to see anything authoritative on this issue though.
As stated earlier, the YSI super-stable 46000 series glass-encapsulated thermistors do specify (low) typical drifts and are available in 50mK tolerance, but they are expensive and not widely available. [Oops] Measurement Specialties, who took over YSI, have now been acquired by TE Connectivity, who appear to have obsoleted all except the 46004, 100mK part. And lost all the relevant white papers - assuming you can even navigate their infernal website.
4) Some combination of 2) and 3). The PR series thermistors are possibly inexpensive enough to allow you to purchase one or more periodically as references to recalibrate other, perhaps low-cost parts - at least until you have sufficient history to be sure of the drift characteristics of your reference sensors (see the sketch below for one way to keep that history). Of course that doesn't work if the supplier's turnover is so low that you don't get recently manufactured parts. (Don't buy from anyone who has them in stock?)
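For what it's worth, here's a hypothetical sketch of that bookkeeping in Python (all dates and readings below are invented): periodically put the reference and the cheap sensor in the same ice bath, log the difference, and fit a line to estimate the cheap sensor's drift rate.
[code]
from datetime import date

# Periodic comparisons of a device under test (DUT) against a reference
# sensor in the same bath. All values below are invented examples.
# (date, reference_reading_C, dut_reading_C)
log = [
    (date(2015, 1, 10), 0.003, 0.051),
    (date(2015, 7, 12), 0.002, 0.063),
    (date(2016, 1, 9),  0.004, 0.079),
    (date(2016, 7, 11), 0.003, 0.089),
]

t0 = log[0][0]
years = [(d - t0).days / 365.25 for d, _, _ in log]
diffs = [dut - ref for _, ref, dut in log]   # DUT error vs reference

# Least-squares slope of error vs time = drift rate relative to the
# reference (which is of course assumed stable - the weak point).
n = len(years)
mt, md = sum(years) / n, sum(diffs) / n
slope = (sum((t - mt) * (d - md) for t, d in zip(years, diffs))
         / sum((t - mt) ** 2 for t in years))

print(f"Estimated DUT drift: {slope * 1000:.0f} mK/year vs reference")
[/code]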
As I mentioned previously, the KTY8x series of silicon temperature sensors are interesting. They need to be calibrated, but they do claim low drift. The glass-encapsulated KTY83 (SOD68/DO-34) and KTY85 (DO80/Mini MELF) parts have been obsoleted, but there are some German sellers on ebay selling 30 for $7.50 including postage. The plastic-encapsulated KTY81 etc. versions are still available but may suffer more from humidity/hysteresis. The drift is specified in this document, along with the relevant Arrhenius data:
http://www.elenota.pl/datasheet-pdf/138914/Philips/SC17?sid=8cae005bccd92bb89ead005554f44553
Once again this document seems to have been lost by NXP, robbing designers of valuable information. (I hate the loss of such data, whether inadvertently, as is probably the case here, or, much worse, willfully.)
The above document shows a maximum drift of 250mK for a KTY85 after 10,000 hours @ 125C and, via the Arrhenius equation, the same drift after 450,000 hours @ 62.5C. By my calculation (sketched in code below) that also equates to 5.7mK after 10,000 hours at 62.5C, or about 0.1mK/10,000h @ 20C. That would appear to be outstanding for long-term, low-temperature measurements, which is presumably the OP's intent. Of course there's a caveat: the Arrhenius equation is only part of the drift story. This Siemens KTY document gives a bit more insight:
http://www.b-kainka.de/Daten/Sensor/gentemp.pdf
Page 5 shows the drift characteristics at high temperature, with an unsurprisingly large change over the first 500h and relatively little thereafter, suggesting a burn-in before calibration would be a good idea.
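As a cross-check of the numbers above, here's the Arrhenius extrapolation in a few lines of Python, fitted to the two published points. It assumes drift accumulates linearly in time at a fixed temperature (my 5.7mK figure assumes the same), and lands at ~5.6mK at 62.5C and ~0.17mK at 20C - the same ballpark as quoted above:
[code]
import math

# Two points from the Philips SC17 document: 250 mK max drift after
# 10,000 h @ 125 C, and the same 250 mK after 450,000 h @ 62.5 C.
K = 273.15
t1, hours1 = 125.0 + K, 10_000.0
t2, hours2 = 62.5 + K, 450_000.0
drift_mk = 250.0

# Drift rate ~ exp(-Ea/(k*T)); the 45x time ratio fixes Ea/k.
ea_over_k = math.log(hours2 / hours1) / (1 / t2 - 1 / t1)

def drift_after(temp_c, hours):
    """Drift in mK after `hours` at temp_c (linear-in-time assumption)."""
    t = temp_c + K
    rate_ratio = math.exp(-ea_over_k * (1 / t - 1 / t1))
    return drift_mk * rate_ratio * (hours / hours1)

print(f"Ea/k = {ea_over_k:.0f} K")
print(f"10,000 h @ 62.5 C: {drift_after(62.5, 10_000):.1f} mK")  # ~5.6
print(f"10,000 h @ 20 C:   {drift_after(20.0, 10_000):.2f} mK")  # ~0.17
[/code]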
So where does this leave us? Those who deal with this stuff on a regular basis will undoubtedly have a much better handle on what's really important and how realistic datasheets are, so hopefully some will enlighten us. Clearly long-term drift of temperature sensors is a very small issue in the case of the LTZ1000 (transistor Vbe) and Fluke 734 (NTC), so drift may, in reality, be a non-problem for this application with most sensors.
As to your question of types of sensor and order of preference:
a) PT100/PT1000. Probably a good choice for higher-temperature measurements, but expensive, and it's very hard (almost impossible) to find any drift data for affordable probes (i.e. anything short of lab-grade secondary standards) used at moderate temperatures. Low sensitivity doesn't help either. I can't see any reason to choose these for extended use below 50C, and possibly even below 100C.
b) NTCs. For temperatures below 50C or so they seem to be an excellent choice - high sensitivity (10X PT100/PT1000 - see the sketch below), capable of excellent stability, and potentially very low cost. Anyone know what is used in the Fluke 734?
c) Silicon sensors. The SHT35 looks like a reasonable choice, but the initial accuracy of +/-300mK isn't good (compared to a +/-50mK NTC) and the drift, at 30mK/y in unspecified conditions, isn't that impressive. In reality it may be a star performer, but there is only so much you can conclude from a datasheet. It isn't particularly cheap either, and it needs to be mounted on a PCB, which could be a serious limitation. The KTY sensors look very good, and whilst not as sensitive as NTCs they are still twice as good as the PT100x types, reducing the performance requirements of the rest of the system. It's also possible that jellybean transistors, such as the 2N3904, may give equally stable results when measuring Vbe.
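To put rough numbers on those sensitivity claims, here's a quick comparison of fractional sensitivity |dR/dT|/R at 25C. The NTC beta of 3950K and the KTY temperature coefficient are typical values I've assumed, not figures from any datasheet cited here:
[code]
# Fractional sensitivity |dR/dT|/R at 25 C for the sensor families above.
T = 298.15                   # 25 C in kelvin

ntc_beta = 3950.0            # typical jellybean 10k NTC (assumed)
ntc_sens = ntc_beta / T**2   # beta model: (1/R)|dR/dT| = beta/T^2

pt100_alpha = 0.00385        # IEC 60751 mean alpha, 1/K

kty_tc = 0.0079              # ~0.79 %/K, typical for KTY8x at 25 C (assumed)

# For comparison, a single transistor Vbe is roughly -2.1 mV/K on ~0.6 V,
# i.e. ~0.35 %/K - about the same order as PT100.

print(f"NTC:   {ntc_sens * 100:.2f} %/K ({ntc_sens / pt100_alpha:.1f}x PT100)")
print(f"KTY8x: {kty_tc * 100:.2f} %/K ({kty_tc / pt100_alpha:.1f}x PT100)")
print(f"PT100: {pt100_alpha * 100:.2f} %/K")
[/code]
That comes out at roughly 11x for the NTC and 2x for the KTY, consistent with the figures above.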
Overall, the allure of that potential 0.492 mK/year from the Murata NCP15XH103D03RC @ $0.50 (1 off) is hard to ignore. It's also hard to ignore that very many similar NTC parts, including the really commonplace types, may well have similar drift characteristics but simply haven't been evaluated in this way.
Enough. I'm sure there are lots of grammatical and technical errors in the above - my main excuse is that I'm just recovering from an unpleasant flu-type infection, so please forgive any incoherence.
Merry Christmas all!
[EDIT] Turns out I can't attach the paper as it's too big (1.7MByte v 1.2M limit). PM me if you want a copy.