Author Topic: Why read a 24 bit sensor value then convert to 16 bits?  (Read 1886 times)


Offline DTJTopic starter

  • Frequent Contributor
  • **
  • Posts: 997
  • Country: au
Why read a 24 bit sensor value then convert to 16 bits?
« on: January 06, 2017, 07:35:03 am »
I'm playing around with a TSYS01 temperature sensor with a serial I2C interface.

According to the data sheet you:
1) Read a 24 bit value from the sensor called ADC24
2) Divide the 24 bit value by 256 to get a 16 bit value called ADC16
3) Use the ADC16 value in a polynomial calculation to get the temperature.

Practically:
1) I read 3 x 8 bit bytes and assemble them into a 24 bit value
2) I shift the value right by 8 bits to divide by 256, effectively throwing away the least significant byte.
3) I use the ADC16 variable in the temp calculation.


Why should I not just create the ADC16 value from the upper two bytes initially read and just ignore the least significant byte?

I suspect I'm misunderstanding something. Maybe in the future they'll release a calculation based on the full 24 bit value?


Attached below are the relevant parts of the data sheet.

The full data sheet is here: http://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=TSYS01&DocType=Data+Sheet&DocLang=English&DocFormat=pdf&PartCntxt=G-NICO-018




 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: Why read a 24 bit sensor value then convert to 16 bits?
« Reply #1 on: January 06, 2017, 08:09:03 am »
In theory there's nothing stopping you from using the whole 24 bits provided you scale the arithmetic correctly. You could even upset a few whiny purists and do the calculation in floating point if you like.

It might be worth working out the effect on temperature of a 1 LSB step in each case. If 1 LSB is still insignificant in the 16 bit result, then the bottom 8 bits really can be discarded.

This is a very odd part anyway, IMHO. I've never seen a temperature sensor where the host CPU is expected to perform this calculation; most report a value which converts directly into deg C with a simple linear scaling factor. TE isn't a semiconductor company, so maybe this is a part they've had designed under contract for some special purpose?

Its accuracy spec is very good, though, at least within a limited range. Maybe that's why they've had it designed? Do you need the accuracy, or might you be better off with a simpler-to-use part from, say, TI or Microchip?

Offline DTJTopic starter

  • Frequent Contributor
  • **
  • Posts: 997
  • Country: au
Re: Why read a 24 bit sensor value then convert to 16 bits?
« Reply #2 on: January 06, 2017, 08:28:58 am »
Good ideas there Andy.

It seems that the least significant bit of the ADC16 value equates to about 0.009°C over the range -10°C to 50°C.
I'm willing to throw that away.

I'll go with just using the top two bytes read from the sensor.


Re the sensors - yes, they are quite accurate over the range I'm interested in. They seem to have released some other models with less accuracy; I guess, like anything, they grade them and the ones that don't make the cut are sold off as lower-spec parts. I couldn't find a better sensor (price / accuracy) when I looked.
 

Online hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Why read a 24 bit sensor value then convert to 16 bits?
« Reply #3 on: January 06, 2017, 09:22:00 am »
Yep, the code basically converts 24-bit to 16-bit values. You could just as well read two bytes and discard the rest; constructing a 24-bit integer first and shifting right by 8 bits is the same thing.

But with 24-bit values you could do some averaging first and then convert to 16-bit, reducing noise on the last few bits of the 16-bit integer.

I doubt full 24-bit calculations will be of much use, since the polynomial coefficients are specified for the 16-bit value. You could maybe use the extra resolution, though, if it isn't already swamped by noise.
 

Offline Andreas

  • Super Contributor
  • ***
  • Posts: 3246
  • Country: de
Re: Why read a 24 bit sensor value then convert to 16 bits?
« Reply #4 on: January 06, 2017, 11:45:42 am »
Quote from: hans
Yep, the code basically converts 24-bit to 16-bit values.

Hello,

I interpret the datasheet conversion as a floating-point formula, not an integer one.
The division by 256 is only a normalisation, so that the same coefficients
can be used for both the 24 bit and the 16 bit value.

With best regards

Andreas
 
The following users thanked this post: DTJ

