Author Topic: ADC/DAC Conversion to "decimal", best practices....  (Read 14528 times)

0 Members and 1 Guest are viewing this topic.

Offline ChristopherTopic starter

  • Frequent Contributor
  • **
  • Posts: 429
  • Country: gb
ADC/DAC Conversion to "decimal", best practices....
« on: June 08, 2016, 06:20:50 pm »
So I'm doing a project with the use of some high-precision DACs and ADCs and am after some advice on how to do the conversion!

Just say I have a 12-bit ADC. I want to convert the output to a string using some kind of itoa() function (included in my C compiler)

I do not want to use floating point, for obvious reasons.

So, I have 12 bits, which represent 0-10 V, so each step is worth 10/2^12 = 2.4414 mV

My ADC result is 0-4095. I need to convert this to be 0-10V as a string.

Because the numbers aren't very easily divisible by 10 without floats (the hardware cannot use an easier reference), I'm struggling a bit. We were never taught this kind of thing at college either....

Also, the DAC part should be the same, just in reverse?
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3643
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #1 on: June 08, 2016, 06:53:32 pm »
Think carefully about how you will handle over-range, and whether you need to compensate for nonlinearity over the whole range.
If you are fine with using itoa() or printf("%d") then the conversion to base 10 is already being done for you. All you need to handle is the conversion from one range (n/4096) to two range parts: whole units (n/10) and the fraction (n/10000).
A naive way of doing it is to first multiply the value by 100000 and then divide by 4096:
long voltage = adc_val * 100000UL / 4096;
There are other tricks with tables that can have better performance.
The key word to search for is "fixed point representations" (like floating point, but the exponent doesn't change).
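For instance, a minimal sketch of that naive scaling (function and variable names are mine, assuming a 12-bit result and a 10 V full scale):

```c
#include <stdint.h>

/* Naive fixed-point scaling: 12-bit ADC count -> tenths of a millivolt
   (0..100000), all in integer arithmetic. */
static uint32_t adc_to_tenths_mv(uint16_t adc_val)
{
    return (uint32_t)adc_val * 100000UL / 4096;
}
```

e.g. a mid-scale reading of 2048 comes out as 50000, i.e. 5.0000 V.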
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3461
  • Country: it
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #2 on: June 08, 2016, 07:11:04 pm »
I like the naive way. You want to avoid divisions as much as possible, even when you have a hardware divider, and not only because of computational speed.
"Cheating" like this (multiply, then shift out bits) will be FAST and will retain the greatest amount of information after you throw away those 12 bits.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14219
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #3 on: June 08, 2016, 07:11:51 pm »
Performance of the *100000UL/4096 is not that bad: 4096 is a power of 2, so shifts can be used instead of a full division. The 100000 can also be adjusted to do software scaling in case the full scale is not exactly 10 V but, say, 10.05 V.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #4 on: June 08, 2016, 07:17:08 pm »
My ADC result is 0-4095. I need to convert this to be 0-10V as a string.
First suggestion would be to convert it to millivolts (0-10,000) (maybe more on this in a later post), then convert to a string with repeated subtraction. Here's some pseudocode:
Code: [Select]
  set string to "00.000"
  while value >= 10000
     string[0] += 1
     value -= 10000

  while value >= 1000
     string[1] += 1
     value -= 1000
  ((( Skip the decimal place )))
  while value >= 100
     string[3] += 1
     value -= 100

  while value >= 10
     string[4] += 1
     value -= 10

  while value >= 1
     string[5] += 1
     value -= 1

  ((Suppress leading zeros))
  if string[0] = '0' then
     string[0] = ' '
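In C that might look something like this (my translation, with the comparisons written as >= so exact multiples carry correctly; names are mine):

```c
#include <string.h>

/* Convert 0..10000 millivolts to a "NN.NNN"-style string by repeated
   subtraction; str must have room for 7 chars ("00.000" plus NUL). */
static void mv_to_string(unsigned value, char str[7])
{
    strcpy(str, "00.000");
    while (value >= 10000) { str[0]++; value -= 10000; }
    while (value >= 1000)  { str[1]++; value -= 1000; }
    /* str[2] is the decimal point */
    while (value >= 100)   { str[3]++; value -= 100; }
    while (value >= 10)    { str[4]++; value -= 10; }
    while (value >= 1)     { str[5]++; value -= 1; }
    if (str[0] == '0') str[0] = ' ';   /* suppress leading zero */
}
```

So 9999 mV formats as " 9.999" and 10000 mV as "10.000".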
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #5 on: June 08, 2016, 07:17:58 pm »
It is good practice to scale ADC values to units like V, mV, uV, A, mA, Hz, etc. and append the unit to the variable name so it is absolutely clear what the variable represents. It makes maintaining firmware much easier and therefore saves money.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #6 on: June 08, 2016, 07:21:49 pm »
Just multiply the raw ADC value (0 .. 4095) by 2442 (using 32 bit unsigned int arithmetic), which gives the answer in microvolts.

When you display the answer (microvolts) you can put in a decimal point, to make it volts.

9,999,999 (microvolts) = 9[.]999999 volts = 9999[.]999 millivolts (if you prefer).

EDIT:
Only the first 5 digits are useful/correct.
So (ADC * 2442)/1000, using 32-bit unsigned ints, gives you a directly displayable value in millivolts.

EDIT2;
As other(s) have mentioned, it is good programming practice to label the constants, and use sensible units within the variables. I've NOT done that here, as I just wanted to show you how to do it, simply.

tl;dr
print this (ensuring 32 bits are used; the answer is in millivolts using only integers):
(ADC * 2442)/1000
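In C, with the 32-bit widening made explicit, that might be (my sketch, hypothetical names):

```c
#include <stdint.h>

/* 12-bit count * 2442 gives microvolts (0..9,999,990); dividing by 1000
   drops to millivolts, directly displayable. */
static uint32_t adc_to_mv(uint16_t adc)
{
    return (uint32_t)adc * 2442UL / 1000UL;
}
```

Full scale (4095) comes out as 9999 mV.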
« Last Edit: June 08, 2016, 07:46:47 pm by MK14 »
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3643
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #7 on: June 08, 2016, 07:36:15 pm »
As alert readers have already commented, the slightly more optimized version would look like
long voltage_dmV /* decimillivolt */ = adc_value * 100000UL >> 12;
But the itoa() or printf() functions may be using divide instructions anyway, so don't assume you're saving time: profile!
You will probably also need a division to separate the whole volts from the fractional part, unless you do grovelly things with snprintf() and string insertions.
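For the whole/fraction split, something along these lines (my sketch; the buffer handling and field widths are assumptions):

```c
#include <stdint.h>
#include <stdio.h>

/* Scale a 12-bit count to tenths of a millivolt, then use one divide and
   one modulo to split whole volts from the fractional part for display. */
static void format_voltage(char *buf, size_t n, uint16_t adc_value)
{
    long voltage_dmV = (long)((uint32_t)adc_value * 100000UL >> 12);
    snprintf(buf, n, "%ld.%04ld V", voltage_dmV / 10000, voltage_dmV % 10000);
}
```

e.g. a count of 2867 formats as "6.9995 V".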
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #8 on: June 08, 2016, 07:49:03 pm »
Another note about printing fixed point values: you'll also need to round the result to the number of decimals you want to display.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #9 on: June 08, 2016, 07:49:34 pm »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256

Exercise for the reader (or compiler) to convert to efficient left/right shifts.

Also assumes the full range is an ADC count of 4096; you will need to add one or two more terms to get 4095 to map to 10,000.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #10 on: June 08, 2016, 07:51:58 pm »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256

Exercise for the reader (or compiler) to convert to efficient left/right shifts.

Also assumes the full range is an ADC count of 4096; you will need to add one or two more terms to get 4095 to map to 10,000.

(ADC*2442)/1000

Should be considerably faster (no divides), more accurate (as it uses the CORRECT 4095 value), and quicker to type (shorter).

EDIT:

BUT your solution ONLY needs 16 bit variables, which may be an advantage.
« Last Edit: June 08, 2016, 07:58:23 pm by MK14 »
 

Offline ChristopherTopic starter

  • Frequent Contributor
  • **
  • Posts: 429
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #11 on: June 08, 2016, 08:00:48 pm »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256

Exercise for the reader (or compiler) to convert to efficient left/right shifts.

Also assumes the full range is an ADC count of 4096; you will need to add one or two more terms to get 4095 to map to 10,000.

(ADC*2442)/1000

Should be considerably faster (no divides), more accurate (as it uses the CORRECT 4095 value), and quicker to type (shorter).

EDIT:

BUT your solution ONLY needs 16 bit variables, which may be an advantage.
Yes! Perfect! I understand how you got these numbers, how to change them for other resolutions, and I have made a spreadsheet for future awesomeness.

I guess a DAC will be similar, just in reverse. I have already made sweet functions similar to itoa and scanf for converting strings to ints and back. That's the easy part for me!

Thanks everyone!
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #12 on: June 08, 2016, 10:23:13 pm »
Should be considerably faster (no divides), more accurate (as it uses the CORRECT 4095 value), and quicker to type (shorter).

EDIT:

BUT your solution ONLY needs 16 bit variables, which may be an advantage.

Yeah, here it is properly worked out.
Code: [Select]
Term     value   Qty Sub total
n x 2     8190     1      8190
n         4095         
n / 2     2047     1      2047
n / 4     1023         
n / 8      511         
n / 16     255    -1      -255
n / 32     127         
n / 64      63         
n / 128     31         
n / 256     15     1        15
n / 512      7         
n / 1024     3     1         3
n / 2048     1         
   Final sum of terms    10000

Code: [Select]
adc_mv = (adc<<1) + (adc>>1) - (adc>>4) + (adc>>8) + (adc>>10);

Are you sure it is more accurate? As 4095*2442/1000 = 9999.

Are you sure it will be faster? It really depends on the target CPU...

Also, a lot of this is on shaky foundations, as the ADC values are 'bins' - when properly calibrated, anything from 0 mV to 2.442 mV should end up reading as 0, so maybe it should be displaying 1 mV rather than 0 mV. Likewise the 4095th bin might actually cover 9.9976 V to 10.000 V, giving options of 9.997, 9.998, 9.999 or 10.000 V as the displayed value (depending on rounding preferences).

Yeach - I hate edge cases :)



Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: bilibili

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #13 on: June 08, 2016, 10:45:10 pm »
Should be considerably faster (no divides), more accurate (as it uses the CORRECT 4095 value), and quicker to type (shorter).

EDIT:

BUT your solution ONLY needs 16 bit variables, which may be an advantage.

Yeah, here is it properly worked out.
Code: [Select]
Term     value   Qty Sub total
n x 2     8190     1      8190
n         4095         
n / 2     2047     1      2047
n / 4     1023         
n / 8      511         
n / 16     255    -1      -255
n / 32     127         
n / 64      63         
n / 128     31         
n / 256     15     1        15
n / 512      7         
n / 1024     3     1         3
n / 2048     1         
   Final sum of terms    10000

Code: [Select]
adc_mv = (adc<<1) + (adc>>1) - (adc>>4) + (adc>>8) + (adc>>10);

Are you sure it is more accurate? As 4095*2442/1000 = 9999.

Are you sure it will be faster? It really depends on the target CPU...

Also, a lot of this is on shaky foundations, as the ADC values are 'bins' - when properly calibrated, anything from 0 mV to 2.442 mV should end up reading as 0, so maybe it should be displaying 1 mV rather than 0 mV. Likewise the 4095th bin might actually cover 9.9976 V to 10.000 V, giving options of 9.997, 9.998, 9.999 or 10.000 V as the displayed value (depending on rounding preferences).

Yeach - I hate edge cases :)

I agree, the "fastest" probably depends on the specific CPU and compiler etc. Sometimes the obviously faster method ends up taking 10 times as long, because of unforeseen optimizations and/or the compiler making a mess of things.

I also agree, I can be out (my formula above) by 1 part in 10,000, but since there are only 4096 values, I decided that was "good enough".

ADCs can get very complicated.
Some people would drop the least significant bit, or a few bits, as they quite often just represent noise and/or instability rather than particularly useful values.
Many show at least 1 or 2 bits' worth of jitter each time you read them, even when the input is fixed solidly at a specific voltage (depending on the quality of the ADC).

There are many ways of doing this, none of which is necessarily the "right" or "best" way.
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #14 on: June 08, 2016, 10:47:03 pm »

"
My ADC result is 0-4095. I need to convert this to be 0-10V as a string."

On chips with the right hardware, it is simple.

On chips without the right hardware, you will need to simplify.

adc x 10000 / 4096 = adc x 625 / 256.

"/ 256" can be done by discarding the low 8 bits.

"x 625" can be done by decomposing it into a series of shifts and sums. For example, "x 625 = x 512 + x 128 - x 16 + x 1", all of which can be done via shifts plus sums.

You probably want to benchmark it against other alternate approaches to see if it is indeed faster on your target.
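A sketch of that decomposition in C (my code, hypothetical name; note the final >>8 truncates, as with any integer divide):

```c
#include <stdint.h>

/* adc * 10000 / 4096 == adc * 625 / 256, with the x625 done as shifts+adds:
   625 = 512 + 128 - 16 + 1. */
static uint32_t adc_to_mv_shifts(uint16_t adc)
{
    uint32_t x = adc;
    uint32_t x625 = (x << 9) + (x << 7) - (x << 4) + x;
    return x625 >> 8;   /* divide by 256 by discarding the low 8 bits */
}
```

Mid-scale (2048) gives exactly 5000 mV; full scale (4095) truncates to 9997 mV.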
================================
https://dannyelectronics.wordpress.com/
 
The following users thanked this post: Ian.M

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #15 on: June 08, 2016, 10:58:55 pm »
The other way around, "x 4096 / 10000", is more difficult. What I usually do is approximate it with a ratio that decomposes nicely into shifts. For example, I would approximate it to "x 26529 / 64768". You can then decompose that into a series of shifts and sums and .....

Again, do benchmark it on your target to see for sure if you are gaining anything.
================================
https://dannyelectronics.wordpress.com/
 

Offline orin

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #16 on: June 08, 2016, 11:02:54 pm »
So the original poster wanted it as a string, with 0-10V.

Given the requirement to convert to decimal and a range of 0-10V, you can treat the ADC result as a binary fraction with the binary point to the left of the 12th bit.  Multiply by 10 and you get the most significant decimal digit in the 13th to 16th bits.  Discard the 13th to 16th bits and multiply by 10 again to get the next decimal digit.  Produce 5 digits this way, then round the result to 4 digits.

Only multiplies by 10 are required which can be done with a shift and a couple of adds.

If you want to scale by 4096/4095, you can add an extra bit to the right of the ADC result and set it to 1 for ADC values >= 2048.  You can also round up by adding 1 in the extra bit position (in this case, the first digit can come out as 0xA which needs to be special cased as 10).  Unfortunately, adding the extra bit requires 17 bits to implement; I'd shift the ADC result left by 4 to start with in this case and use 24/32 bit arithmetic, depending on the machine/compiler the code is running on.  I've used this method successfully in PIC assembly code where you really don't want to do multiply or divide.

 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #17 on: June 08, 2016, 11:18:01 pm »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256
Exercise for the reader (or compiler) to convert to efficient left/right shifts.
Any decent compiler already does this, so no need to get creative (and probably make things worse). However, since many modern microcontrollers have hardware multipliers, multiplying the ADC value by the full range (say 10000 mV) and then dividing by the ADC range (which is a power of 2) will be faster.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #18 on: June 08, 2016, 11:29:46 pm »
Given the requirement to convert to decimal and a range of 0-10V, you can treat the ADC result as a binary fraction with the binary point to the left of the 12th bit.  Multiply by 10 and you get the most significant decimal digit in the 13th to 16th bits.  Discard the 13th to 16th bits and multiply by 10 again to get the next decimal digit.  Produce 5 digits this way, then round the result to 4 digits.

Awesome - love it!  :-+
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #19 on: June 08, 2016, 11:58:27 pm »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256
Exercise for the reader (or compiler) to convert to efficient left/right shifts.
Any decent compiler already does this so no need to get creative (and probably make things worse).
True, but I still find great comfort in knowing that it will only be working with numbers that are bounded within the range of 0 to around 10,500.

If this was in a function and somebody gave it an input value of 4,095,000 (e.g. if for some silly reason they wanted to display microvolts), it would still calculate a correct answer.

It might not be important to you, but it is to me.

However Orin's answer is very, super nice:

Code: [Select]
  char digit[5];
  u16 i;
  u16 t = adc;

  /* Slight fudge to map 4095 to 10,000  */
  t += t>>11;

  /* Leading '1' or space */
  digit[0] = (t&0xF000) ?  '1' : ' ';
  t &= 0xFFF;

  /* Other digits */
  for(i = 1; i < 5; i++) {
    digit[i] = (t >> 12)+'0';
    t &= 0xFFF;
    t = (t<<1) + (t<<3);
 }

Can I steal it? Hats off to you sir!
« Last Edit: June 09, 2016, 02:53:55 am by hamster_nz »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #20 on: June 09, 2016, 12:23:10 am »
Quote
Only multiplies by 10 are required which can be done with a shift and a couple of adds.

Pretty sleek indeed.

My implementation of your idea:

Code: [Select]
#define X2(x)  ((x) << 1)
#define X8(x)  ((x) << 3)
#define X10(x) (X8(x) + X2(x))

//convert a 12-bit value to a decimal string
void dat2str8(char *str, uint16_t dat) {
  dat = X10(dat); str[0] = (dat >> 12) + '0'; dat &= 0x0fff;
  dat = X10(dat); str[1] = (dat >> 12) + '0'; dat &= 0x0fff;
  ...
}

very good, I have to say.
================================
https://dannyelectronics.wordpress.com/
 

Offline wraper

  • Supporter
  • ****
  • Posts: 16869
  • Country: lv
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #21 on: June 09, 2016, 12:24:54 am »
"cheating" like this (multiply then shift out bits) will be FAST
Depends on the MCU. Some modern 8051 variants, like the Silabs CIP-51 core, can divide in a few cycles, but shifting by several bits takes more CPU cycles because the core can only "rotate right through the carry" one bit at a time, so a multi-bit shift basically becomes a loop of rotate-right-through-carry n times. About 2 weeks ago I needed to optimize code for speed by throwing out bit shifting and using division instead.
 

Offline orin

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #22 on: June 09, 2016, 02:32:03 am »
Code: [Select]
adc_mv = adc*2 + adc/2 - adc/16 + adc/256
Exercise for the reader (or compiler) to convert to efficient left/right shifts.
Any decent compiler already does this so no need to get creative (and probably make things worse).
True, but I still find great comfort in knowing that it will only be working with numbers are bounded within the range of 0 to around 10500.

If this was in a function, and somebody gives it an input value of 4,095,000 as an input (e.g. if for some silly reason they wanted to display in microvolts) it will still calculate a correct answer.

It might not be important to you, but it is to me.

However Orin's answer is very, super nice:

Code: [Select]
  char digit[5];
  u16 i;
  u16 t = adc;

  /* Slight fudge to map 4095 to 10,000  */
  t += t>>11;

  /* Leading '1' or space */
  digit[0] = (t&0xF000) ?  '1' : ' ';
  t &= 0xFFF;

  /* Other digits */
  for(i = 1; i < 5; i++) {
    digit[i] = (t >> 12)+'0';
    t &= 0xFFF;
    t = (t<<1) + (t<<3);
 }

Can I steal it? Hats off to you sir!


Sure you can.  It's public domain as far as _I_ am concerned.

Orin.
 

Offline orin

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #23 on: June 09, 2016, 02:47:30 am »
"cheating" like this (multiply then shift out bits) will be FAST
Depends on the MCU. Some modern 8051 variants, like the Silabs CIP-51 core, can divide in a few cycles, but shifting by several bits takes more CPU cycles because the core can only "rotate right through the carry" one bit at a time, so a multi-bit shift basically becomes a loop of rotate-right-through-carry n times. About 2 weeks ago I needed to optimize code for speed by throwing out bit shifting and using division instead.


Horses for courses...

FWIW, multiply by 10 doesn't really need shifts, but you could use <<1 for all but the 3rd line:

t = x + x;  // t == 2*x
t = t + t;   // t == 4*x
t = t + x;  // t == 5*x
t = t + t;  // t == 10*x


 
The following users thanked this post: oPossum, hamster_nz

Offline orin

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #24 on: June 09, 2016, 03:43:17 am »
So here is some of my original PIC code.  ReadAtoD reads 10 bits from an ADC into the high bits of AtoDRes.

So it is using an actual multiply routine rather than shifts/adds to multiply by 10.  Partly because it's not always 10 in W at the ThreeC label - the same technique works for scaling by any integer 1 to 10.

CallRet is really just jump and DisplayDigit is uninteresting.

Scaling by powers of 10 is merely a matter of where you put the decimal point in the generated digits.  The code at ThreeC doesn't add a decimal point, so you have three multiplies by 10, hence a scale factor of 1000.

Code: [Select]
        call    ReadAtoD

         movlw   .10                     ; Scale by 1000 (2:1 on board)
ThreeC                                  ; Three digits then 'C'
        call    FirstDigit
        call    NextDigit
        call    NextDigit
; SNIP


;--------------------------------------------------------------

;
; FirstDigit - scale by factor in W and print the first digit
;
FirstDigit
        movwf   ACCbLO
        clrf    ACCbHI
        movf    AtoDResLO, W    ; Max scale is 1000
        movwf   ACCaLO
        movf    AtoDResHI, W
        movwf   ACCaHI
        call    D_mpyS
        CallRet DisplayDigit

;
; NextDigit - multiply by 10 and display digit
;
NextDigit
        movf    ACCcLO, W
        movwf   ACCaLO
        movf    ACCcHI, W
        movwf   ACCaHI
        movlw   .10
        movwf   ACCbLO
        clrf    ACCbHI
        call    D_mpyS
        CallRet DisplayDigit
 

Offline Brutte

  • Frequent Contributor
  • **
  • Posts: 614
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #25 on: June 09, 2016, 09:39:34 am »
This ADC returns 2^12 values. If you want that scaled and the reference does not match, then some kind of rounding must be involved, so the most logical choice is to minimize the rounding error (rounding noise).
That can be done with decimation: if you sum n consecutive ADC samples and divide the sum by m, the scaling can be performed easily even in fixed point.
Mind that in C "y=a/b;" is a "divide with truncation" and it injects 1 bit of noise. That noise can easily be cut in half if you "divide with rounding".
The decimation requires some dithering and an antialiasing filter (it limits the bandwidth), but it also cuts the variance of the ADC noise.
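Divide-with-rounding is just a half-divisor bias before the truncating divide, e.g. (my sketch, for unsigned values):

```c
#include <stdint.h>

/* y = a/b truncates toward zero; adding b/2 first rounds to nearest,
   halving the worst-case quantisation error. */
static uint32_t div_round(uint32_t a, uint32_t b)
{
    return (a + b / 2) / b;
}
```

So div_round(7, 2) gives 4 where plain 7/2 truncates to 3.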
 

Offline Tomorokoshi

  • Super Contributor
  • ***
  • Posts: 1212
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #26 on: June 10, 2016, 02:40:11 am »
Alternate solution:

Don't make your reference 10.000V; make it 10.2375V.

Then each count is 10.2375V / 4095 = 0.0025V.

Multiply the resulting number of counts by 25 to get the value in units of 0.1 mV (2.5 mV per count). Alternately, divide by 400 to get volts.

This also gives you a little overhead to detect > 10.000 V conditions.
 

Offline bktemp

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #27 on: June 10, 2016, 04:52:33 am »
Don't make your reference 10.000V; make it 10.2375V.

Then each count is 10.2375V / 4095 = 0.0025V.
Almost all ADCs and DACs cannot reach the full positive reference voltage; the top step is missing. So the correct divisor is 4096, not 4095. Using a 10 V reference voltage, the maximum measurable voltage is 4095/4096 * 10 V = 9.99756 V.
You often do not notice that, because the gain/offset error of most ADCs/DACs is larger than 1 LSB.
Using 4096 you get a more convenient reference voltage of 10.24 V. Most 10 V references with a trimming pin can be adjusted to 10.24 V.
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #28 on: June 10, 2016, 10:34:48 am »
Quote
Alternate solution:

Don't make your reference 10.000V; make it 10.2375V.

Plenty of references come in at powers of 2 (in millivolts): 1.024 V, 2.048 V, ...., exactly for that reason.
================================
https://dannyelectronics.wordpress.com/
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #29 on: June 11, 2016, 08:26:04 pm »
I benchmarked orin's approach against the typical mod + div approach on a few chips, converting a 24-bit type into the highest 8 digits.

PIC16F1936: orin's approach takes 4% of the time to execute vs. 100% for the mod + div approach
CM0: orin's approach takes 7% to execute
CM3: orin's approach takes 46% to execute
LM4F120: orin's approach takes 83% to execute.

So it seems that on a bigger chip the advantage is smaller; on the smaller chips, the mod + div approach can take a lot longer.

Very impressive, indeed.
================================
https://dannyelectronics.wordpress.com/
 
The following users thanked this post: orin

Online voltsandjolts

  • Supporter
  • ****
  • Posts: 2302
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #30 on: December 02, 2022, 03:21:59 pm »
[sorry for waking the dead]

I think Orin's approach is quite common in bin-to-dec conversion, although in this case the initial scaling of the ADC result is not needed, as he points out.

e.g. itoa kinda function...

Code: [Select]
/*
 * https://stackoverflow.com/questions/7890194/optimized-itoa-function
 * Convert an unsigned number in the range 0...99,000 to decimal 5 ascii digits using Q4.28 notation.
 * Terminating NULL is NOT provided.
 */
void utl_unsigned_to_5digit_ascii(char *buf, uint32_t val) {
    const uint32_t f1_10000 = ((1UL << 28) / 10000UL);

    /* 2^28 / 10000 is 26843.5456, but 26843.75 is sufficiently close */
    uint32_t tmp = val * (f1_10000 + 1) - (val / 4);

    for(uint32_t i = 0; i < 5; i++)
    {
        uint8_t digit = (uint8_t)(tmp >> 28);
        buf[i] = '0' + (uint8_t) digit;
        tmp = (tmp & 0x0fffffff) * 10;
    }
}
« Last Edit: December 02, 2022, 03:23:41 pm by voltsandjolts »
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 5030
  • Country: ro
  • .
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #31 on: December 02, 2022, 03:38:50 pm »
Some PIC microcontrollers have an internal voltage reference configurable to 1.024 V, 2.048 V and 4.096 V - your supply voltage must be above the voltage reference, of course, so you could use 2.048 V if you power your microcontroller from 3.3 V, and 4.096 V if you power it from 5 V.

If you set the voltage reference to 4.096 V you basically have 4 mV per step, so you can simply measure with your ADC and multiply by 4. Choose your voltage-divider resistors to give a 1:3 ratio or something like that... for example, 4096 mV on the ADC pin = 12 V input.

Now each ADC step is 4 mV x 3 = 12 mV.

If the voltage reference is not configurable, or you rely on the PIC's supply voltage as the reference, I would still choose the voltage-dividing resistors so that you can simply do some bit shifting (divide by 2, by 4, etc.)
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3701
  • Country: gb
  • Doing electronics since the 1960s...
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #32 on: December 02, 2022, 04:26:08 pm »
One should also consider whether there is a need for accuracy.

I haven't read the whole thread but even 12 bits is 1/4096 i.e. 0.025%. I've been in "precision analog" since the 1970s (when "precision analog" was really expensive, with an AD504 costing 20 quid) and 0.025% accuracy is not gonna be cheap/easy to achieve in production.

I am doing a product now which has a 16 bit ADC and it all needs factory cal stored in FLASH, against a really carefully built test rig calibrated to < 0.01%.

Also most 12 bit ADCs are pretty fast and it can take a lot more time to get the data out via SPI or I2C. I once bit-banged I2C on a H8/300 to drive an ADS7828 and it took 600us to get the data out :) That probably compares to a floating point multiply on a very slow CPU...

And almost no on-chip 12 bit DAC/ADC yields a real 12 bit value. The last 2 bits tend to be random.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #33 on: December 02, 2022, 04:59:07 pm »
Generally: for an n-bit ADC, round n up to the nearest multiple of 8 bits. Then use a multiplication factor of the same width, producing a 2n-bit result, and divide that result by 2^m, which can often be just 2^n.

The multiplier serves three purposes, all in one:
* transform the input to be large enough so that further division does not reduce resolution,
* work as unit conversion (for example, going from ADC count to microvolts, millivolts, fixed point volts, whatever)
* work as calibration factor: make the multiplier adjustable (fine-tunable)

The divider could be anything, but in most instruction sets, especially on small MCUs, multiplication is fast while arbitrary division is slow. Dividing by 2^n is fast, though. So adjust the multiplier to be any arbitrary number and divide by 2^n.

Example with 12-bit ADC.
Say the 12-bit range [0 .. 4095] corresponds to [0 .. 3.3 V]. We want the output in units of 0.1 mV, [0 .. 33000], so that one can just add a decimal point before the last digit when printing. So we want to multiply by 33000/4096 = 8.056640625.

1) Multiply by 33000
2) Divide the result by 4096

How about numerical ranges of types?
33000 fits in uint16
4095 fits in uint16
Hence, result should fit in uint32.
Double-check with the maximum count: 4095 * 33000 = 135,135,000. UINT32_MAX is 2^32 - 1, way more than this.

In C:
static uint16_t calib = 33000;

uint16_t adc = read_from_adc();
uint16_t result = ((uint32_t)calib * (uint32_t)adc)/4096;

Adding the decimal point prior to printing is left as an exercise to the reader.
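That exercise, for the 0.1 mV units used here, can be sketched as (my helper, hypothetical names):

```c
#include <stdint.h>
#include <stdio.h>

/* Format a result held in 0.1 mV units with the decimal point added,
   e.g. 32992 -> "3299.2 mV". */
static void format_mv(char *buf, size_t n, uint16_t result)
{
    snprintf(buf, n, "%u.%u mV",
             (unsigned)(result / 10), (unsigned)(result % 10));
}
```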


TLDR:
* multiplication is cheap
* division by 2^n is cheap
* large integer types are cheap (even a 64-bit type on a 8-bit microcontroller) compared to floats, use them for intermediate results.
« Last Edit: December 02, 2022, 05:24:59 pm by Siwastaja »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #34 on: December 02, 2022, 08:25:39 pm »
I used to use fixed point as well, but the often 'odd' units (like decimeters, deciVolts, etc.) didn't make the code much easier to read. Nowadays I just use floating point and be done with it. Realistically, all the fixed-point calculations in between add up to quite a few cycles, and the speed of soft floating point is more often than not fast enough anyway, especially where it comes to displaying numbers. But I'm using soft floating point in applications with sample rates into several kHz as well.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline mikerj

  • Super Contributor
  • ***
  • Posts: 3240
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #35 on: December 02, 2022, 09:33:21 pm »
Should be considerably faster (no divides), more accurate (as it uses the CORRECT 4095 value), and quicker to type (shorter).

A unipolar ADC or DAC hits its full-scale output at \$FS=V_{ref}{{2^N-1}\over{2^N}}\$, i.e. full scale is 1 LSB below the reference voltage, which in this case is (4095/4096)*Vref.  To convert a code to a voltage, dividing by 4096 would be correct.
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #36 on: December 02, 2022, 10:00:41 pm »

 \$FS=V_{ref}{{2^N-1}\over{2^N}}\$

Where did you get that formula from?

If the reference is exactly 10.00000000000000000000 volts, and the maximum the ADC can read is $FFF.

Then the full-scale (max) reading, for exactly 10.000000000 volts input to the ADC, would be 4095 decimal ($FFF).
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #37 on: December 03, 2022, 02:49:02 am »
\$FS=V_{ref}{{2^N-1}\over{2^N}}\$
Where did you get that formula from?
It applies to all successive approximation ADCs (aka SAR ADCs).  The reason is that the reference voltage corresponds to code \$2^N\$, which cannot be reached by the successive approximation.  SAR ADCs work by starting with \$V_\min=0\$, \$V_\max=V_\text{REF}\$, applying the bisection method to halve the range, keeping \$V_\min \le V_\text{IN} \le V_\max\$.  It takes \$N\$ steps to resolve an \$N\$-bit code, using essentially a DAC and a comparator (that tells whether \$V_\text{IN}\$ is above or below \$V_\text{DAC} = \frac{V_\min + V_\max}{2}\$).  Because of this, the range of the conversion is between \$0\$ and \$2^N-1\$, inclusive, and the largest possible code \$2^N-1\$ refers to \$V_\text{REF}\frac{2^N-1}{2^N}\$.

This is typical of many similar algorithms both in electronics and computing.  You can find detailed descriptions of this at Wikipedia, the successive approximation ADC article, and the binary search and bisection method articles.
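The bisection can be sketched as a short loop; this is just an illustrative model of the algorithm (names and units are mine), not any particular chip:

```c
#include <stdint.h>

/* Illustrative N-bit SAR conversion: a trial DAC plus a comparator.
   Voltages are integers in any consistent unit, e.g. microvolts.
   The comparison vin >= vref*trial/2^N is cross-multiplied so it
   stays in integer arithmetic. */
static uint32_t sar_convert(uint64_t vin_uv, uint64_t vref_uv, unsigned nbits)
{
    uint32_t code = 0;
    for (unsigned bit = nbits; bit-- > 0; ) {
        uint32_t trial = code | (1u << bit);
        if (vin_uv * (1u << nbits) >= vref_uv * trial)
            code = trial;   /* keep the bit: vin is at or above the trial level */
    }
    return code;  /* 0 .. 2^nbits - 1; the code for vref itself is unreachable */
}
```

For a 12-bit converter with a 3.3 V reference, an input of exactly VREF/2 lands on code 2048, and VREF itself saturates at 4095.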
 
The following users thanked this post: MK14

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14490
  • Country: fr
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #38 on: December 03, 2022, 03:03:34 am »
Yep. But in the end it's down to the fact that you can't represent 2^N with N bits. Or, if you can, you can't represent 0. Or, alternatively, the scale is not linear.
 
The following users thanked this post: MK14

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #39 on: December 03, 2022, 03:17:41 am »
\$FS=V_{ref}{{2^N-1}\over{2^N}}\$
Where did you get that formula from?
It applies to all successive approximation ADCs (aka SAR ADCs).  The reason is that the reference voltage corresponds to code \$2^N\$, which cannot be reached by the successive approximation.  SAR ADCs work by starting with \$V_\min=0\$, \$V_\max=V_\text{REF}\$, applying the bisection method to halve the range, keeping \$V_\min \le V_\text{IN} \le V_\max\$.  It takes \$N\$ steps to resolve an \$N\$-bit code, using essentially a DAC and a comparator (that tells whether \$V_\text{IN}\$ is above or below \$V_\text{DAC} = \frac{V_\min + V_\max}{2}\$).  Because of this, the range of the conversion is between \$0\$ and \$2^N-1\$, inclusive, and the largest possible code \$2^N-1\$ refers to \$V_\text{REF}\frac{2^N-1}{2^N}\$.

This is typical of many similar algorithms both in electronics and computing.  You can find detailed descriptions of this at Wikipedia, the successive approximation ADC article, and the binary search and bisection method articles.

The DEFINITION, for this thread, seems to be given as:

My ADC result is 0-4095. I need to convert this to be 0-10V as a string.

So, 4095 is DEFINED to be EXACTLY 10.00000000000000000... Volts.

It may or may not be successive approximation, or another type.  It may or may not have a 10 volt reference, etc.
 
The following users thanked this post: Nominal Animal

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #40 on: December 03, 2022, 03:36:12 am »
Also note.  The thread seems to be from around 6.5 years ago, and the OP, doesn't seem to have logged in here, for around 4.5 years.
 
The following users thanked this post: Siwastaja, newbrain

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #41 on: December 03, 2022, 04:32:46 am »
Sure.  And you're right, MK14; I didn't mean to imply it applies here, just wanted to describe where it comes from.  :-+

Let's delve into the math a bit, though, because even the integer form of the Arduino map() gets this stuff wrong (it does not yield the same results as the float form does!).



When you want to convert \$V\$ to \$R\$, with \$V_\min \le V \le V_\max\$, and \$R = R_0\$ when \$V = V_\min\$ and \$R = R_1\$ when \$V = V_\max\$, the exact mathematical formula is
$$R = R_0 + \left(V - V_\min\right) \frac{R_1 - R_0}{V_\max - V_\min}$$

When we use integers and integer arithmetic (\$V, R \in \mathbb{Z}\$), the situation is somewhat more interesting, especially when using SAR ADCs, because then \$V_\min \le V \lt V_\max\$, and \$V_\max = V_\min + 2^N\$.  In other words, \$V_\max\$ is exclusive.  Nothing else in the formula changes, though.  To apply correct rounding, i.e.
$$R = R_0 + \left\lfloor \left(V - V_\min\right) \frac{R_1 - R_0}{V_\max - V_\min} \right\rceil$$
we do
$$R = R_0 + \left\lfloor \frac{ (V - V_\min) (R_1 - R_0) + C}{V_\max - V_\min} \right\rceil, \quad C = \operatorname{sgn}(R_1 - R_0) \frac{V_\max - V_\min}{2}$$
i.e. \$C\$ is half the divisor with the same sign as \$R_1 - R_0\$, and picking \$\lfloor \rfloor\$ when \$C\$ is positive, and \$\lceil \rceil\$ when \$C\$ is negative.

In C, we can write this using correct rounding as for example
Code: [Select]
int32_t  map(const int32_t in, const int32_t inmin, const int32_t inlimit, const int32_t outmin, const int32_t outlimit)
{
    const uint32_t  i = (in >= inmin) ? in - inmin : 0;
    const uint32_t  im = (inlimit > inmin) ? inlimit - inmin : 1;
    if (outlimit > outmin) {
        const uint32_t  om = outlimit - outmin;
        return outmin + (int32_t)((i * om + (im >> 1)) / im);
    } else
    if (outmin > outlimit) {
        const uint32_t  om = outmin - outlimit;
        return outmin - (int32_t)((i * om + (im >> 1)) / im);
    } else
        return outmin;
}
When inlimit - inmin is a power of two, the division simplifies to a bit shift.  The above only works correctly when im*om ≤ 2^32, and inmin ≤ in < inlimit.  Then, if outmin < outlimit, outmin ≤ result < outlimit.  If outmin > outlimit, then outlimit < result ≤ outmin, and result is correctly rounded, halfway towards outlimit.  However, if im ≥ 2·om, i.e. the output range is half of the input range or less, then result can reach outlimit due to rounding.  When the output range is smaller than the input range, I recommend omitting the rounding.  Whenever the output range is more than half the input range, result won't reach outlimit unless in ≥ inlimit.

(We could do the above for short, int and long types in Arduino, using __builtin_clz() or __builtin_clzl() to find the size of im and om in bits and using a larger cast (long or long long) when necessary, getting fast but accurate integer map() whose behaviour matches that of the one with float parameters.  Details like this bug me, because they are often the cause of weird behavioural differences due to tiny changes in the code that should not affect arithmetic results that much.)

When inmin, inlimit, outmin, and outlimit are fixed, it makes sense to precalculate om and im, and divide both by their greatest common divisor (which for a power-of-two im means shifting both right until om becomes odd).

For the 0,4096 → 0,10000 conversion (assuming exclusive limits, like on SAR ADCs), 10000/4096 = 625/256, exactly, and the result of the multiplication fits in 22 bits.  The function is then simply
Code: [Select]
uint_fast16_t  adc_to_millivolts(uint_fast16_t i) { return ((uint_fast32_t)i * 625 + 128) >> 8; }
and adc_to_millivolts(4095) == 9998, as expected. (40950000/4096 = 9997.55859375, which rounds to 9998.)
In fact, this routine returns the exact same results as (uint_fast16_t)floor(0.5 + i * 10000.0 / 4096.0); and, if using round away from zero, with (uint_fast16_t)round(i * 10000.0 / 4096.0).  The latter differs from the former for i=128,640,1152,1664,2176,2688,3200,3712 because the standard rule for IEEE 754 floating-point math is to round halfway towards odd, not away from zero, and these values of i are at exact half-millivolt boundaries.

With inclusive limits 0,4095 → 0,10000 conversion requires a division by 4095, or a fractional multiplication by 10000/4095 = 2000/819 ≃ 2.44200244200... which we can approximate with 160039/65536, compensating a bit with the rounding constant (32951 instead of 32768):
Code: [Select]
uint_fast16_t  adc_to_millivolts(uint_fast16_t i) { return ((uint_fast32_t)i * 160039 + 32951) >> 16; }
which yields 169 instead of 168 for i=69; 1390 instead of 1389 for i=569; 3390 instead of 3389 for i=1388; and 8610 instead of 8611 for i=3526; and the exact same results as (uint_fast16_t)floor(0.5 + i * 10000.0 / 4095.0) for the rest of i=0..4095, inclusive.
« Last Edit: December 03, 2022, 04:45:17 am by Nominal Animal »
 
The following users thanked this post: helius, MK14

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #42 on: December 03, 2022, 07:34:05 am »
The DEFINITION, for this thread, seems to be given as:

My ADC result is 0-4095. I need to convert this to be 0-10V as a string.

So, 4095 is DEFINED to be EXACTLY 10.00000000000000000... Volts.

Technically correct (that the OP said so), but it's very likely the OP made a mistake here and meant that 4096 is defined as exactly 10.0V. This is a common mistake to make (assuming that the maximum count of a SAR ADC represents the reference voltage, when in reality maximum count + 1 represents the reference).
 
The following users thanked this post: mikerj, Ian.M, MK14

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3480
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #43 on: December 03, 2022, 10:30:23 am »
One "trick" is to use a reference voltage that is a power of 2, e.g. 4.096 V or 2.048 V.  Then each count is a power of 2 in millivolts.  After that, it is just a matter of converting the binary values to BCD (or whatever) for printing.  Example: a 10-bit ADC with a 4.096 V reference gives 4 mV per count, which is just a couple of left shifts.
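As a sketch of the trick (the function name is mine): with a 4.096 V reference and a 10-bit ADC, one count is 4 mV, so the scaling is just a shift:

```c
#include <stdint.h>

/* 10-bit ADC, 4.096 V reference: 4096 mV / 1024 counts = 4 mV per count,
   so millivolts are just the count shifted left by two. */
static uint16_t counts_to_millivolts(uint16_t counts)
{
    return counts << 2;
}
```

counts_to_millivolts(1023) gives 4092 mV, one LSB below the reference.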
 
The following users thanked this post: MK14

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #44 on: December 03, 2022, 01:23:36 pm »
Consider a perfect 1-bit ADC, which is a perfect comparator with VREF/2. The result of conversion is 0 or 1, corresponding to intervals 0..VREF/2 and VREF/2..VREF.

See the point? The converted digital value does not correspond to a single voltage but to an interval of voltages. You can consider the middle of the interval to be the "true value", giving a symmetrical error. Or you can consider the lower end of the interval (the upper end would be... strange); that's the equivalent of simply multiplying the conversion result by VREF/2^N (and "never reaching VREF").
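In integer arithmetic, the two interpretations differ only by half an LSB added before scaling; a sketch (function names and microvolt units are mine):

```c
#include <stdint.h>

/* Lower-edge interpretation: code * VREF / 2^N. */
static uint32_t code_lower_uv(uint32_t code, uint32_t vref_uv, unsigned nbits)
{
    return (uint32_t)(((uint64_t)code * vref_uv) >> nbits);
}

/* Mid-interval interpretation: (code + 1/2) * VREF / 2^N,
   kept in integers as (2*code + 1) * VREF / 2^(N+1). */
static uint32_t code_mid_uv(uint32_t code, uint32_t vref_uv, unsigned nbits)
{
    return (uint32_t)(((uint64_t)(2 * code + 1) * vref_uv) >> (nbits + 1));
}
```

For the 1-bit example with a 3.3 V reference, code 1 reads 1.65 V at its lower edge but 2.475 V at mid-interval.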

Further discussion of this notion in footnote 3 here.

JW
« Last Edit: December 03, 2022, 01:26:30 pm by wek »
 
The following users thanked this post: mikerj, MK14

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #45 on: December 03, 2022, 03:19:25 pm »
One "trick" is to use a reference voltage that is a power of 2, e.g., 4.096 V or 2.048 V.  Then each count is a power of 2 in millivolts.  After that, it is just a matter of converting the binary files to BCD (or whatever) for printing.  Example: 10 bit ADC with 4.096 Vref gives 4 mV per count, which is just a couple of left shifts.

That's nice in theory but practical circuits usually have some kind of calculation anyway - resistor dividers, unit conversions, etc. Single multiplication wraps all (linear) calculations in one operation. Further, the multiplication and division are nearly free operations, especially on 32-bit micros but not bad on 8-bitters, either.

Binary to decimal conversion during printing is an order of magnitude slower.

This is why I have never seen the point in 2.048V etc. references - I pick the best reference based on all other specifications except that.
 
The following users thanked this post: MK14

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #46 on: December 03, 2022, 05:10:10 pm »
Ok, well I'll replace some people's speculation with my own WILD speculation.
Some of the OP's posts seem to describe a particular ADC, which MIGHT be the same as the one in this thread.  It seems so, but that is only an ASSUMPTION.

https://www.eevblog.com/forum/beginners/i2c-multiple-chips-not-working/msg608117/#msg608117

Bold in quote, added by me.
A DAC, and ADC and an IO expander. The DAC and IO expander work OK, I can access the registers fine. The ADC (Linear LTC2990) does not like me.

https://www.analog.com/media/en/technical-documentation/data-sheets/ltc2990.pdf

But we would also probably need to know what external components were used (schematic), and what settings they used, to really get to the bottom of it.

The datasheet doesn't seem to make it clear what technology is used for the ADC, or the precise internal details of its operation.
 

Online Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3369
  • Country: nl
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #47 on: December 03, 2022, 09:50:36 pm »
I do not want to use Floating points, for obvious reasons.

Please explain.
It's not so obvious to me.
There are plenty of uCs these days with built-in multiplication instructions, and floating-point libraries are (or at least should be) heavily optimized.
You can do manual fixed point math or other routines but it takes a lot more effort to fine tune these compared to a single floating point multiplication with a calibration value.

Floating point is not as "heavy" as it's perceived to be on a lot of microcontrollers these days.

Edit:
 :-DD OP has not posted in this thread after 2016
« Last Edit: December 03, 2022, 09:53:17 pm by Doctorandus_P »
 
The following users thanked this post: nctnico, MK14

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3480
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #48 on: December 03, 2022, 10:00:42 pm »
That's nice in theory but practical circuits usually have some kind of calculation anyway - resistor dividers, unit conversions, etc. Single multiplication wraps all (linear) calculations in one operation. Further, the multiplication and division are nearly free operations, especially on 32-bit micros but not bad on 8-bitters, either.

Binary to decimal conversion during printing is an order of magnitude slower.

This is why I have never seen the point in 2.048V etc. references - I pick the best reference based on all other specifications except that.

The question asked was how to convert ADC binary to decimal.  That was answered.  As for further calculations, I agree: generally do those last.  BUT, that was not the question.  As for binary to BCD, I have no clue how those C wizards do it.  In assembly, there are several ways.  Double dabble is among the slowest.  Up to 17 bits, there are several polynomial methods on PICList that are quite fast compared to 16-bit division on 8-bit horse carts.
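For reference, the double dabble mentioned above looks like this in C (slow, as noted, but simple); the digits come out as packed BCD nibbles:

```c
#include <stdint.h>

/* Double dabble: convert a 16-bit binary value to packed BCD
   (five 4-bit decimal digits, least significant digit in the low nibble). */
static uint32_t bin16_to_bcd(uint16_t value)
{
    uint32_t bcd = 0;
    for (int i = 15; i >= 0; i--) {
        /* "dabble": add 3 to every BCD digit that is 5 or more ... */
        for (int d = 0; d < 5; d++)
            if (((bcd >> (4 * d)) & 0xF) >= 5)
                bcd += (uint32_t)3 << (4 * d);
        /* ... then "double": shift left and bring in the next binary bit */
        bcd = (bcd << 1) | ((value >> i) & 1u);
    }
    return bcd;
}
```

bin16_to_bcd(4095) returns 0x4095, ready for per-nibble display.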
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #49 on: December 04, 2022, 12:43:59 am »
As for binary to BCD, I have no clue how those C wizards do it.
printf("%f", my_float_here);

I know this is a bit tongue in cheek but the C libraries that come with good C compilers are pretty well optimised. There likely isn't much improvement you can achieve by writing your own unless your application is a rare corner case.

Not using floating point is one of the many ill advised dogmas that surround embedded programming.
« Last Edit: December 04, 2022, 02:03:41 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #50 on: December 04, 2022, 08:55:49 am »

Not using floating point is one of the many ill advised dogmas that surround embedded programming.
That was my first thought when I read this thread. When I started using microcontrollers in 2001, I avoided floating point. I managed to do rather complicated calculations using up to 64-bit integers, carefully taking care not to overflow. Yes, it worked, and it probably even was necessary due to the quite small FLASH space.

But likely it was slower compared to floating point. And it's much easier to program with floating point.

I would say in most applications speed is not an issue as long as you don't do something stupid. An 8-bit MCU can do quite a lot of floating-point calculations in a ms.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14490
  • Country: fr
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #51 on: December 04, 2022, 08:10:49 pm »
Not using floating point is one of the many ill advised dogmas that surround embedded programming.

Not using FP in any context, unless you really need it, is a very reasonable approach actually. It does make even more sense on targets which do not have hardware support for FP, because it's pretty slow then, but speed matters only if uh, it matters. Depending on the target and constraints, speed may not be a factor. But in all cases (not restricted to "embedded"), using FP should be done knowing its limitations. Not just in terms of speed of course. Accuracy and rounding errors.

Using FP because you don't know basic arithmetic is pretty sad and never leads to anything good. People not mastering enough arithmetic to implement basic integer/fixed-point computation should not be allowed to code software for any computational task IMHO. I mean, in a professional context of course. Hobbyist can well do what they want as long as it's fun.
 
The following users thanked this post: Siwastaja, jpanhalt

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3480
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #52 on: December 04, 2022, 08:47:48 pm »
Using FP because you don't know basic arithmetic is pretty sad and never leads to anything good. People not mastering enough arithmetic to implement basic integer/fixed-point computation should not be allowed to code software for any computational task IMHO. I mean, in a professional context of course. Hobbyist can well do what they want as long as it's fun.

I couldn't agree more.  One of the classic problems in my college days (slide rules was all we had) was, "Assume you have a rope that does not stretch and tie it around the Earth.  Then add 6 feet to it.  How high is it above the surface if raised everywhere equally."  Of course, the diameter and circumference of the Earth were give as distractors.  Alert students set the problem up, quickly realized the circumference and diameter don't matter and got the correct answer.  Others didn't.

As I said in my earlier post, you need to understand the math and get it reduced to its simplest form before crunching the numbers.  I only use fixed-point on 8-bit PIC's without much hardware support.  24-bit is a little slower than 16/17-bit, but neither is particularly slow, and 1 part in 100,000 has been good enough for me.  I don't do finance though.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14219
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #53 on: December 04, 2022, 09:34:12 pm »
Whether the resolution is enough depends on the use. When doing a frequency / time thing, one may get into the range where single-precision FP is not sufficient. The same can happen with some math problems that are numerically not ideal, like calculating the standard deviation of data with a large mean value. This example may need around twice the number of bits of the data for an intermediate result if done the straightforward way.

It depends on the µC, but the speed of FP multiplications tends to be not that bad. Adding / subtracting tends to be a bit slower with FP in many cases. Fixed-point math may need additional headroom / extra bits. Chances are 64-bit integers are slower than 32-bit FP, and most C compilers don't support 40- or 48-bit integers. The bigger issue with FP is more the memory needed for the FP library. AFAIR for GCC-AVR this was something like 6 kBytes, and AFAIK printf would come extra, with quite a bit more memory and also a bit of a slowdown. There are extra conversion functions like some ftoa that can be quite a bit smaller. However, not all IDEs / libs include all the conversion functions. On the PC there usually is an utoa and itoa for integers, but not so much with µCs, and that brought up this whole topic.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #54 on: December 04, 2022, 09:44:52 pm »
Using  integer/fixed-point should be done knowing its limitations. Not just in terms of speed of course. Resolution (and its change with operations) and rounding errors. Overflow, underflow, signedness. Implications to loading/storing/addressing. Behaviour in various intricate algorithms (think FFT or DSP). Availability of derived types (e.g. complex) and operations on them on various hardware and compilers. Interactions with third party libraries, and providing libraries for third parties. Interactions with external hardware. Future-proofing. Portability. Documentability.

For the last decade or so I am trying to figure out a simple rule of the thumb to decide what data type to use. And the more I try the more confused I am. I wish it could be as simple as "unless you really need it".

JW
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #55 on: December 04, 2022, 09:52:52 pm »
Not using floating point is one of the many ill advised dogmas that surround embedded programming.

Not using FP in any context, unless you really need it, is a very reasonable approach actually. It does make even more sense on targets which do not have hardware support for FP, because it's pretty slow then, but speed matters only if uh, it matters. Depending on the target and constraints, speed may not be a factor. But in all cases (not restricted to "embedded"), using FP should be done knowing its limitations. Not just in terms of speed of course. Accuracy and rounding errors.
Accuracy and rounding errors affect any kind of computation you do on a computer and should be considered no matter what. So that is not a valid reason not to use floating point.

My point is that using fixed-point math is more often than not an unnecessary optimisation, leading to convoluted code and the use of weird units. Say you are developing a weighing scale that has a range of 1000 grams with a resolution of 0.1 gram. Any type that can store 0 to 10000 is enough. Now, do you want internal units of deci-micrograms, micrograms, or kilograms? With floating point you can make the base unit kilograms (a straight SI unit) and scale the ADC reading from the strain gauge accordingly. That makes such code much easier to maintain. In the past couple of years I have converted a couple of projects from fixed point to floating point because it led to code that is much easier to maintain.
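As a sketch of what that looks like (the calibration constant and names here are made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical scale: one calibration factor takes the strain-gauge ADC
   reading straight to kilograms. 1/65536 kg per count is a made-up value;
   in practice it would come from calibration. */
#define KG_PER_COUNT  (1.0f / 65536.0f)

static float counts_to_kg(int32_t counts)
{
    return (float)counts * KG_PER_COUNT;
}
```

Then printf("%.4f kg\n", counts_to_kg(reading)); keeps the whole code path in plain SI units.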
« Last Edit: December 04, 2022, 10:07:06 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #56 on: December 04, 2022, 10:09:13 pm »
Quote
range of 1000 grams with a resolution of 0.1 gram
 Now do you want to use internal units in deci-micrograms, micrograms or kilograms?

I see and partially agree (see my post above) with your point; but this particular example makes me ask: why would you want to use any of these in this particular case? Kilograms vaguely have some justification (as you've said, it's an SI unit, but how is that relevant to the internal representation?) but the other two do not - besides, both would require an unnecessarily wide type (neither fitting into 24 bits, and the former not fitting even into 32 bits).

JW
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #57 on: December 04, 2022, 10:15:15 pm »
Quote
range of 1000 grams with a resolution of 0.1 gram
 Now do you want to use internal units in deci-micrograms, micrograms or kilograms?

I see and partially agree (see my post above) with your point; but this particular example makes me to ask: why would you want to use any of these in this particular case? Kilograms vaguely have some justification (as you've said, it's an SI unit, but how is it relevant to the internal representation?) but the other two not - besides both would require an unnecessarily wide type (neither fitting into 24 bits, and the former not fitting even into 32 bits).

JW
IMHO IF you are going to use fixed point math, then it is nice to use units that can be expressed with SI prefixes like Mega, kilo, micro, nano, etc in order to have at least some anchor towards an SI unit and make code easier to understand. However, that leads to using unnecessary wide types indeed. Hence floating point is a better solution. It takes care of shifting the mantissa automatically for you. So the only thing you really need to worry about are resolution and rounding errors.
« Last Edit: December 04, 2022, 10:17:06 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #58 on: December 04, 2022, 11:01:54 pm »
Milli- (or in some languages, mili-) is an SI prefix, too.

Even deci- is an entirely compliant SI prefix, using decigrams in your case would reduce the required type width to 16-bit (14 in fact).

My real point is, that there's no one size fits all in microcontrollers.

JW
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3147
  • Country: ca
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #59 on: December 04, 2022, 11:11:31 pm »
Often you don't need to scale anything. ADC readings not always go to humans, but rather are used in control loop or in calculations, such as FFT. For such uses, ADC readings can be used directly, without scaling. Instead, you scale the coefficients. Or, if you use a table lookup (as you may do when you have non-linearities), you just fill in the table with units which are convenient to you - no scaling is necessary neither.

The need to scale arises mostly when you pass data to/from humans. Then you must use units familiar to the particular human - deci-pounds won't do you any good. But humans are slow, so you have plenty of time for conversion - doesn't need to be very efficient. You probably will have enough time and resources for doubles and printf() if you really love to bloat. But not many users will understand 2E-1 pounds, so you will have to use a fixed decimal point in your display. And for that, you need to figure out how many digits to show and where to put the decimal point anyway.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #60 on: December 04, 2022, 11:29:12 pm »
> deci-pounds won't do you any good

Of course not. Nobody in their right mind would use pounds in the 21st century.
;-)

Decipounds are absolutely okay to display, e.g. on a display with a literally fixed decimal point (i.e. where the decimal point is always lit to the left of the rightmost digit).

I still insist that there's no one size fits all, but thanks for bringing up another option to the bunch.

JW
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #61 on: December 05, 2022, 12:05:07 am »
Milli- (or in some languages, mili-) is an SI prefix, too.

Even deci- is an entirely compliant SI prefix, using decigrams in your case would reduce the required type width to 16-bit (14 in fact).
Why fixate on wanting the smallest size? That is only a concern when you are really tight on memory. Otherwise it is just another unnecessary optimisation with potential pitfalls for the next person working on the code.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #62 on: December 05, 2022, 12:52:52 am »
I never worked on a project, where spending memory generously would bring me any perceivable benefit. OTOH, too often a previously unexpected requirement to keep a log of last 1000 measured value or similar appears. My experience is rather limited, I admit.

> potential pitfalls for the next person working on the code.

There is no inherent pitfall in using *any* type - the "next person" needs to familiarize themselves with *whatever* type is used, and fully understand its limitations.

I also never came across a case where the "next person" wouldn't completely dismiss the "previous person's" design choices. It's just built into our DNA.

JW
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #63 on: December 05, 2022, 01:02:34 am »
I never worked on a project where spending memory generously would bring me any perceivable benefit. OTOH, too often a previously unexpected requirement appears, such as keeping a log of the last 1000 measured values. My experience is rather limited, I admit.

> potential pitfalls for the next person working on the code.

There is no inherent pitfall in using *any* type - the "next person" needs to familiarize themselves with *whatever* type is used, and fully understand its limitations.

I also never came across a case where the "next person" wouldn't completely dismiss the "previous person's" design choices. It's just built into our DNA.
I guess you never came across good software engineers then  >:D IMHO the art of writing good code (*) is that the next person working on your code doesn't want to throw it away, but actually wants to add to it and keep the basic structure 'as is' as much as possible. And that person adds code to such an extent that the next time you work on it, you wonder whether you wrote that code or somebody else did. That is the synergy I like to achieve when working with others to develop software, because it is a very efficient way of working: not forcing anything on anyone, but just setting up a good structure and writing code in clear sentences. I know for a fact that much of the code I have written has stayed in use years after I left the company (sometimes to my own surprise). People made changes to it as well, and yet I never got questions about how it worked.

* I'm very deliberately avoiding the word software here because that implies functionality which is a different subject.
« Last Edit: December 05, 2022, 01:22:06 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #64 on: December 05, 2022, 01:32:49 am »
As for binary to BCD, I have no clue how those C wizards do it.  In Assembly, there are several ways.  Double Dabble is among the slowest.  Up to 17 bits, there are several polynomial methods on PICList that are quite fast compared to 16-bit division on 8-bit horse carts.
If you have all relevant powers of ten –– 10, 100 for 8 bits; plus 1000, 10'000 for 16 bits; plus 100'000, 1'000'000, 10'000'000 for 24 bits; plus 100'000'000, 1'000'000'000 for 32 bits –– you can do repeated subtraction, starting at the largest one.  For 16 bits, you do a maximum of 5+9+9+9=32 subtractions (of a 16-bit quantity); for 32 bits, 4+8*9=76 subtractions (of a 32-bit quantity), although you can switch down to 24-bit and 16-bit at specific steps.  So, it isn't fast in the time complexity O() sense, but since each comparison and subtraction tends to be fast and the code compact, it is a viable approach.
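A minimal C sketch of the repeated-subtraction idea above, for 16-bit values (the function name and fixed five-digit output are my own choices, not from the post):

```c
#include <stdint.h>

/* Convert a 16-bit value to five decimal digits by repeated
   subtraction of powers of ten, largest first - no division needed. */
static void u16_to_dec(uint16_t v, char out[6])
{
    static const uint16_t pow10[5] = { 10000, 1000, 100, 10, 1 };
    for (int i = 0; i < 5; i++) {
        char digit = '0';
        while (v >= pow10[i]) {     /* a few cheap subtractions per digit */
            v -= pow10[i];
            digit++;
        }
        out[i] = digit;
    }
    out[5] = '\0';
}
```

Leading zeroes are kept here for simplicity; a real formatter would skip them.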

When you can treat the value as a fraction (i.e., all bits right of the decimal point), then repeated multiplication by ten ((x<<1)+(x<<3)) is quite efficient.  The problem on 8-bit machines is that for integer values, the most significant decimal digit tends to be difficult to extract.  For example, for 32 bits, you'd use 100'000'000 = 0b00000101'11110101'11100001'00000000 (since it is the largest power of ten that can handle all decimal digits; you'd use the subtraction for the larger one).  You can't just use the most significant byte, because 199'999'999 and 200'000'000 only differ in the least significant 10 bits: each extraction must look at the three most significant bytes, so it is quite slow.

For BCD, you can use a variant of the subtraction algorithm, that starts at subtractor 64, and halves it for each subtraction (by shifting it one bit right).  The constants needed are then 100 and 64 for 8-bit; 64, 6'400, 10'000 for 16-bit; 64, 6'400, 640'000, 16'000'000 for 24-bit; 64, 6'400, 640'000, 64'000'000, 3'200'000'000 for 32-bit.  Each decimal digit pair takes 5 steps, so a maximum of 25 steps for 32-bit.  Again, simple and relatively compact (although you do need space for those constants, they can be stored in ROM/Flash; and only the currently used one in RAM), but not the fastest known.

When building decimal number strings, the BCD approach produces pairs of decimal digits, 00..99 inclusive.  If you have 100 bytes of ROM/Flash available for lookup, just store the BCD encoded value; you can also use this to encode any two-digit integer to BCD.  Low nibble then gives the lower decimal digit, and upper nibble the upper decimal digit.
For 16-, 24-, and 32-bit conversions, you then need 100+2*4+2*3+2*2+1=119 bytes of ROM/Flash for the constants.

Since 100=64+32+4, 100*x=(x + x<<3 + x<<4)<<2, you can use the 100-byte BCD table to extract pairs of decimal digits from a fractional binary value by repeated multiplication by 100.  The decimal value (0..99) will be in the overflow byte, so converting it to a pair of digits via the BCD table is fast.

Standard library implementations (newlib, in particular) are not designed or intended to be fast; they are written to be correct.  For example, when converting a float or double to a string, if it is within certain bounds, multiplying it by a suitable power of ten (both integral and fractional powers!) depending on the desired output format, rounding to an integer, printing the integer value, and just inserting the decimal point according to the power-of-ten multiplier is often much, much faster than using printf() or snprintf().  It is only the larger magnitudes (absolute values) whose conversion to a decimal string is problematic, and much of the standard library implementation work goes into getting those right.  As discussed in another programming thread, I'm working on a solution intended for 32-bit architectures (ARM cores) that has a very limited memory footprint, and it already beats all standard library implementations I've seen, while being exactly correct.  My only problem is how to avoid having to store the larger positive powers of ten, because for double, there are over 300 of them.
 
The following users thanked this post: mycroft, RJSV

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #65 on: December 05, 2022, 07:47:08 am »
Don't use float "just because". Fixed point is trivially easy, but of course for a beginner everything is difficult at first. This is not a reason not to learn to use them.

Even on higher-performance CPUs like the Cortex-M4 or -M7, using floats in ISR context causes stacking of the FP registers, which doubles your interrupt latency; this is sometimes important and may be totally unexpected for a first-timer.

Single precision float does not have enough resolution for you to just ignore it and assume it's an "ideal" real number. And if you need to think about accuracy and resolution, fixed point is easier. double has enough accuracy for most practical purposes, so you can get quite far by ignoring the whole problem. But double is even slower and larger.

Fixed point arithmetic throws the whole resolution/accuracy/range thing right in your face. Because you deal with it manually, you are forced to see the loss of resolution in intermediate steps and so on. For example, when multiplying two 32-bit numbers together, it is obvious you will need a 64-bit intermediate result. With 32-bit floats, every step just silently loses data for you.
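As a hedged illustration of that 64-bit intermediate (the Q16.16 format and helper name here are my own, not from the post):

```c
#include <stdint.h>

typedef int32_t q16_16;   /* 16 integer bits, 16 fractional bits */

/* Multiplying two Q16.16 values produces a Q32.32 result, so a
   64-bit intermediate is required before renormalizing back. */
static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    int64_t wide = (int64_t)a * (int64_t)b;   /* full 64-bit product */
    return (q16_16)(wide >> 16);              /* truncate back to Q16.16 */
}
```

Doing the multiply directly in 32 bits would silently wrap for anything but tiny operands.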

Usually floats are fine, yes, but the claim that they are magically much easier is untrue. You just get a different set of footguns.

You really have to learn to understand both floating and fixed point, no way around it IMHO.
« Last Edit: December 05, 2022, 07:52:22 am by Siwastaja »
 
The following users thanked this post: elbucki

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #66 on: December 05, 2022, 10:21:12 am »
With fixed point, it is also important to remember you can have either a binary fraction (AKA binary fixed point), or a decimal fraction (AKA decimal fixed point).

In binary fixed point, FixedPoint(1.0) = 2^b, where b is the number of fractional bits.

In decimal fixed point, FixedPoint(1.0) = 10^d, where d is the number of fractional decimal digits.

Many programmers use decimal fixed point without realizing it.  For example, if your variable is in milliVolts, it is also in volts in decimal fixed point with d=3.  To convert it to string, you just convert the integral value to a decimal number as usual (with at least four digits), and just insert the decimal point between the third and fourth least significant decimal digits.
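For instance, the millivolts case might look like this in C (a sketch with my own function name, assuming a non-negative value):

```c
#include <stdio.h>
#include <stdint.h>

/* A value in mV is volts in decimal fixed point with d = 3:
   print the integer part, a dot, then the three fractional digits. */
static void mv_to_string(uint32_t mv, char *buf, size_t n)
{
    snprintf(buf, n, "%lu.%03lu V",
             (unsigned long)(mv / 1000), (unsigned long)(mv % 1000));
}
```

No floating point and no scaling arithmetic at all: the conversion is pure digit formatting.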

To convert binary fixed point to a string, you handle the integer bits and the fractional bits separately.  The integer bits form the integral part, and can be printed as is (except if you apply rounding, and the fractional part rounds to next integer; then you need to add one to the magnitude).
The fractional part is easiest to convert by multiplying by a suitable power of ten, so that the desired digits are in the integer part, and rounding is based on whatever is left in the fractional part.  (IEEE by default rounds floats to even: so, you only increment the integer part if the fractional part is greater than one half, or when it is exactly one half and the integer part is odd.)
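A sketch of that split for an unsigned Q16.16 value; for brevity this rounds ties upward rather than to even (the format choice and names are mine):

```c
#include <stdio.h>
#include <stdint.h>

/* Print UQ16.16 with four decimal places: scale the 16 fractional
   bits by 10000, add half an ULP of the output (32768 is 0.5 in
   the >>16 divide) to round, then carry into the integer part. */
static void print_uq16_16(uint32_t v, char *buf, size_t n)
{
    uint32_t ipart = v >> 16;
    uint32_t fdigits =
        (uint32_t)(((uint64_t)(v & 0xFFFF) * 10000 + 32768) >> 16);
    if (fdigits >= 10000) { ipart++; fdigits -= 10000; }  /* rounding carry */
    snprintf(buf, n, "%lu.%04lu",
             (unsigned long)ipart, (unsigned long)fdigits);
}
```

Note the carry branch: a fraction like 0.99998 must round all the way up into the integer part.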

Both have their benefits and downsides, and it pays to carefully consider exactly what you do with the variables.  Converting between decimal and binary fixed point boils down to multiplying by a factor of 2^b/10^d = 2^(b-d)/5^d or 10^d/2^b = 5^d/2^(b-d), which is usually less work than converting to a decimal string in between.

On 8-bit architectures, you can convert an 8-bit fraction to three decimal digits by multiplying by 125 (so you'll have a 15-bit intermediate value to work on), leaving you with 5 fractional bits to determine the rounding, because 1000/256=125/32.  If the five bits are 0b01111 or less, leave it be; if 0b10001 or more, increment the three decimal digits by 1; and if 0b10000, apply your rounding logic.  Trick on architectures without a fast multiplication and only a single-bit shifts and rotations: 125=128-2-1, so you only really need one (wide) shift left and one wide shift right, and two temporary values to compute the result.
For printing, just do repeated multiplications by ten, and extract each digit in descending order of importance from the upper byte (bits 8..11).
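The ×125 trick above might be sketched like this (function name mine; round-half-to-even applied to the 5 leftover bits, as described):

```c
#include <stdint.h>

/* Convert an 8-bit binary fraction (x/256) to three decimal digits
   (thousandths), using 1000/256 == 125/32: multiply by 125, keep a
   15-bit intermediate, and use the low 5 bits for rounding. */
static uint16_t frac8_to_milli(uint8_t x)
{
    uint16_t t = (uint16_t)x * 125;   /* at most 255*125 = 31875, 15 bits */
    uint16_t digits = t >> 5;         /* the three decimal digits, 0..996 */
    uint16_t rem = t & 31;            /* the 5 fractional bits */
    if (rem > 16 || (rem == 16 && (digits & 1)))
        digits++;                     /* round half to even */
    return digits;
}
```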
« Last Edit: December 05, 2022, 10:26:02 am by Nominal Animal »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #67 on: December 05, 2022, 04:17:22 pm »
Yes:
TLDR:
* decimal fixed point for human consumption (ease of printing, just insert dot character at a fixed point)
* binary fixed point for further computer consumption
 
The following users thanked this post: Nominal Animal

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #68 on: December 05, 2022, 06:14:01 pm »
I also never came across a case where the "next person" wouldn't completely dismiss the "previous person's" design choices. It's just built into our DNA.
I guess you never came across good software engineers then  >:D
That is painfully obvious.

So, what's the agreement on best practices in this thread, exactly?

>:D ++
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #69 on: December 05, 2022, 06:59:04 pm »
Single precision float does not have enough resolution for you to just ignore it and assume it's an "ideal" real number. And if you need to think about accuracy and resolution, fixed point is easier. double has enough accuracy for most practical purposes, so you can get quite far by ignoring the whole problem. But double is even slower and larger.

You really have to learn to understand both floating and fixed point, no way around it IMHO.

Float (32-bit) isn't very accurate, but it doesn't overflow easily and the accuracy isn't that bad (23 bits, or about 7 significant digits in base 10). It depends a lot on what you are doing. Floats may have problems with adding and subtracting when the numbers have different orders of magnitude, so you need to be careful not to lose the effect of the smaller one. Then again, multiplying and dividing are quite safe and not much is lost. Dividing is often faster than with equal-accuracy fixed point. And you may need some functions, like trigonometric ones, which only work with floating point in most libraries (sure, you can make your own with Taylor series etc.).

I always use double on actual computers. It solves almost all the problems of float, but is far from optimal for MCUs, and some MCU C compilers even downgrade double to 32 bits.

My MCU software has always had some physics involved, needing several equations with multiplication, division and more complex mathematical functions between variables that are unknown while coding. I find it much easier to work with float and have never had a problem with that. Even in cases needing very high accuracy, using a 24-bit ADC and a sensor giving 16 bits of real resolution, I have had no problems with accuracy after doing some calculations with floats.

I wouldn't use float for money or time etc., which are often big numbers where even the small differences are important. With most other physical quantities, you can just ignore the small values, since they are not meaningful for the data. E.g. if you are measuring 1 km you (usually) don't really care about each mm, while within a year you may care about each second or even ms.
« Last Edit: December 05, 2022, 08:05:27 pm by jmaja »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14490
  • Country: fr
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #70 on: December 05, 2022, 07:03:19 pm »
So, what's the agreement on best practices in this thread, exactly?

Using one's brain? :popcorn:
 
The following users thanked this post: Siwastaja, wek

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #71 on: December 05, 2022, 07:46:13 pm »
Depends a lot what you are doing.

That's exactly my point: with float you don't get that "don't think about it at all" luxury. It almost feels like it, and it usually works out, but at some point you hit a case where the accuracy isn't quite enough, when inaccuracies accumulate along the way. With a double, you can pretty much consider it a Matlab-like ideal real number, unless you are doing something super fancy like Nominal Animal's atomic/universe-scale physical simulations. You see, I either want to prove the fitness of a solution, or, when handwaving, I want to leave a lot of margin for error, and float does not have much. Think about it: if you had a 30-bit float with 2 bits less accuracy, you would start truly hitting those corner cases.

I think this is also why double is the standard floating point type in C and C++, despite the age of the languages; literals like 10.0 are of type double, and math.h library functions like sin() take and return doubles. (And one needs to be careful to explicitly use float when doing float stuff: 10.0f, sinf()...)
« Last Edit: December 05, 2022, 07:50:11 pm by Siwastaja »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #72 on: December 05, 2022, 07:47:52 pm »
So, what's the agreement on best practices in this thread, exactly?

Using one's brain? :popcorn:

No, avoid everything if you have ever seen someone making a mistake with something!
 

Online DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5914
  • Country: es
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #73 on: December 05, 2022, 08:37:21 pm »
You don't need floats at all, fixed point can provide quite good precision:
uint32_t mV_x100 = ADC*1000000UL/4095;

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)

You can make some hacks to avoid division (usually pretty slow), losing some accuracy at the microvolt level, but overall giving the same results:
uint32_t mV_x100 = (ADC*1000244UL)>>12;

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 
The following users thanked this post: MK14

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3480
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #74 on: December 05, 2022, 08:49:51 pm »
You don't need floats at all, fixed point can provide quite good precision:

Isn't that like suggesting metric is more precise than imperial?  They provide exactly the same precision if set up comparably. ;)
 

Online uer166

  • Frequent Contributor
  • **
  • Posts: 893
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #75 on: December 05, 2022, 09:19:06 pm »
That's exactly my point, with float you don't get that "don't think about it at all" luxury.

Eh, float is nice since it's atomic on ARM, easily formatted, makes code readable, is very easy to printf/format for output, and puts everything in SI units by definition. I admit it's less accurate than fixed point, but, say, for a DSMPS, the analog and ADC accuracy (say 11 bits ENOB best case) means it's not even close to being the limiting factor. I'll gladly take the register push/pop penalty on ISR entry.

I usually default to converting all ADC values to SI units immediately and work on those instead. The only problem I've found is, of course, division by small numbers and ending up with NaN or inf, but the same problem exists in fixed point just as much, so it's best to avoid division by a variable altogether.
 

Online uer166

  • Frequent Contributor
  • **
  • Posts: 893
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #76 on: December 05, 2022, 09:24:15 pm »
You don't need floats at all, fixed point can provide quite good precision:

Isn't that like suggesting metric is more precise than imperial?  They provide exactly the same precision if set up comparably. ;)

Shouldn't be the case: 23 bits of effective resolution for float vs 30 bits for fixed point. Fixed should definitely be more precise, all else being equal.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 495
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #77 on: December 05, 2022, 09:26:10 pm »
I think this is also why double is the standard floating point type in C and C++, despite the age of the languages;
Contrary - it's *because* of the age of C. While it was developed on the PDP-11, Thompson and Ritchie, having worked on the Multics project before, must have been painfully aware of the fact that it was one of the less capable computers around.

The 600 series used 36-bit words and 18-bit addresses. They had two 36-bit accumulators, eight 18-bit index registers, and one 8-bit exponent register. It supported floating point in both 36-bit single-precision and 2 x 36-bit double precision, the exponent being stored separately, allowing up to 71 bits of precision and one bit being used for the sign.
Wide floating points were available in hardware since relatively early computers, as their primary purpose was to compute ballistics and similar stuff quickly.

Minicomputers and microprocessors were in fact a significant regression in this regard, momentum regained only after some 20-30 years (x87 etc).

JW
 

Online DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5914
  • Country: es
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #78 on: December 05, 2022, 09:29:49 pm »
Isn't that like suggesting metric is more precise that imperial?  They provide exactly the same precision if set up comparably. ;)
Don't you know the difference between a 32bit float and fixed point?
Integers have a range of 2^bits, but floats use a mantissa and an exponent: they have a range of something like ±1E38 with ~7 digits of decimal precision, and doubles go to about ±1E308 with ~16.

It's mostly the range that makes floats great.
With fixed point, you must scale the data in a static manner so it can cover the whole required range: the higher the range, the lower the precision.
"Single" floats lose precision but can store really large/small numbers.
« Last Edit: December 05, 2022, 09:36:31 pm by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3480
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #79 on: December 05, 2022, 10:12:44 pm »
Isn't 4 byte (32 bit), 7 decimal precision?  On the other hand, if I enter in your float for 8-bit MCU's, "3/pi," what is the answer?
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3643
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #80 on: December 05, 2022, 10:37:30 pm »
Isn't 4 byte (32 bit), 7 decimal precision?  On the other hand, if I enter in your float for 8-bit MCU's, "3/pi," what is the answer?
No, a signed 32-bit integer has \$ \log_{10} (2^{31}-1) \approxeq 9.3 \$ decimal places of precision, all inclusive.
Any fixed-point format defines how much of that is allocated to the fraction part.
So you can define fixed-point formats with ~7 decimal places (or anything up to ~9) after the point.

The key advantage of fixed-point is that for numbers in a known range, you know the precision. Mantissa+exponent formats have a (mostly, modulo denormals) consistent precision in scientific notation, but not in terms of actual reals. The tradeoff is that with an exponent you can approximate a much larger range of reals.
« Last Edit: December 05, 2022, 10:41:47 pm by helius »
 
The following users thanked this post: MK14

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #81 on: December 05, 2022, 10:46:53 pm »
Isn't 4 byte (32 bit), 7 decimal precision?  On the other hand, if I enter in your float for 8-bit MCU's, "3/pi," what is the answer?
No, a signed 32-bit integer has \$ \log_{10} (2^{31}-1) \approxeq 9.3 \$ decimal places of precision, all inclusive.
Any fixed-point format defines how much of that is allocated to the fraction part.
So you can define fixed-point formats with ~7 decimal places (or anything up to ~9) after the point.

The key advantage of fixed-point is that for numbers in a known range, you know the precision. Mantissa+exponent formats have a (mostly, modulo denormals) consistent precision in scientific notation, but not in terms of actual reals. The tradeoff is that with an exponent you can approximate a much larger range of reals.
Yes and no. In both cases you can express the precision as the number of bits used for representing a number, with the advantage for floating point that you can use the exponent to scale the number into useful units.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #82 on: December 06, 2022, 03:44:35 am »
Isn't 4 byte (32 bit), 7 decimal precision?  On the other hand, if I enter in your float for 8-bit MCU's, "3/pi," what is the answer?
IEEE 754 Binary32, which most float implementations nowadays are, have 24 bits of precision in their mantissa, and \$\log_{10}(2^{24}) \approx 7.2\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.

A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.

On 32-bit machines, as I've described elsewhere, the UQ4.28 format (unsigned 32-bit ints with 4 integral bits and 28 fractional bits) is particularly useful.  The 28 bits yield \$\log_{10}(2^{28}) \approx 8.4\$ digits of decimal precision, but these can be chained together easily into BigNums, using only 32-bit math.  It is a perfect format for converting higher precision fixed-point and floating-point formats to decimal and vice versa on architectures with 32-bit multiplication (32×32=32) and division (32/32=32).  In the middle limbs, the integral part is left unused, and will be used for carry handling during operations.
« Last Edit: December 06, 2022, 05:38:10 am by Nominal Animal »
 
The following users thanked this post: SiliconWizard

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #83 on: December 06, 2022, 04:10:28 am »
IEEE 754 Binary32, which most float implementations nowadays are, have 23 bits of precision in their mantissa, and \$\log_{10}(2^{23}) \approx 6.9\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.

For floats (32 bits), IEEE754.
Surely you mean \$\log_{10}(2^{24}) \$
Because there are 23 STORED bits, which makes 24 bits in total, because there is always (with rare exceptions) a leading binary 1 digit.  That bit is NOT stored in IEEE 754 floats, but is in some floating point formats, such as the x87's 80-bit (or 128 bits in memory) extended floating point.
« Last Edit: December 06, 2022, 04:12:46 am by MK14 »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #84 on: December 06, 2022, 05:44:11 am »
IEEE 754 Binary32, which most float implementations nowadays are, have 23 bits of precision in their mantissa, and \$\log_{10}(2^{23}) \approx 6.9\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.
For floats (32 bits), IEEE754.
Surely you mean \$\log_{10}(2^{24}) \$
Yes, of course.  The mantissa has 24 bits of precision, even though the most significant one is implicit.
Similarly, double (IEEE 754 Binary64) has 53 bits of precision, with the most significant one implicit.

(Guess why I made the error.  I always get bitten in the butt when I don't verify:palm:
 And yes, a bit of code I just wrote assumed double had 54 bits of precision in the mantissa. ::))
 
The following users thanked this post: MK14

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #85 on: December 06, 2022, 05:52:54 am »
A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.
Yes, fixed point can represent numbers with this accuracy, but you can't keep that accuracy through most mathematical operations without using more bits. The clearest example being division.

Say you want to calculate speed. You have U8.24 for both meters and seconds, nicely represented with about 7 decimals. How accurately would you get m/s? With floating point you would still have the same roughly 7 significant digits no matter what the values of distance and time were.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #86 on: December 06, 2022, 06:45:12 am »
A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.
Yes fixed point can represent numbers with this accuracy, but you can't keep that accuracy without using more bits while doing most mathematical functions. Clearest example being division.
True, except I'd argue the clearest example is multiplication.

TL;DR: Even floating-point arithmetic is really affected, but the implementations deal with it internally, by rounding the high-precision intermediate results to the floating-point type.

When you multiply an A-bit value with a B-bit value, the result has A+B bits, arithmetically speaking.  In other words, for multiplication, you need a temporary value that is as wide in bits as the sum of the widths of the multiplicands.

For Q8.24, the temporary result has 16 integer bits (including sign bit), and 48 fractional bits.

This is part and parcel of both integer and fixed point arithmetic: an unavoidable fact.

Floating-point arithmetic works around this by using a mantissa-exponent format to describe each value, v = m·B^x, where m is the mantissa (left-aligned, so without superfluous leading zeroes), x is the exponent, and B is the radix (2 for binary floating-point formats, 10 for decimal floating-point formats).
The product of two such values has a double-wide mantissa, which is immediately (except in cases like fused multiply-add) rounded to the normal size; and the exponent is the sum of the factors' exponents.

So, in a way, even floating-point arithmetic does not completely avoid this issue, especially because IEEE 754 requires exact correct rounding; it's just that the floating-point arithmetic implementations take care of it internally.

Addition and subtraction between two values can only underflow or overflow by one bit.  But division is complicated, because many rational values cannot be expressed exactly in binary.  The most common example is 0.1 = 1/10, which in binary is 0b0.000110011... and cannot be exactly represented as a finite binary value.  So, some kind of rounding is needed.  Arithmetically, we usually implement integer division in terms of quotient and remainder, i.e. v / d = n with remainder r, such that d·n+r = v (and the remainder r is crucial when implementing BigNum division, division for arithmetic types larger than the native register size), and 0 ≤ abs(r) < abs(d), and preferably (but not necessarily) r and n having the same sign.  With floating point division, n will be rounded to the stated precision, but be correct to within half a unit in the least significant place (ULP), i.e. within half a mantissa bit for IEEE 754 Binary32 (float) and Binary64 (double).

It is this rounding why one should not use exact comparison between floating-point values, but a range instead, i.e. abs(a-b) ≤ eps, where eps is the largest value with respect to a and b that one considers "zero"; small enough to ignore.
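A trivial illustration of such a tolerance comparison (the eps choice is, as noted, context-dependent; the helper name is mine):

```c
#include <math.h>

/* Compare two doubles within a tolerance instead of exactly:
   the classic example is 0.1 + 0.2 != 0.3 in IEEE 754 doubles. */
static int nearly_equal(double a, double b, double eps)
{
    return fabs(a - b) <= eps;
}
```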
« Last Edit: December 06, 2022, 11:45:47 am by Nominal Animal »
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #87 on: December 06, 2022, 07:24:00 am »
You don't need floats at all, fixed point can provide quite good precision:
uint32_t mV_x100 = ADC*1000000/4095

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)

You can use some hacks to avoid the division (usually pretty slow), losing some accuracy at the microvolt level but overall giving the same results:
uint32_t mV_x100 = (ADC*1000244UL)>>12;

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)

When using scaled fixed point it is very important to check your ranges and limits closely....

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[]) {
  int32_t adc = 2813;
  uint32_t mV_x100 = (uint32_t)adc*1000000UL/4095;
  printf("ADC = %d : %u  (%7.2fmV)\n", adc, mV_x100, mV_x100*0.01);
  return 0;
}
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #88 on: December 06, 2022, 07:32:35 am »
What's the benefit of fixed point in real life, if you don't insist on keeping count of every cent or whatever you are calculating?

Comparing 32-bit floating point with 32-bit fixed point, the fixed-point version probably needs more memory for the variables, since it needs 64-bit intermediates.

How much is the speed difference on different MCUs? Fixed point will be much faster for add and subtract. How about other mathematical operations? At least divide will be much faster with floats, if you need to use 64 bits for fixed.

Are there any real-life examples of the same problem done with the same accuracy (significant digits, not decimals or bits)?
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3643
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #89 on: December 06, 2022, 07:40:45 am »
Arithmetically, we usually implement integer division in terms of division and modulus, i.e. v / d = n with remainder r, such that d·n+r = v (and the remainder r is crucial when implementing BigNum division, division for arithmetic types larger than the native register size), and 0 ≤ abs(r) < abs(n), and preferably (but not necessarily) r and n having the same sign.
I believe you mean that 0 ≤ abs(r) < abs(d). But otherwise what you wrote is true.
 
The following users thanked this post: Nominal Animal

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #90 on: December 06, 2022, 08:59:35 am »
Fixed point is great when the range of numbers is trivial to see, and for linear response systems like SAR ADCs. When you don't need to waste extra bits for uncertainties about the range, you get significantly more precision than with floats.

For example, with an ADC you know by design it can only output numbers from 0 .. 4095, and when you multiply it by some calibration factor, you can just decide this factor is going to be less than 2^32/4096 = 1048576, and then you know the maximum possible result is 4095 * 1048576 = 4293918720, which fits in uint32, with no bits wasted. It's really as simple as that.

float wastes bits on the ability to represent a massive range of numbers, which is not needed at all in typical ADC/DAC DAQ systems. But of course, combining accuracy and range (making automagic compromises between them) allows the developer to be lazy. And for the lazy, I really, really recommend using double: float is almost certainly good enough for most jobs, but only just barely.
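A sketch of that range reasoning in C (the Q12.20 gain format, the 0.98 example gain, and the function name are illustrative, not from the post):

```c
#include <stdint.h>

/* Scale a 12-bit ADC reading (0..4095) by a fixed-point calibration
   gain stored as gain_q20 / 2^20.  Keeping gain_q20 below
   2^32 / 4096 = 1048576 (i.e. gain < 1.0 in this format) guarantees
   adc * gain_q20 fits in uint32_t with no bits wasted. */
static uint32_t apply_gain(uint16_t adc, uint32_t gain_q20)
{
    return ((uint32_t)adc * gain_q20) >> 20;   /* back to ADC counts */
}
```

For example, a gain of 0.98 is gain_q20 = 1027604, and `apply_gain(4095, 1027604)` yields 4013 (4095 × 0.98 ≈ 4013.1).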
« Last Edit: December 06, 2022, 09:02:52 am by Siwastaja »
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #91 on: December 06, 2022, 09:34:21 am »
Fixed point is great when the range of numbers is trivial to see, and for linear response systems like SAR ADCs. When you don't need to waste extra bits for uncertainties about the range, you get significantly more precision than with floats.

For example, with an ADC you know by design it can only output numbers from 0 .. 4095, and when you multiply it by some calibration factor, you can just decide this factor is going to be less than 2^32/4096 = 1048576, and then you know the maximum possible result is 4095 * 1048576 = 4293918720, which fits in uint32, with no bits wasted. It's really as simple as that.
Sure, but then you aren't really doing much with the value you just measured. If all you need to do is use some IFs etc. or display the value for the user, then all is fine. But then again, that doesn't sound like a CPU-intensive job, and thus nothing is lost with floats. You certainly get the full real accuracy of most ADCs with 32-bit floats, not to mention a 0...4095 12-bit ADC.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14219
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #92 on: December 06, 2022, 09:44:49 am »
For most simple cases, both fixed-point / integer calculation and FP calculation work. The main cost of FP is usually the extra code needed for the FP library. This can be an issue with a small µC (e.g. < 32 kB of flash). The speed of the computation is rarely an issue, as the decimal format is mainly used for output to humans, and there the speed is limited anyway. If speed is critical, one can write binary data to a file / memory card.

With some slightly more complex operations the resolution can already become an issue, like calculating the standard deviation with the brute-force formula: 12-bit values squared test the limits of SP floats, and if many are summed up even 32-bit integers can reach their limit. One may still need floats at the end, as one may need the square root.
Doing the math in fixed point makes one more aware of the limitations; with floating point one gets rounding errors, which may not be that bad, but when they are, they are trickier to find than overflow problems.
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4795
  • Country: pm
  • It's important to try new things..
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #93 on: December 06, 2022, 10:04:03 am »
SW floating-point libs are small, like 3.5 kB for single precision on an 8-bitter like a PIC16. Double-precision libs are usually 2x larger and the computation is 2x slower. I remember using a single-precision software floating-point lib around 2000 on a PIC16F88 (the free cc5x compiler had a 32-bit FP lib at that time).
PS: the only issue you may face with 32-bit FP is that it is less precise than a 32-bit integer: at most about 7 significant digits with +-*/, no more..
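That precision limit is easy to demonstrate: a 32-bit float has a 24-bit significand, so beyond 2^24 = 16777216 it can no longer tell consecutive integers apart (roughly 7 significant decimal digits). A minimal check (the function name is mine, for illustration):

```c
#include <stdint.h>

/* Returns 1 if float can still tell n and n+1 apart.  A float's
   24-bit significand represents every integer up to 2^24 exactly;
   above that, consecutive integers start to collapse together. */
static int float_distinguishes(uint32_t n)
{
    return (float)n != (float)(n + 1u);
}
```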
« Last Edit: December 06, 2022, 10:16:33 am by imo »
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #94 on: December 06, 2022, 10:36:06 am »
With some slightly more complex operations the resolution can already become an issue, like calculating the standard deviation with the brute-force formula: 12-bit values squared test the limits of SP floats, and if many are summed up even 32-bit integers can reach their limit. One may still need floats at the end, as one may need the square root.
How would that work with fixed point? How would you know in advance how large the standard deviation will be? That affects how you should do it, unless you do everything in 64-bit to be certain not to overflow and still get some resolution. The standard deviation can be a small fraction of the average or much more than the average. With the former you are just summing zeros, and the latter would easily cause an overflow if you are using only 32-bit variables.

I don't see much problem with this for 32-bit float unless you are using a very large number of data points. Even with, say, 100 000 data points you would get the standard deviation close enough. After all, it's just a statistical value, and there is no need to get more than 1-3 significant digits correct. And it's quite trivial to add accuracy by summing in pieces: say, after every 1000 or 100 values, start from zero and add the current sum to another variable.

About the only pitfalls with 32-bit floats are adding/subtracting values of clearly different orders of magnitude (only where this actually matters), that zero might not be exactly zero (but very small), and that you may overflow with divisions etc.

Fixed point has all the same problems if used with mathematical functions and without 64 bits for many operations. And it can have many more.
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4795
  • Country: pm
  • It's important to try new things..
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #95 on: December 06, 2022, 11:29:40 am »
Messing with 12 bits and SP FP cannot be a big problem, imho, incl. filters, stddev, etc. I did SP FP math with a 24-bit ADC on an ADuC 8-bitter (80C52) and it worked. What I did was make all calculations in integer where possible, and use SP FP only at the very end of the calcs, like with gain, scale, stddev, EMA filters, ppm, etc. You need to play with it for a while and watch carefully what you do. You can get 1 µV resolution printed out, sure.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14219
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #96 on: December 06, 2022, 12:18:10 pm »
The simple formula for the std dev uses the sum of squared values minus the square of the average. With values consistently near the maximum, one has the problem of taking the difference of nearly equal-sized numbers. So quite a lot of bits are lost, and with low noise the last bits are what matters.  SP FP can already start to round with the sum of two 12-bit numbers squared.
Adding many similar numbers can then also lead to repeated rounding in the same direction.

32-bit integers have 7 or 8 bits of extra headroom, and one is generally more aware that there are limits. With integers it is reasonably easy to mix 32-bit and 64-bit numbers: the sum could be 64-bit (if needed) and the rest 32-bit or even less. The integer conversion between different resolutions is usually easy and fast. With floating-point variables, mixing SP and doubles adds quite some overhead, both in code length and run-time.

For the std dev example one can do things more intelligently (e.g. subtract an approximate average first), if one is aware of the possible problems.
There are other cases too that are numerically unfavorable; some are easy to circumvent like the std dev example, others are not that easy, though those often involve non-integer operations like square roots or log, so something one would still do with floating point. The point is that using FP does not allow one to be too sloppy.  Bugs due to rounding errors are hard-to-find ones; overflow errors are more drastic and easier to find (though sometimes the hard way, as with the Ariane rocket that crashed on its first flight because of an overflow in integer / fixed-point math).


Most data also starts as integer, so one has at least the conversion from integer to float as a first step.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #97 on: December 06, 2022, 04:21:26 pm »
SW floating point libs are small, like 3.5kB single precision
"Small" is so relative. I think that's pretty huge, for such obvious feature (basic arithmetics).There still are small budget microcontrollers which have something like 8-16KB of flash. These are used in cost-sensitive projects so considerable effort may be already put making the application fit. 3.5KB is a large percentage of this!

And we are not even talking about AVRs and PICs, STM32F030 value line starts at 16KB of flash.

On 8-bit CPUs, 16-bit fixed point arithmetic saves registers, RAM and processing time. Half-precision (16-bit) float, while used in 3D graphics for non-critical visual things, would be nearly useless, but the same 16 bits offer much more precision in fixed point (with carefully considered ranges, of course).

But if you have some Cortex-M4 which today does not cost more than $2... this does not matter. Especially with the ones with double precision hardware floats, then the only reason to use fixed point would be some super timing critical (or frequently repeating) interrupt handler where you would want to avoid the stacking of the gazillion FP registers. I mean, if you run the CPU at say 100MHz and fire interrupts at 1MHz, just FP stacking and unstacking is significant CPU load %. Even then, you can use FP in the project, not just within the ISR.
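A hypothetical Q8.8 format illustrates the 16-bit fixed-point point above (the type, macro, and function names are mine, for illustration):

```c
#include <stdint.h>

/* Q8.8: 8 integer bits, 8 fractional bits in an int16_t.
   Multiplication needs a 16x16 -> 32-bit intermediate, then a
   shift to drop the extra 8 fractional bits. */
typedef int16_t q8_8;

#define Q8_8(x) ((q8_8)((x) * 256))   /* constant conversion */

static q8_8 q8_8_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int32_t)a * b) >> 8);
}

static q8_8 q8_8_add(q8_8 a, q8_8 b)
{
    return (q8_8)(a + b);   /* caller must ensure no overflow */
}
```

On an 8-bit CPU these operations stay within two or four registers, which is the saving the post is talking about.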
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #98 on: December 06, 2022, 07:28:18 pm »
The simple formula for the std dev uses the sum of squared values minus the square of the average. With values consistently near the maximum, one has the problem of taking the difference of nearly equal-sized numbers. So quite a lot of bits are lost, and with low noise the last bits are what matters.  SP FP can already start to round with the sum of two 12-bit numbers squared.
Adding many similar numbers can then also lead to repeated rounding in the same direction.
I did some tests on a simulator using an AVR XMega and later on a PC. There are at least two simple ways to calculate the standard deviation: the two-pass formula

σ = sqrt( Σ (x_i − mean)² / N )

and the single-pass formula

σ = sqrt( Σ x_i² / N − mean² )

I calculated these for an array of 100 floats. It took (without sqrt) 57 000 vs 41 000 cycles, thus less than 15 or 2 ms at 4 or 32 MHz. The latter was faster, but clearly less accurate. The values I used were 1 + i·10^n. The accurate value (with sqrt) is 2.886607e-m, where m depends on n. With n>=-1 (until overflow) you get exactly the same result using both formulas with float or double. At n=-2 the latter gives 2.886605e-01. At n=-3 the latter starts to fail (2.886267e-02) and even more at n=-4 (2.883754e-03). At n=-5 the latter is about useless (3.906250e-04). The former is still quite accurate at n=-6 (2.886695e-05) and still OK at n=-7 (2.900095e-06). At n=-9 even double becomes about useless with the latter formula (3.371748e-08), while the former works with double up to about n=-15 (2.884401e-14).

At n=-7 the standard deviation is at about 18 bits. I have worked with an ADC having about a 22-bit noise margin, but not with anything measurable at that accuracy (except GND and ref). One sensor I use reaches about a 15-bit noise margin.

So use the former equation if you expect a small standard deviation compared to the average. If you expect the difference to be more than 7 orders of magnitude, you should use double or be more careful about how you add.

How do you get better accuracy with fixed point? Maybe I'll try.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14490
  • Country: fr
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #99 on: December 06, 2022, 08:37:56 pm »
All this is pretty nice, but learn some arithmetic if you're not comfortable with it. It'll make you a much better programmer, I promise.

I've seen many programmers use FP just because they are not very comfortable with arithmetic and thus mean to use a programming language as a glorified pocket calculator. Which was my main point.

But there sure are uses for FP, and uses for integer and fixed point. If you know why you use one or the other, with good technical reasons, then it's engineering. If you use one almost exclusively because you are more comfortable with it, then it becomes a silver bullet, and it ends up equating to throwing things against the wall until they stick.
 
The following users thanked this post: Siwastaja

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3147
  • Country: ca
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #100 on: December 06, 2022, 09:06:16 pm »
I did some tests on a simulator using an AVR XMega and later on a PC. There are at least two simple ways to calculate the standard deviation.
There's no need to experiment. It all can be derived mathematically.

For example, for a 12-bit ADC, to calculate the sum of 100 squares exactly (without any error), you need 12 + 12 + 7 = 31 bits. So 32-bit integers can calculate it exactly, floats cannot, doubles can.

For a 24-bit ADC you need 24 + 24 + 7 = 55 bits. So, 32-bit integers will not work, but 64-bit integers are Ok. Floats are not Ok. Doubles may have a very small error.

That's all there is to it.
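The bit budget can be computed mechanically; a small helper (illustrative, not from the post):

```c
#include <stdint.h>

/* Bits needed to hold the exact sum of n squared b-bit samples
   (n >= 1): 2*b bits per square plus ceil(log2(n)) for the sum. */
static unsigned sum_of_squares_bits(unsigned b, unsigned n)
{
    unsigned log2n = 0;
    while ((1u << log2n) < n)   /* ceil(log2(n)) */
        log2n++;
    return 2u * b + log2n;
}
```

For 100 samples of a 12-bit ADC this gives 12 + 12 + 7 = 31 bits; the worst case 100 × 4095² = 1 676 902 500 indeed fits below 2^31.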
 
The following users thanked this post: SiliconWizard

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #101 on: December 06, 2022, 09:34:16 pm »
I did some tests with 32- and 64-bit unsigned integers. They seem to work a bit differently on the PC and in the Xmega simulator. I took the opposite approach without much thinking, setting the values as 10^n + i; thus n=7 is about equal to n=-7 in the FP test.

I didn't do the final scaling (divide by 100 and by the scale factor) nor the sqrt in the Xmega simulator, and on the PC I did those with double. Otherwise I would have lost accuracy.

In this case the latter standard-deviation calculation method gave better results, but that may also be due to some non-optimal calculation order. With n=2 (1.01, 1.02 etc.) I got quite good results, but the former method gives 2.887040, thus only 4 correct digits. At n=3, 32-bit with the former method failed badly. The other method worked fine at n=7, but at n=8, 64-bit with the former method failed badly and 32-bit with the latter even worse.

I tested only the latter method on the Xmega. With 64-bit it was spot on (unlike on the PC) up to n=7 and failed badly at n=8. With 32-bit it was spot on at n=2 but then failed totally at n=3.

32-bit used about 10 000 cycles and was 4 times faster than 32-bit float. 64-bit used about 60 000 cycles and was 50% slower than 32-bit FP, without improving accuracy in my setup.

Most likely there is room for improvement in my quick-and-dirty test. I didn't even try to optimize accuracy or speed.

 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #102 on: December 06, 2022, 09:43:09 pm »
There's no need to experiment. It all can be derived mathematically.

For example, for a 12-bit ADC, to calculate the sum of 100 squares exactly (without any error), you need 12 + 12 + 7 = 31 bits. So 32-bit integers can calculate it exactly, floats cannot, doubles can.
Of course you can derive that mathematically. I was also interested in how many cycles are needed; that's easier to test. And I was interested in what you can achieve without much thought for every operation, in a case where you don't really know how many bits you have in the first place. Most often there is no need to calculate exactly, even when using fixed point.
 

Online DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5914
  • Country: es
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #103 on: December 07, 2022, 12:22:24 am »
Why do all the programming threads end with some (apparently bored) tech guys kidnapping them and going mental?
And after all that, you didn't help the OP at all. :popcorn:
The OP asked for something simple: how to scale/convert an ADC value into a voltage and output it as a string, avoiding floats if possible.

This is trivial, not the Large Hadron Collider: no complex scientific operations, just a damn multiplication and a division. Are you going to make 200 pages of nonsense overthinking?

Code: [Select]
void adc_to_str(uint16_t adc, char *out){           // Ex. adc = 3192
    const char *zero = "0.00mV";                    // For case adc = 0
    uint32_t val;
    uint8_t len=0;
    if (adc==0){
        do{
            *out++ = *zero;
        }while(*zero++);                            // Output: "0.00mV"
        return;
    }
    else{                                           // Scale adc value
        val = ((uint32_t)adc*1000244ul)+2048>>12;   // Fastest conversion
        //val = ((uint32_t)adc*1000000ul)/4095;     // Slower
    }                                               // val = 779487
    while(val){
        out[len++] = (val % 10) + '0';              // Output ascii numbers
        val /= 10;
    }                                               // Output is reversed: "784977"
    len--;
    for(uint8_t i=0; i<=(len/2); i++){              // Reverse string
        char c=out[i];
        out[i]=out[len-i];
        out[len-i]=c;
    }                                               // "779487"
    len++;
    out[len] = out[len-1];                          // Add decimal point, move last 2 digits to the right: "7794877"
    out[len-1] = out[len-2];                        // "7794887"
    out[len-2] = '.';                               // "7794.87"
    len++;
    out[len++] = 'm';                               // "7794.87m"
    out[len++] = 'V';                               // "7794.87mV"
    out[len] = '\0';                                // "7794.87mV" with NULL string termination (Finished)
}

Demo here: https://www.jdoodle.com/ia/AJc

Code: [Select]
adc: 0000    Output: 0.00mV
adc: 0512    Output: 1250.31mV
adc: 2048    Output: 5001.22mV
adc: 3192    Output: 7794.87mV
adc: 4095    Output: 10000.00mV
« Last Edit: December 07, 2022, 02:03:56 am by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #104 on: December 07, 2022, 01:05:21 am »
Why do all the programming threads end with some (apparently bored) tech guys kidnapping them and going mental?
And after all that, you didn't help the OP at all. :popcorn:
The OP asked for something simple: how to scale/convert an ADC value into a voltage and output it as a string, avoiding floats if possible.

This is trivial, not the Large Hadron Collider: no complex scientific operations, just a damn multiplication and a division. Are you going to make 200 pages of nonsense overthinking?

Code: [Select]
void adc_to_str(uint16_t adc, char *out){           // Ex. adc = 3192
    const char *zero = "0.00mV";                    // For case adc = 0
    uint32_t val;
    uint8_t len=0;
    if (adc==0){
        do{
            *out++ = *zero;
        }while(*zero++);                            // Output: "0.00mV"
        return;
    }
    else{                                           // Scale adc value
        val = ((uint32_t)adc*1000244ul)+2048>>12;   // Fastest conversion
        //val = ((uint32_t)adc*1000000ul)/4095;     // Slower
    }                                               // val = 779487
    while(val){
        out[len++] = (val % 10) + 0x30;             // Output ascii numbers
        val /= 10;
    }                                               // Output is reversed: "784977"
    len--;
    for(uint8_t i=0; i<=(len/2); i++){              // Reverse string
        char c=out[i];
        out[i]=out[len-i];
        out[len-i]=c;
    }                                               // "779487"
    len++;
    out[len] = out[len-1];                          // Add decimal point, move last 2 digits to the right: "7794877"
    out[len-1] = out[len-2];                        // "7794887"
    out[len-2] = '.';                               // "7794.87"
    len++;
    out[len++] = 'm';                               // "7794.87m"
    out[len++] = 'V';                               // "7794.87mV"
    out[len] = '\0';                                // "7794.87mV" with NULL string termination (Finished)
}

Demo here: https://www.jdoodle.com/ia/AJc

Code: [Select]
adc: 0000    Output: 0.00mV
adc: 0512    Output: 1250.31mV
adc: 2048    Output: 5001.22mV
adc: 3192    Output: 7794.87mV
adc: 4095    Output: 10000.00mV


I've tried warning people earlier. But this thread was started around 6.5 years ago (a somewhat big necro), and it was (arguably) successfully answered around that same time. The OP doesn't seem to have logged in for around the last 4.5 years.

Good job on the runnable playground program (I tried it).  :)

EDIT:  Maybe the system should be configured to warn not just the first person who creates a significant necro, to avoid possible mistakes like this.

Also, perhaps the system should more forcibly lock/hinder the necroing of threads, especially by brand-new one-post users (usually/often SPAM BOTS), or especially foolish users, in some cases.
« Last Edit: December 07, 2022, 01:17:08 am by MK14 »
 
The following users thanked this post: DavidAlfa

Online DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5914
  • Country: es
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #105 on: December 07, 2022, 02:01:34 am »
Holy cow, I didn't realize this was a 2016 necropost!

Yeah, this forum warns you with the message:

Warning: this topic has not been posted in for at least 120 days.
Unless you're sure you want to reply, please consider starting a new topic.

But if someone already ignored that and posted anyway, others coming later won't get the warning.
A necropost isn't necessarily bad if you're going down the same route as the original thread...

The problem comes when RandomGuy2 posts into RandomGuy1's 2016 thread "Hey did you fix it?".
RandomGuy1 last time online: 2017 :palm:.

Other forums won't allow reopening old threads at all...
« Last Edit: December 07, 2022, 02:10:39 am by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 
The following users thanked this post: MK14

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #106 on: December 07, 2022, 08:49:40 am »
All this is pretty nice, but learn some arithmetic if you're not comfortable with it. It'll make you a much better programmer, I promise.

I've seen many programmers use FP just because they are not very comfortable with arithmetic and thus mean to use a programming language as a glorified pocket calculator. Which was my main point.

But there sure are uses for FP, and uses for integer and fixed point. If you know why you use one or the other, with good technical reasons, then it's engineering. If you use one almost exclusively because you are more comfortable with it, then it becomes a silver bullet, and it ends up equating to throwing things against the wall until they stick.
Most sensible post in ages on this thread; just hitting the thank-you button is not enough.

I am truly amazed at the lengths technical people, who really understand how to engineer these things, go to when rationalizing non-technical decisions!
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #107 on: December 07, 2022, 08:55:35 am »
And after all that, you didn't help the OP at all. :popcorn:
The OP asked for something simple: how to scale/convert an ADC value into a voltage and output it as a string, avoiding floats if possible.
And since the OP has not logged in for four years, it's pretty obvious that no matter what we discussed, we did not help the OP.

You "don't discuss anything I'm not interested in!!!" guys are weird. Why not just ignore (technical) posts you are not interested in? After all, this is all 100% relevant to the forum, the subforum, and even to the thread. I'm truly baffled why discussing the accuracy of fixed point and floats triggers you, when the title says "ADC/DAC conversion to decimal, best practices". It's spot on topic!

Besides, I don't see a big issue discussing something in an old thread. This forum is not a helpdesk, it's a discussion forum. Live with it.

Your own mistake was to reply "to the OP" without reading the whole thread. That really sucks. But don't worry, I'm guilty of doing it all the time too.
« Last Edit: December 07, 2022, 09:02:00 am by Siwastaja »
 

Online voltsandjolts

  • Supporter
  • ****
  • Posts: 2302
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #108 on: December 07, 2022, 08:57:56 am »
Holy cow, I didn't realize this was a 2016 necropost!

I deliberately woke this thread.
Keeping related information together seems sensible to me, rather than starting a dozen threads about the same thing.

 
The following users thanked this post: Siwastaja

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #109 on: December 07, 2022, 11:30:35 am »
It's been an ABSOLUTE DISASTER, a train wreck of a thread, necroing-wise.

Total madness.

Let's wake up a long answered forum question.

NO, just NO!

You have ended up, by necroing this old/ancient thread, causing significant confusion and wasting a lot of people's time (it would seem).


Big Necro EDIT 15/03/23:
I have just struck out the entire post.
Soon after making that post, I had an increasing feeling that I was TOO HARSH in making it (if I remember correctly).

At some gentle or higher level, I agree with the post. But the way it was originally written was over the top.
« Last Edit: March 15, 2023, 12:28:29 pm by MK14 »
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #110 on: December 07, 2022, 11:43:57 am »
On some forums, more like Stack Exchange and similar, there is an attempt to reach the best possible technical answer for some question, e.g. the best/fastest way of factoring a large number.

So keeping a thread like that alive (especially on the appropriate forums) would make a lot of sense.

But if someone asks for help on a particular, fairly specific issue on a more general help/Q&A forum such as this one, it is much better suited to leaving a long-finished thread alone.

With fairly obvious exceptions, such as 'Post what your lab looks like' threads, etc.

The complication with this thread is that it has a fairly generic title, applicable to other things, but in practice it is about a very specific set of requirements and needs to suit the OP's project at the time.
« Last Edit: December 07, 2022, 11:47:54 am by MK14 »
 

Online voltsandjolts

  • Supporter
  • ****
  • Posts: 2302
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #111 on: March 15, 2023, 11:55:31 am »
On reflection, I don't agree at all with MK14 about necro posting.
It happens every day here on EEVblog, with people regularly adding useful information to old threads or asking in-context questions.
Better that than similar discussions spread across many threads.

If MK14 feels strongly about it, I suggest starting a poll about auto-locking threads older than x days.
Otherwise, please remember to check the post dates before joining a discussion.
 

Online MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #112 on: March 15, 2023, 12:27:02 pm »
On reflection, I don't agree at all with MK14 about necro posting.
It happens every day here on EEVblog, with people regularly adding useful information to old threads or asking in-context questions.
Better that than similar discussions spread across many threads.

If MK14 feels strongly about it, I suggest starting a poll about auto-locking threads older than x days.
Otherwise, please remember to check the post dates before joining a discussion.

Revised previous post:
https://www.eevblog.com/forum/microcontrollers/adcdac-conversion-to-_decimal_-best-practices/msg4567165/#msg4567165

I'm happy for our opinions to vary about thread organization and the necroing of older threads.

Because the original opening post is in the form of a very specific question (in my opinion), I think it is probably best to leave the thread alone, as reawakening it could cause some confusion: not everyone checks the dates, so they may spend time trying to answer a specific question while unaware that it was asked many years ago, and that the original poster may have solved the problem long ago and could have left the forum many years ago as well.
 

