Author Topic: ADC/DAC Conversion to "decimal", best practices....  (Read 14389 times)


Offline uer166

  • Frequent Contributor
  • **
  • Posts: 887
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #75 on: December 05, 2022, 09:19:06 pm »
That's exactly my point, with float you don't get that "don't think about it at all" luxury.

Eh, float is nice since it's atomic on ARM, easy to printf/format for output, makes code readable, and keeps everything in SI units by definition. I admit that it's less accurate than fixed point, but say for a DSMPS, the analog and ADC accuracy (say 11 bits ENOB best case) means it's not even close to being the limiting factor. I'll gladly take the register push/pop penalty on ISR entry.

I usually default to converting all ADC values to SI units immediately and work on those instead. The only problem I've found is, of course, division by small numbers and ending up with NaN or inf, but that problem exists just as much in fixed point, so it's best to avoid dividing by a variable altogether.
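A minimal sketch of that approach (hypothetical names and scaling, assuming a 12-bit ADC with a 3.3 V reference and a current-sense channel scaled to 1 V/A):

Code: [Select]
#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define VREF        3.3f     /* reference voltage in volts (assumed) */
#define ADC_COUNTS  4096.0f  /* 12-bit converter                     */

/* Convert a raw reading straight to SI units and work in those. */
static float adc_to_volts(uint16_t raw)
{
    return (float)raw * (VREF / ADC_COUNTS);
}

int main(void)
{
    /* Hypothetical readings: output voltage and a current-sense
       signal already scaled so 1 V corresponds to 1 A.           */
    float v_out = adc_to_volts(2813);
    float i_out = adc_to_volts(12);

    /* Guard divisions by measured quantities, which may be near zero. */
    float r_load = (i_out > 1e-3f) ? (v_out / i_out) : INFINITY;

    printf("V = %.3f V, I = %.3f A, R = %.3f ohm\n", v_out, i_out, r_load);
    return 0;
}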
 

Offline uer166

  • Frequent Contributor
  • **
  • Posts: 887
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #76 on: December 05, 2022, 09:24:15 pm »
You don't need floats at all, fixed point can provide quite good precision:

Isn't that like suggesting metric is more precise than imperial?  They provide exactly the same precision if set up comparably. ;)

That shouldn't be the case: 23 bits of effective resolution for float vs 30 bits for fixed point. Fixed should definitely be more precise, all else being equal.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 494
  • Country: sk
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #77 on: December 05, 2022, 09:26:10 pm »
I think this is also why double is the standard floating point type in C and C++, despite the age of the languages;
On the contrary - it's *because* of the age of C. While it was developed on the PDP-11, Thompson and Ritchie, having worked on the Multics project before, must have been painfully aware that it was one of the less capable computers around.

The 600 series used 36-bit words and 18-bit addresses. They had two 36-bit accumulators, eight 18-bit index registers, and one 8-bit exponent register. It supported floating point in both 36-bit single-precision and 2 x 36-bit double precision, the exponent being stored separately, allowing up to 71 bits of precision and one bit being used for the sign.
Wide floating-point formats had been available in hardware since relatively early computers, as their primary purpose was to compute ballistics and similar things quickly.

Minicomputers and microprocessors were in fact a significant regression in this regard; the lost ground was only regained after some 20-30 years (x87 etc.).

JW
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5890
  • Country: es
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #78 on: December 05, 2022, 09:29:49 pm »
Isn't that like suggesting metric is more precise than imperial?  They provide exactly the same precision if set up comparably. ;)
Don't you know the difference between a 32bit float and fixed point?
Integers have a range of 2^bits, but floats use a mantissa and exponent: they have a range of roughly ±1E38 with ~7 significant decimal digits, and doubles go to about ±1E308 with ~15-16 digits.

It's mostly the range that makes floats great.
With fixed point, you must scale the data statically so it can store the whole required range; the higher the range, the lower the precision.
"Single" floats lose precision but can store really large/small numbers.
« Last Edit: December 05, 2022, 09:36:31 pm by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 3455
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #79 on: December 05, 2022, 10:12:44 pm »
Isn't 4 byte (32 bit) 7-digit decimal precision?  On the other hand, if I enter "3/pi" into your float for 8-bit MCUs, what is the answer?
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3639
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #80 on: December 05, 2022, 10:37:30 pm »
Isn't 4 byte (32 bit) 7-digit decimal precision?  On the other hand, if I enter "3/pi" into your float for 8-bit MCUs, what is the answer?
No, a signed 32-bit integer has \$ \log_{10} (2^{31}-1) \approxeq 9.3 \$ decimal places of precision, all inclusive.
Any fixed-point format defines how much of that is allocated to the fraction part.
So you can define fixed-point formats with ~7 decimal places (or anything up to ~9) after the point.

The key advantage of fixed-point is that for numbers in a known range, you know the precision. Mantissa+exponent formats have a (mostly, modulo denormals) consistent precision in scientific notation, but not in terms of actual reals. The tradeoff is that with an exponent you can approximate a much larger range of reals.
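As an illustrative sketch of that point (hypothetical unit and scaling): storing volts in a signed 32-bit integer as counts of 1e-7 V gives exactly 7 digits after the point over a range of about ±214.7483647 V, and additions stay exact.

Code: [Select]
#include <stdint.h>
#include <stdio.h>

#define SCALE 10000000   /* 1 LSB = 1e-7 of the unit: 7 digits after the point */

int main(void)
{
    int32_t v = (int32_t)( 1.2345678 * SCALE);   /* 1.2345678 V        */
    int32_t w = (int32_t)(-0.0000042 * SCALE);   /* -4.2 uV            */
    int32_t sum = v + w;                         /* exact, no rounding */

    printf("%d.%07d\n", (int)(sum / SCALE), (int)(sum % SCALE));  /* 1.2345636 */
    return 0;
}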
« Last Edit: December 05, 2022, 10:41:47 pm by helius »
 
The following users thanked this post: MK14

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26868
  • Country: nl
    • NCT Developments
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #81 on: December 05, 2022, 10:46:53 pm »
Isn't 4 byte (32 bit) 7-digit decimal precision?  On the other hand, if I enter "3/pi" into your float for 8-bit MCUs, what is the answer?
No, a signed 32-bit integer has \$ \log_{10} (2^{31}-1) \approxeq 9.3 \$ decimal places of precision, all inclusive.
Any fixed-point format defines how much of that is allocated to the fraction part.
So you can define fixed-point formats with ~7 decimal places (or anything up to ~9) after the point.

The key advantage of fixed-point is that for numbers in a known range, you know the precision. Mantissa+exponent formats have a (mostly, modulo denormals) consistent precision in scientific notation, but not in terms of actual reals. The tradeoff is that with an exponent you can approximate a much larger range of reals.
Yes and no. In both cases the precision is given by the number of bits used to express a number, with the advantage for floating point that the exponent scales the number into useful units.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #82 on: December 06, 2022, 03:44:35 am »
Isn't 4 byte (32 bit) 7-digit decimal precision?  On the other hand, if I enter "3/pi" into your float for 8-bit MCUs, what is the answer?
IEEE 754 Binary32, which most float implementations nowadays use, has 24 bits of precision in its mantissa, and \$\log_{10}(2^{24}) \approx 7.2\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.

A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.

On 32-bit machines, as I've described elsewhere, the UQ4.28 format (unsigned 32-bit ints with 4 integral bits and 28 fractional bits) is particularly useful.  The 28 bits yield \$\log_{10}(2^{28}) \approx 8.4\$ digits of decimal precision, but these can be chained together easily into BigNums, using only 32-bit math.  It is a perfect format for converting higher precision fixed-point and floating-point formats to decimal and vice versa on architectures with 32-bit multiplication (32×32=32) and division (32/32=32).  In the middle limbs, the integral part is left unused, and will be used for carry handling during operations.
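A minimal sketch of the digit-extraction idea on a single limb (hypothetical helper name): multiplying a UQ4.28 fraction by ten pushes the next decimal digit into the four integral bits, and since the fraction is below 2^28 the product always fits in 32 bits.

Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* Extract n decimal digits from a UQ4.28 fraction (0 <= x < 1) using
   only 32-bit math: fraction < 2^28 and digit < 10, so frac*10 < 2^32. */
static void uq4_28_digits(uint32_t frac, char *buf, int n)
{
    for (int i = 0; i < n; i++) {
        uint32_t t = frac * 10u;         /* next digit lands in the 4 integral bits */
        buf[i] = (char)('0' + (t >> 28));
        frac = t & 0x0FFFFFFFu;          /* clear the integral part, keep the rest  */
    }
    buf[n] = '\0';
}

int main(void)
{
    char buf[12];
    uq4_28_digits((uint32_t)(0.1875 * (1u << 28)), buf, 9);  /* 0.1875 is exact in binary */
    printf("0.%s\n", buf);   /* prints 0.187500000 */
    return 0;
}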
« Last Edit: December 06, 2022, 05:38:10 am by Nominal Animal »
 
The following users thanked this post: SiliconWizard

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4527
  • Country: gb
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #83 on: December 06, 2022, 04:10:28 am »
IEEE 754 Binary32, which most float implementations nowadays are, have 23 bits of precision in their mantissa, and \$\log_{10}(2^{23}) \approx 6.9\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.

For floats (32 bits), IEEE754.
Surely you mean \$\log_{10}(2^{24}) \$
Because there are 23 STORED bits, which makes 24 bits in total, because there is (with rare exceptions) always an implied leading binary 1 digit.  It is NOT stored in IEEE 754 floats, but it is stored explicitly in some floating-point formats, such as the x87 80-bit (or 128-bit in memory) extended format.
« Last Edit: December 06, 2022, 04:12:46 am by MK14 »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #84 on: December 06, 2022, 05:44:11 am »
IEEE 754 Binary32, which most float implementations nowadays are, have 23 bits of precision in their mantissa, and \$\log_{10}(2^{23}) \approx 6.9\$ decimal digits.  That's where the 7 decimal digits of precision for floats come from.
For floats (32 bits), IEEE754.
Surely you mean \$\log_{10}(2^{24}) \$
Yes, of course.  The mantissa has 24 bits of precision, even though the most significant one is implicit.
Similarly, double (IEEE 754 Binary64) has 53 bits of precision, with the most significant one implicit.

(Guess why I made the error.  I always get bitten in the butt when I don't verify:palm:
 And yes, a bit of code I just wrote assumed double had 54 bits of precision in the mantissa. ::))
 
The following users thanked this post: MK14

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #85 on: December 06, 2022, 05:52:54 am »
A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.
Yes, fixed point can represent numbers with this accuracy, but you can't keep that accuracy through most mathematical operations without using more bits. The clearest example is division.

Say you want to calculate speed. You have UQ8.24 values for both the distance in meters and the time in seconds, nicely represented with about 7 decimals. How accurately would you get m/s? With floating point you would still have the same roughly 7 significant digits no matter what the values of distance and time were.
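A minimal sketch of that division problem (hypothetical values, assuming UQ8.24 operands): dividing the raw 32-bit words throws away the fraction entirely, so the dividend has to be widened and pre-shifted to keep the precision.

Code: [Select]
#include <stdint.h>
#include <stdio.h>

#define Q 24  /* UQ8.24: 8 integer bits, 24 fractional bits */

int main(void)
{
    uint32_t dist_m = (uint32_t)(12.5 * (1u << Q));  /* 12.5 m in UQ8.24 */
    uint32_t time_s = (uint32_t)( 3.2 * (1u << Q));  /*  3.2 s in UQ8.24 */

    /* Naive: 32-bit divide of the raw words loses all fractional precision. */
    uint32_t naive = dist_m / time_s;                /* = 3 (raw quotient)  */

    /* Widen to 64 bits and pre-shift so the quotient is UQ8.24 again. */
    uint32_t speed = (uint32_t)(((uint64_t)dist_m << Q) / time_s);

    printf("naive = %u, speed = %f m/s\n", naive, speed / (double)(1u << Q));  /* ~3.906250 */
    return 0;
}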
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #86 on: December 06, 2022, 06:45:12 am »
A fixed point format such as Q8.24 has \$\log_{10}(2^{24}) \approx 7.2\$ decimal fractional digits (right of the decimal point), with total precision being \$\log_{10}(2^{32}) \approx 9.6\$ decimal digits.

A decimal fixed point format using d=8 in a 32-bit signed integer has range -21.47483648 to +21.47483647, and exactly eight decimal digits.
Yes, fixed point can represent numbers with this accuracy, but you can't keep that accuracy through most mathematical operations without using more bits. The clearest example is division.
True, except I'd argue the clearest example is multiplication.

TL;DR: Even floating-point arithmetic is really affected, but the implementations deal with it internally, by rounding the high-precision intermediate results to the floating-point type.

When you multiply an A-bit value with a B-bit value, the result has A+B bits, arithmetically speaking.  In other words, for multiplication, you need a temporary value that is as wide in bits as the sum of the widths of the multiplicands.

For Q8.24, the temporary result has 16 integer bits (including sign bit), and 48 fractional bits.

This is part and parcel of both integer and fixed point arithmetic: an unavoidable fact.
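A minimal sketch of what that means for a Q8.24 multiply (hypothetical helper name): the full product is Q16.48, so a 64-bit temporary is needed before shifting back down.

Code: [Select]
#include <stdint.h>
#include <stdio.h>

#define Q 24  /* Q8.24 */

/* Multiply two Q8.24 values via a 64-bit (Q16.48) intermediate. */
static int32_t q8_24_mul(int32_t a, int32_t b)
{
    int64_t wide = (int64_t)a * (int64_t)b;  /* Q16.48 intermediate      */
    return (int32_t)(wide >> Q);             /* back to Q8.24, truncated */
}

int main(void)
{
    int32_t a = (int32_t)(1.50 * (1 << Q));
    int32_t b = (int32_t)(2.25 * (1 << Q));
    printf("%f\n", q8_24_mul(a, b) / (double)(1 << Q));  /* 3.375000 */
    return 0;
}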

Floating-point arithmetic works around this by using a mantissa-exponent format to describe each value, v = m·B^x, where m is the mantissa (left-aligned, so without superfluous leading zeroes), x is the exponent, and B is the radix (2 for binary floating-point formats, 10 for decimal floating-point formats).
The product of two such values has a double-wide mantissa which is immediately (except in cases like fused multiply-add) rounded to the normal size; and the exponent is the sum of the terms' exponents.

So, in a way, even floating-point arithmetic does not completely avoid this issue, especially because IEEE 754 requires exact correct rounding; it's just that the floating-point arithmetic implementations take care of it internally.

Addition and subtraction between two values can only underflow or overflow by one bit.  But division is complicated, because many rational values cannot be expressed exactly in binary.  The most common example is 0.1 = 1/10, which in binary is 0b0.000110011... repeating, and cannot be exactly represented as a finite binary value.  So, some kind of rounding is needed.  Arithmetically, we usually implement integer division in terms of a quotient and a remainder, i.e. v / d = n with remainder r, such that d·n+r = v (and the remainder r is crucial when implementing BigNum division, division for arithmetic types larger than the native register size), and 0 ≤ abs(r) < abs(d), and preferably (but not necessarily) r and n having the same sign.  With floating-point division, n will be rounded to the stated precision, but is correct to within half a unit in the last place (ULP), i.e. within half a mantissa bit for IEEE 754 Binary32 (float) and Binary64 (double).

It is because of this rounding that one should not use exact comparison between floating-point values, but a tolerance instead, i.e. abs(a-b) ≤ eps, where eps is the largest value, relative to a and b, that one considers "zero": small enough to ignore.
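A minimal sketch of such a comparison (hypothetical helper; the tolerance values are assumptions to be picked for the data at hand):

Code: [Select]
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Treat a and b as equal if they differ by no more than a tolerance
   scaled to their magnitude, with a small absolute floor near zero. */
static bool nearly_equal(double a, double b, double rel_eps, double abs_eps)
{
    double diff  = fabs(a - b);
    double scale = fmax(fabs(a), fabs(b));
    return diff <= fmax(abs_eps, rel_eps * scale);
}

int main(void)
{
    double x = 0.1 + 0.2;   /* 0.30000000000000004... */
    printf("exact: %d\n", x == 0.3);                           /* 0 */
    printf("eps:   %d\n", nearly_equal(x, 0.3, 1e-12, 1e-15)); /* 1 */
    return 0;
}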
« Last Edit: December 06, 2022, 11:45:47 am by Nominal Animal »
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #87 on: December 06, 2022, 07:24:00 am »
You don't need floats at all, fixed point can provide quite good precision:
uint32_t mV_x100 = ADC*1000000/4095

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)

You can make some hacks to avoid division (Usually pretty slow), losing some accuracy at microvolt level, but overall giving the same results:
uint32_t mV_x100 = (ADC*1000244)>>12

ADC=2048 : 500122 (5001.22mV)
ADC=2813 : 686935 (6869.35mV)
ADC=1024 : 250061 (2500.61mV)

When using scaled fixed point it is very important to check your ranges and limits closely....

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[]) {
  int32_t adc = 2813;
  /* Careful: adc*1000000 is evaluated in signed 32-bit arithmetic and overflows
     for adc = 2813 (2,813,000,000 > INT32_MAX) -- exactly the range trap noted above. */
  uint32_t mV_x100 = adc*1000000/4095;
  printf("ADC = %d : %u  (%7.2fmV)\n", adc, mV_x100, mV_x100*0.01);
  return 0;
}
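For comparison, a sketch of one way to keep that in range (assuming the raw value really is limited to 0..4095): do the arithmetic unsigned, where 4095 × 1000000 still fits in 32 bits.

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(void) {
  uint32_t adc = 2813;                        /* 0..4095 by design */
  uint32_t mV_x100 = adc * 1000000u / 4095u;  /* max 4,095,000,000 < UINT32_MAX */
  printf("ADC = %u : %u  (%7.2fmV)\n", adc, mV_x100, mV_x100 * 0.01);
  return 0;
}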
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #88 on: December 06, 2022, 07:32:35 am »
What's the benefit of fixed point in real life, if you don't insist on keeping count of every cent or whatever it is you are calculating?

Compared with 32-bit floating point, 32-bit fixed point probably needs more memory for the variables, since you need 64-bit intermediates.

How big is the speed difference on different MCUs? Fixed point will be much faster for add and subtract. How about other mathematical operations? At least divide will be much faster with floats, if you need to go to 64-bit for fixed.

Any real-life examples of the same problem done with the same accuracy (significant digits, not decimals or bits)?
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3639
  • Country: us
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #89 on: December 06, 2022, 07:40:45 am »
Arithmetically, we usually implement integer division in terms of division and modulus, i.e. v / d = n with remainder r, such that d·n+r = v (and the remainder r is crucial when implementing BigNum division, division for arithmetic types larger than the native register size), and 0 ≤ abs(r) < abs(n), and preferably (but not necessarily) r and n having the same sign.
I believe you mean that 0 ≤ abs(r) < abs(d). But otherwise what you wrote is true.
 
The following users thanked this post: Nominal Animal

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8166
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #90 on: December 06, 2022, 08:59:35 am »
Fixed point is great when the range of numbers is trivial to see, and for linear response systems like SAR ADCs. When you don't need to waste extra bits for uncertainties about the range, you get significantly more precision than with floats.

For example, with an ADC you know by design it can only output numbers from 0 .. 4095, and when you multiply it by some calibration factor, you can just decide this factor is going to be less than 2^32/4096 = 1048576, and then you know the maximum possible result is 4095 * 1048576 = 4293918720, which fits in uint32, with no bits wasted. It's really as simple as that.
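A minimal sketch of that pattern (hypothetical calibration value, assuming a 12-bit ADC with 3300 mV full scale and the factor stored with 20 fractional bits):

Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* mV per count, scaled by 2^20: round(3300/4095 * 2^20) = 845006 < 2^20. */
#define CAL_MV_Q20  845006u

int main(void)
{
    uint32_t adc = 2813;                  /* 0..4095 by design                   */
    uint32_t mv_q20 = adc * CAL_MV_Q20;   /* max 4095*845006 < 2^32: no overflow */
    uint32_t mv_int  = mv_q20 >> 20;                         /* integer millivolts */
    uint32_t mv_frac = ((mv_q20 & 0xFFFFFu) * 1000u) >> 20;  /* three decimals     */
    printf("%u counts = %u.%03u mV\n", adc, mv_int, mv_frac);  /* ~2266.9 mV */
    return 0;
}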

float wastes bits on the ability to represent a massive range of numbers, which is not needed at all in typical ADC/DAC DAQ systems. But of course, combining accuracy and range (making automagic compromises between them) allows the developer to be lazy. For the lazy people, though, I really recommend using double: float is almost certainly good enough for most jobs, but just barely so.
« Last Edit: December 06, 2022, 09:02:52 am by Siwastaja »
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #91 on: December 06, 2022, 09:34:21 am »
Fixed point is great when the range of numbers is trivial to see, and for linear response systems like SAR ADCs. When you don't need to waste extra bits for uncertainties about the range, you get significantly more precision than with floats.

For example, with an ADC you know by design it can only output numbers from 0 .. 4095, and when you multiply it by some calibration factor, you can just decide this factor is going to be less than 2^32/4096 = 1048576, and then you know the maximum possible result is 4095 * 1048576 = 4293918720, which fits in uint32, with no bits wasted. It's really as simple as that.
Sure, but then you aren't really doing much with the value you just measured. If all you need to do is some IF comparisons or display the value for the user, then all is fine. But then again, that doesn't sound like a CPU-intensive job at all, so nothing is lost with floats. You certainly get the full real accuracy of any ADC with 32-bit floats, not to mention a 0...4095 12-bit ADC.
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14155
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #92 on: December 06, 2022, 09:44:49 am »
For most simple cases both fixed-point/integer calculation and FP calculation work. The main cost of FP is usually the extra code needed for the FP library. This can be an issue on a small µC (e.g. < 32 kB of flash). The speed of the computation is rarely an issue, as the decimal format is mainly used for output to humans, and there the rate is limited. If speed is critical one can write binary data to a file / memory card.

With some slightly more complex operations the resolution can already become an issue, like calculating the standard deviation with the brute-force formula. A 12-bit value squared tests the limits of SP floats, and if many are summed up even 32-bit integers can reach their limit. One may still need floats at the end, as one may need the square root.
Doing the math with fixed point one is more aware of the limitations - with floating point one gets rounding errors that may not be that bad, but when they are, they are trickier to find than overflow problems.
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4742
  • Country: nr
  • It's important to try new things..
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #93 on: December 06, 2022, 10:04:03 am »
SW floating point libs are small, like 3.5 kB for single precision on an 8-bitter like a PIC16. Double-precision libs are usually 2x larger and the computation is 2x slower. I remember using a single-precision floating-point SW lib around 2000 on a PIC16F88 (the free cc5x compiler had a 32-bit FP lib at that time).
PS: the only issue you may have to cope with in 32-bit FP is that it is less precise than a 32-bit integer. So roughly 6-7 significant digits with +-*/, no more.
« Last Edit: December 06, 2022, 10:16:33 am by imo »
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #94 on: December 06, 2022, 10:36:06 am »
With some slightly more complex operations the resolution can already become an issue, like calculating the standard deviation with the brute-force formula. A 12-bit value squared tests the limits of SP floats, and if many are summed up even 32-bit integers can reach their limit. One may still need floats at the end, as one may need the square root.
How would that work with fixed point? How would you know in advance how large the standard deviation will be? That affects how you should do it, unless you do everything in 64-bit to be certain not to overflow and still get some resolution. The standard deviation can be a small fraction of the average or much more than the average. With the former you are just summing zeros, and the latter would easily cause an overflow if you are using only 32-bit variables.

I don't see many problems with this for 32-bit float unless you are using a very large number of data points. Even with, say, 100,000 data points you would get the standard deviation close enough. After all it's just a statistical value, and there is no need to get more than 1-3 significant digits correct. And it's quite trivial to add accuracy by doing the summing in pieces: say, after every 100 or 1000 values, start from zero and add the current sum to another variable.
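A minimal sketch of that blocked summation (hypothetical data and block size):

Code: [Select]
#include <stdio.h>

/* Sum a float array in blocks, so small addends are not repeatedly
   added to an already-large running total. */
static float blocked_sum(const float *x, int n, int block)
{
    float total = 0.0f;
    for (int i = 0; i < n; i += block) {
        float partial = 0.0f;              /* restart from zero each block */
        for (int j = i; j < n && j < i + block; j++)
            partial += x[j];
        total += partial;                  /* add the block's sum once     */
    }
    return total;
}

int main(void)
{
    enum { N = 100000 };
    static float data[N];
    for (int i = 0; i < N; i++)
        data[i] = 1.0f + 1e-4f * (float)(i % 10);  /* values near 1, small variation */

    printf("naive:   %f\n", blocked_sum(data, N, N));     /* one block = plain sum */
    printf("blocked: %f\n", blocked_sum(data, N, 1000));  /* closer to 100045.0    */
    return 0;
}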

About the only pitfalls with 32-bit floats are adding/subtracting values of clearly different orders of magnitude (only in cases where this actually matters), that zero might not be exactly zero (but is very small), and that you may overflow with divisions etc.

Fixed point has all the same problems if used for mathematical operations without going to 64-bit for many of them, and it can have many more.
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4742
  • Country: nr
  • It's important to try new things..
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #95 on: December 06, 2022, 11:29:40 am »
Messing with 12 bits and SP FP cannot be a big problem, imho, incl. filters, stddev, etc. I did SP FP math with a 24-bit ADC on an ADuC 8-bitter (80C52) and it worked. What I did was to do all calculations in integer where possible, and use SP FP only at the very end of the calcs, for things like gain, scale, stddev, EMA filters, ppm, etc. You need to play with it for a while and watch carefully what you do. You can get 1 µV resolution printed out, sure.
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14155
  • Country: de
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #96 on: December 06, 2022, 12:18:10 pm »
The simple formula for the std. dev. uses the sum of the squared values minus the square of the average. With values consistently near the maximum, one has the problem of taking the difference of nearly equal-sized numbers. So quite a lot of bits are lost, and with low noise the last bits are what matters.  SP FP can already start to round with the sum of two squared 12-bit numbers.
Adding many similar numbers can then also lead to repeated rounding in the same direction.

32-bit integers have 7 or 8 bits of extra headroom, and one is generally more aware that there are limits. With integers it is reasonably easy to mix 32-bit and 64-bit numbers: the sum could be 64-bit (if needed) and the rest 32-bit or even less. Integer conversion between different resolutions is usually easy and fast. With floating-point variables, mixing SP and doubles adds quite some overhead, both in code length and run-time.

For the std. dev. example one can do things more intelligently (e.g. subtract an approximate average first), if one is aware of the possible problems.
There are other cases too that are numerically unfavorable; some are easy to circumvent, like the std. dev. example, others are not - though these often involve non-integer operations like square roots or log, so something one would still do in floating point anyway. The point is that using FP does not allow one to be too sloppy.  Bugs due to rounding errors are hard to find - overflow errors are more drastic and easier to find (though sometimes the hard way, as with one of the Ariane rockets that crashed on its first flight because of an overflow in integer/fixed-point math).


Most data also starts as integer, so one has at least the conversion from integer to float as a first step.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8166
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #97 on: December 06, 2022, 04:21:26 pm »
SW floating point libs are small, like 3.5kB single precision
"Small" is so relative. I think that's pretty huge, for such obvious feature (basic arithmetics).There still are small budget microcontrollers which have something like 8-16KB of flash. These are used in cost-sensitive projects so considerable effort may be already put making the application fit. 3.5KB is a large percentage of this!

And we are not even talking about AVRs and PICs, STM32F030 value line starts at 16KB of flash.

On 8-bit CPUs, 16-bit fixed point arithmetic saves registers, RAM and processing time. Half-precision (16-bit) float, while used in 3D graphics for non-critical visual things, would be nearly useless, but the same 16 bits offer much more precision in fixed point (with carefully considered ranges, of course).

But if you have some Cortex-M4 which today does not cost more than $2... this does not matter. Especially with the ones with double-precision hardware floats, the only reason to use fixed point would be some super timing-critical (or frequently repeating) interrupt handler where you want to avoid the stacking of the gazillion FP registers. I mean, if you run the CPU at say 100 MHz and fire interrupts at 1 MHz, just FP stacking and unstacking is a significant CPU load percentage. Even then, you can still use FP elsewhere in the project, just not within the ISR.
 

Offline jmaja

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #98 on: December 06, 2022, 07:28:18 pm »
The simple formula for the std. dev. uses the sum of the squared values minus the square of the average. With values consistently near the maximum, one has the problem of taking the difference of nearly equal-sized numbers. So quite a lot of bits are lost, and with low noise the last bits are what matters.  SP FP can already start to round with the sum of two squared 12-bit numbers.
Adding many similar numbers can then also lead to repeated rounding in the same direction.
I did some tests on a simulator using AVR XMega and later with a PC. There are at least two simple ways to calculate standard deviation.

\$ s = \sqrt{\tfrac{1}{N}\sum_{i} (x_i - \bar{x})^2} \$
and
\$ s = \sqrt{\tfrac{1}{N}\sum_{i} x_i^2 - \bar{x}^2} \$
I calculated these for an array of 100 floats. It took (without sqrt) 57,000 vs 41,000 cycles, thus less than 15 or 2 ms at 4 or 32 MHz. The latter was faster, but clearly less accurate. The values I used were 1 + i * 10^n. The accurate value (with sqrt) is 2.886607e-m, where m depends on n. With n >= -1 (until overflow) you get exactly the same result using both formulas with float or double. At n=-2 the latter gives 2.886605e-01. At n=-3 the latter starts to fail, 2.886267e-02, and even more at n=-4, 2.883754e-03. At n=-5 the latter is about useless, 3.906250e-04. The former is still quite accurate at n=-6, 2.886695e-05, and still OK at n=-7, 2.900095e-06. At n=-9 even double becomes about useless with the latter formula, 3.371748e-08, while the former works with double up to about n=-15, 2.884401e-14.

At n=-7 the standard deviation is at about the 18-bit level. I have worked with ADCs having about a 22-bit noise margin, but not with anything measurable at that accuracy (except GND and ref). One sensor I use reaches about a 15-bit noise margin.

So use the former equation if you expect the standard deviation to be small compared to the average. If you expect the difference to be more than 7 orders of magnitude, you should use double or be more careful about how you add.

How do you get better accuracy with fixed point? Maybe I'll try.
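For concreteness, a minimal sketch of the two single-precision versions compared above (hypothetical data, the n = -3 case):

Code: [Select]
#include <math.h>
#include <stdio.h>

#define N 100

int main(void)
{
    float x[N];
    for (int i = 0; i < N; i++)
        x[i] = 1.0f + (float)i * 1e-3f;    /* the "1 + i*10^n" test data, n = -3 */

    float mean = 0.0f;
    for (int i = 0; i < N; i++)
        mean += x[i];
    mean /= N;

    /* Former: two-pass, sum of squared deviations from the mean. */
    float s1 = 0.0f;
    for (int i = 0; i < N; i++)
        s1 += (x[i] - mean) * (x[i] - mean);
    s1 = sqrtf(s1 / N);

    /* Latter: single-pass sum of squares, minus the squared mean. */
    float s2 = 0.0f;
    for (int i = 0; i < N; i++)
        s2 += x[i] * x[i];
    s2 = sqrtf(s2 / N - mean * mean);

    printf("former (two-pass):    %e\n", s1);  /* close to 2.886607e-02               */
    printf("latter (single-pass): %e\n", s2);  /* already drifting in the last digits */
    return 0;
}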
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14429
  • Country: fr
Re: ADC/DAC Conversion to "decimal", best practices....
« Reply #99 on: December 06, 2022, 08:37:56 pm »
All this is pretty nice, but learn some arithmetic if you're not comfortable with it. It'll make you a much better programmer, I promise.

I've seen many programmers just using FP because they are not very comfortable with arithmetic and thus just mean to use the programming language as a glorified pocket calculator. Which was my main point.

But there sure are uses for FP, and uses for integer and fixed-point. If you know why you use one or the other with good technical reasons, then it's engineering. If you use one almost exclusively because you are more comfortable with it, then it becomes a silver bullet and it will end up equating to throwing things against the wall until they stick.
 
The following users thanked this post: Siwastaja

