Precision of operations using #define


ricko_uk:
Hi,
just had some strange numbers come out of a calculation with multiply, divide and SQRT that used #define.

I always assumed that if I have various defines (some integers, some decimals) like this (these are just random numbers/calculation):
#define PI           3.1415926535
#define ANGLE    279
#define SPEED     0.02343534
#define ALPHA     1.4543345654

and then:

#define RESULT  ((PI * ANGLE / SPEED) + SQRT(ALPHA))

then the compiler would maintain the highest precision (i.e. like doing the calculation in Excel or a calculator) and then truncate/adapt the result to the specific type that it is assigned to. So in the following two cases:

double dVar = RESULT;    //THIS WOULD STORE   37402.171778219100000

int iVar = RESULT;  //THIS WOULD STORE    37402

Isn't that correct?

Doesn't the compiler treat all calculation results coming out of #defines as the highest precision it can handle and only reduce it at the end when assigning to a specific type var?

Thank you :)

Ian.M:
Nope.  #define performs the equivalent of textual substitution*  and thereafter the compiler treats the resulting expanded token stream like any other C expression or numeric constant, with the same default precisions for int or float, unless some part of the expression forces promotion to a longer type.

* It actually does token substitution equivalent to textual substitution with the limitation that it does not join tokens unless you make use of the preprocessor stringizing or concatenation operators.

DavidAlfa:
Most compilers treat decimal constants as double by default.
Ex.:

This operation uses double, then converts to float:
float p = 0.322554 * 0.000164;

This operation uses float all the way:
float p = (float)0.322554 * (float)0.000164;

In any case, you can try this:
#define    pi     (double)3.1415926535
.
.

#define RESULT  (double)((PI * ANGLE / SPEED) + SQRT(ALPHA))

ajb:
This is called "constant folding", and AFAIK the behavior isn't terribly well specified outside of specific compilers, other than it should produce results equivalent to what would happen if the calculations were done at runtime.

In general you want to be careful about relying on the preprocessor or compiler to optimize expressions like this--or, really, you want to be careful about complicated expressions like this in general--especially when integers are involved. If you have to do a lot of integer multiplication/division steps it's very easy to get an overflow, or to divide down intermediate values in a way that causes a loss of precision. Using the largest available integer size may help with overflows, but sometimes you want to manually rearrange the expression, including the constants, to provide the best precision while avoiding overflows. Floating point values are less susceptible to these problems given the much larger range they can represent, but you don't always want to incur the overhead of doing floating point math in embedded systems.

SiliconWizard:

--- Quote from: ajb on June 12, 2021, 09:31:35 pm ---This is called "constant folding", and AFAIK the behavior isn't terribly well specified outside of specific compilers, other than it should produce results equivalent to what would happen if the calculations were done at runtime.
--- End quote ---

Yep. Exactly. Constants that can be fully calculated at compile time will be. As you said, what really happens is rather implementation-specific. In particular, on implementations where math operations at run-time rely on function calls in the compiler's library, you're not guaranteed to get exactly the same result as with constants evaluated at compile time. Of course, if using floating point, the difference will usually be negligible. Yes, FP literals are double by default, unless you use the "f" suffix (for float).

But as was said above, the preprocessor has nothing to do with it (it just does token substitution). So it's all in the way the compiler evaluates calculated constants at compile-time.