Author Topic: Precision of operations using #define  (Read 1770 times)


Offline ricko_ukTopic starter

  • Super Contributor
  • ***
  • Posts: 1015
  • Country: gb
Precision of operations using #define
« on: June 12, 2021, 08:37:03 pm »
Hi,
just had some strange numbers come out of a calculation with multiply, divide and sqrt that used #define.

I always assumed that if I have various defines (some integer, some decimal) like this (these are just random numbers/calculations):
#define PI     3.1415926535
#define ANGLE  279
#define SPEED  0.02343534
#define ALPHA  1.4543345654

and then:

#define RESULT  ((PI * ANGLE / SPEED) + sqrt(ALPHA))

then the compiler would maintain the highest precision (i.e. as if doing the calculation in Excel or on a calculator) and then truncate/adapt the result to the specific type it is assigned to. So in the following two cases:

double dVar = RESULT;    //THIS WOULD STORE   37402.171778219100000

int iVar = RESULT;  //THIS WOULD STORE    37402

Isn't that correct?

Doesn't the compiler treat all calculation results coming out of #defines at the highest precision it can handle, and only reduce it at the end when assigning to a variable of a specific type?

Thank you :)
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12849
Re: Precision of operations using #define
« Reply #1 on: June 12, 2021, 08:50:02 pm »
Nope.  #define performs the equivalent of textual substitution*  and thereafter the compiler treats the resulting expanded token stream like any other C expression or numeric constant, with the same default precisions for int or float, unless some part of the expression forces promotion to a longer type.

* It actually does token substitution, equivalent to textual substitution with the limitation that it does not join tokens unless you make use of the preprocessor stringizing or concatenation operators.
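
A minimal sketch (hypothetical, reusing the OP's constants) of how the expanded tokens just follow the usual C arithmetic-conversion rules:
Code: [Select]
#include <stdio.h>

#define PI     3.1415926535   /* a double literal */
#define ANGLE  279            /* an int literal   */

int main(void)
{
    double a = ANGLE / 2;    /* int / int: truncated to 139 before the assignment */
    double b = ANGLE / 2.0;  /* int promoted to double: 139.5                     */
    double c = PI * ANGLE;   /* int promoted to double: no truncation             */
    printf("%f %f %f\n", a, b, c);
    return 0;
}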
« Last Edit: June 12, 2021, 09:51:14 pm by Ian.M »
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5890
  • Country: es
Re: Precision of operations using #define
« Reply #2 on: June 12, 2021, 09:07:55 pm »
Most compilers treat decimal literals as double by default.
Ex.:

This operation uses double, then converts to float:
float p = 0.322554 * 0.000164;

This operation uses float all the way:
float p = (float)0.322554 * (float)0.000164;

In any case, you can try this:
#define PI     (double)3.1415926535
.
.
 
#define RESULT  (double)((PI * ANGLE / SPEED) + sqrt(ALPHA))
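
Alternatively (just a one-line sketch), the standard f suffix on the literals keeps the whole expression in float:
Code: [Select]
float p = 0.322554f * 0.000164f;  /* both literals are float, so the multiply is done in float */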
« Last Edit: June 12, 2021, 09:09:57 pm by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Offline ajb

  • Super Contributor
  • ***
  • Posts: 2596
  • Country: us
Re: Precision of operations using #define
« Reply #3 on: June 12, 2021, 09:31:35 pm »
This is called "constant folding", and AFAIK the behavior isn't terribly well specified outside of specific compilers, other than that it should produce results equivalent to what would happen if the calculations were done at runtime.

In general you want to be careful about relying on the preprocessor or compiler to optimize expressions like this--or, really, you want to be careful about complicated expressions like this in general--especially when integers are involved. If you have to do a lot of integer multiplication/division steps, it's very easy to get an overflow, or to divide down intermediate values in a way that causes a loss of precision. Using the largest available integer size may help with overflows, but sometimes you want to manually rearrange the expression, including the constants, to provide the best precision while avoiding overflows. Floating point values are less susceptible to these problems, given the much larger range they can represent, but you don't always want to incur the overhead of doing floating point math in embedded systems.
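
A minimal sketch of the rearranging point, assuming a hypothetical 12-bit ADC with a 3.3 V reference:
Code: [Select]
#include <stdint.h>

#define VREF_MV  3300   /* reference voltage in millivolts */
#define ADC_MAX  4095   /* 12-bit full scale               */

/* Divide first: VREF_MV / ADC_MAX truncates to 0, so the result is always 0. */
#define MV_BAD(raw)   ((VREF_MV / ADC_MAX) * (raw))

/* Multiply first: exact, and 4095 * 3300 still fits easily in 32 bits. */
#define MV_GOOD(raw)  (((uint32_t)(raw) * VREF_MV) / ADC_MAX)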
« Last Edit: June 12, 2021, 09:33:22 pm by ajb »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14429
  • Country: fr
Re: Precision of operations using #define
« Reply #4 on: June 12, 2021, 09:45:42 pm »
This is called "constant folding", and AFAIK the behavior isn't terribly well specified outside of specific compilers, other than that it should produce results equivalent to what would happen if the calculations were done at runtime.

Yep. Exactly. Constants that can be fully calculated at compile time will be. As you said, what really happens is rather implementation-specific. In particular, on implementations where math operations at run-time rely on function calls implemented by the compiler's library, it's not guaranteed that constants evaluated at compile time will give exactly the same result. Of course, if using floating point, the difference will usually be negligible. Yes, FP literals are double by default, unless you use the "f" suffix (for float).

But as was said above, the preprocessor has nothing to do with it (it just does token substitution). So it's all in the way the compiler evaluates calculated constants at compile time.

 

Offline ricko_ukTopic starter

  • Super Contributor
  • ***
  • Posts: 1015
  • Country: gb
Re: Precision of operations using #define
« Reply #5 on: June 12, 2021, 10:07:30 pm »
Thank you all.

So if you want high precision but don't want to incur the overhead of doing those calculations at run time (during the main code's execution), what are the possible solutions?

The one that comes to mind is to:
1) do all those calculations using doubles at boot and store the result into some double
2) whenever required assign that variable by type-casting it
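
Something like this minimal sketch (names made up, reusing the numbers from my first post):
Code: [Select]
#include <math.h>
#include <stdint.h>

static double g_result;   /* computed once at boot */

void init_constants(void)
{
    g_result = (3.1415926535 * 279 / 0.02343534) + sqrt(1.4543345654);
}

uint32_t get_steps(void)
{
    return (uint32_t)g_result;   /* decimals dropped where an integer is enough */
}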

Any other solutions/suggestions?

Thank you :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19447
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Precision of operations using #define
« Reply #6 on: June 12, 2021, 10:37:18 pm »
Any other solutions/suggestions?

There are reasons why the high performance computing community use Fortran to crunch numbers.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline ajb

  • Super Contributor
  • ***
  • Posts: 2596
  • Country: us
Re: Precision of operations using #define
« Reply #7 on: June 12, 2021, 10:49:37 pm »
Computing constants at boot time can be a good solution, especially if those constants need to take into account calibration or configuration data that isn't available at compile time.  Although you could also do the computation at calibration or configuration time and just store the computed constants then. 

2) whenever required assign that variable by type-casting it

I'm not sure what you mean by this.  You can't increase the precision of a stored value by typecasting it.  You may be able to store a higher precision value into a lower precision variable if that would help I guess.

If the constants that go into the expression are all known at compile time, then there's a third option, which is to do the constant folding/simplification yourself and just include the simplified expression in your code. This works well with the other thing I was saying about rearranging integer expressions to give the best results within the precision you have available on the runtime target. When I do things like ADC conversion functions, I'll often write out the original expression that relates a measured parameter to the expected ADC value (via voltage dividers or whatever), do the algebra to rearrange the expression and any constants into whatever form will work best, and just leave all of that work in comments above the final expression. That way I know the expression will get evaluated the way I want at runtime, but I also have a record of how I came up with the simplified values if I come back to it later.
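
A hypothetical sketch of that style (made-up values: a 10k/2k divider into a 12-bit ADC with a 3.3 V reference):
Code: [Select]
#include <stdint.h>

/* Vin feeds a 10k:2k divider, so Vadc = Vin * 2/12  ->  Vin = Vadc * 6
 * Vadc_mV = raw * 3300 / 4096
 * Vin_mV  = raw * 3300 * 6 / 4096 = raw * 19800 / 4096 = raw * 2475 / 512
 * Worst-case intermediate: 4095 * 2475 = 10,135,125, which fits in a uint32_t.
 */
#define VIN_MV(raw)  (((uint32_t)(raw) * 2475u) / 512u)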

Of course this doesn't need to be done manually. If the constants need to be changed a lot during development, you could even have a pre-build step that invokes some other piece of software to pre-compute whatever you need and generate the correct constant values. Then you have the full capability of the PC running the build available, and can do calculations at arbitrary precision or arbitrary complexity relatively easily. It would be fairly simple to throw the results into a header file that your application uses.
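
A toy sketch of such a pre-build step (file name and constant are made up), compiled and run on the host PC before the main build:
Code: [Select]
#include <stdio.h>
#include <math.h>

int main(void)
{
    FILE *f = fopen("generated_constants.h", "w");
    if (!f)
        return 1;
    fprintf(f, "/* auto-generated - do not edit */\n");
    fprintf(f, "#define RESULT %.17g\n",
            (3.1415926535 * 279 / 0.02343534) + sqrt(1.4543345654));
    fclose(f);
    return 0;
}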
 

Offline ricko_ukTopic starter

  • Super Contributor
  • ***
  • Posts: 1015
  • Country: gb
Re: Precision of operations using #define
« Reply #8 on: June 12, 2021, 10:56:28 pm »
Thank you,

2) whenever required assign that variable by type-casting it

I'm not sure what you mean by this.  You can't increase the precision of a stored value by typecasting it.  You may be able to store a higher precision value into a lower precision variable if that would help I guess.

No, I wasn't thinking of increasing the precision but of decreasing it if/when required. Like doing everything in doubles and then typecasting to uint32_t (for example) if required (i.e. if I don't need the decimal places, like for example the number of steps of a stepper motor).

 

Offline Silenos

  • Regular Contributor
  • *
  • Posts: 54
  • Country: pl
  • Fumbling in ignorance
Re: Precision of operations using #define
« Reply #9 on: June 13, 2021, 10:31:46 am »
Concurring with constexpr functions for C++.
As for C on an MCU target or the like, I personally refrained from struggling with the preprocessor to do anything advanced; I found it not really worth the time. I generate initialization lines in included external text files with external tools, as those are used for generating the const data anyway.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8166
  • Country: fi
Re: Precision of operations using #define
« Reply #10 on: June 13, 2021, 11:27:14 am »
Concurring with constexpr functions for C++.
As for C on an MCU target or the like, I personally refrained from struggling with the preprocessor to do anything advanced; I found it not really worth the time. I generate initialization lines in included external text files with external tools, as those are used for generating the const data anyway.

Yes, your method has some advantages:
* You increase visibility into the system and remove uncertainty, because the exact literal value is available without tediously looking at the assembly listing.
* You can create a reproducible record of the stages that lead to the calculation of the final value. You can do this manually, documented using comments (which, on the one hand, is prone to typing errors, but on the other, is error-checkable thanks to the visibility), or automated using simple tools - a Matlab script, a Python script, a small C program.

One typical example of this mindset difference: how do you deal with UART baud rate registers? The appealing way is to initialize it like USART->BRR = F_CPU/BAUDRATE, which results in some baud rate you won't know exactly and only trust to "likely work", given some implicit assumptions such as F_CPU being many orders of magnitude bigger than BAUDRATE; but the code "looks better" and is more easily modified for different baud rates.

The second way is to manually calculate F_CPU/BAUDRATE, round it up or down, and document the resulting actual baud rate and the error percentage relative to the desired baud rate as a comment. The most complex, third option is to write either a compile-time tool or a runtime configuration function that takes the desired baud rate, finds the closest achievable one, makes sure the error is below some maximum allowable threshold, and reports an error if it isn't. The first and last options are the most error-prone, due to being simplistic and complex, respectively.
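
A minimal sketch of that third option as a compile-time check (names hypothetical), using C11 _Static_assert so a bad combination fails the build:
Code: [Select]
#define F_CPU     64000000UL
#define BAUDRATE  115200UL

/* Round to nearest rather than truncating. */
#define BRR_VALUE  ((F_CPU + BAUDRATE / 2) / BAUDRATE)

/* Actual baud rate and error in 0.1% units, all exact integer math. */
#define ACTUAL_BAUD      (F_CPU / BRR_VALUE)
#define BAUD_ERR_PERMIL  (((ACTUAL_BAUD > BAUDRATE) ? (ACTUAL_BAUD - BAUDRATE) \
                                                    : (BAUDRATE - ACTUAL_BAUD)) * 1000 / BAUDRATE)

_Static_assert(BAUD_ERR_PERMIL <= 20, "baud rate error exceeds 2%");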
 

Offline Silenos

  • Regular Contributor
  • *
  • Posts: 54
  • Country: pl
  • Fumbling in ignorance
Re: Precision of operations using #define
« Reply #11 on: June 13, 2021, 01:17:53 pm »
Yes, your method has some advantages:
* You increase visibility into the system and remove uncertainty, because the exact literal value is available without tediously looking at the assembly listing.
* You can create a reproducible record of the stages that lead to the calculation of the final value. You can do this manually, documented using comments (which, on the one hand, is prone to typing errors, but on the other, is error-checkable thanks to the visibility), or automated using simple tools - a Matlab script, a Python script, a small C program.

One typical example of this mindset difference: how do you deal with UART baud rate registers? The appealing way is to initialize it like USART->BRR = F_CPU/BAUDRATE, which results in some baud rate you won't know exactly and only trust to "likely work", given some implicit assumptions such as F_CPU being many orders of magnitude bigger than BAUDRATE; but the code "looks better" and is more easily modified for different baud rates.

The second way is to manually calculate F_CPU/BAUDRATE, round it up or down, and document the resulting actual baud rate and the error percentage relative to the desired baud rate as a comment. The most complex, third option is to write either a compile-time tool or a runtime configuration function that takes the desired baud rate, finds the closest achievable one, makes sure the error is below some maximum allowable threshold, and reports an error if it isn't. The first and last options are the most error-prone, due to being simplistic and complex, respectively.
Well, that "method" is what I use for math, graphics, databases, or things exactly like @ricko_uk's - whatever is detached from the MCU config, assuming his target is an MCU. In his case, to have control over the math precision. Yes, it's essentially my educated dodge around forcing and assuring that the C compiler actually always calculates the load, whatever it is. That is why I consider C++ constexpr a really nice feature.

Baud rate case? I don't know; any automated calculation tool would converge towards something resembling Cube etc., because there would always be someone to think up some additional check to consider. And the boss would get mad if he found out.
I actually looked up what I did with the last UART:
Code: [Select]
    //USART2->BRR = 555U; /* 64mhz/115200 = 555,555 */
    USART2->BRR = 69U; /* 64mhz/921600 = 555,555*/
Perfection.  :)
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8166
  • Country: fi
Re: Precision of operations using #define
« Reply #12 on: June 13, 2021, 01:44:10 pm »
I actually looked up what I did with the last UART:
Code: [Select]
    //USART2->BRR = 555U; /* 64mhz/115200 = 555,555 */
    USART2->BRR = 69U; /* 64mhz/921600 = 555,555*/
Perfection.  :)

Resembles what I'm usually doing, including forgetting to update the comment :).
 

Offline cv007

  • Frequent Contributor
  • **
  • Posts: 824
Re: Precision of operations using #define
« Reply #13 on: June 13, 2021, 04:05:53 pm »
>how do you deal with UART baud rate registers?

If using C++, and the CPU speed is known at compile time, then use the compiler to do the work-
https://godbolt.org/z/bEPKKP7W8

If the CPU speed/baud rate combo will not work as you want, you let the compiler tell you: if the compiler does not like it, you get no compiled code. The only downside is some template syntax, which is easy enough. It's not really something you necessarily want scattered all over your code, if only because you now have to figure out what type of function you are dealing with- template syntax or normal- but it is useful for things like the example given.

edit- since I have that code example, I'll show a use for typed enums in addition to their normal C++ usefulness- in this case it is a template function that takes any number of arguments, where all of the arguments have to be one of the 3 enum types (in any order). The values provided in the function arguments are computed at compile time and end up as a single value to be written to a single register in this case.
https://godbolt.org/z/Yv4Tcvfq7

Lots of ways to do something similar, but in many cases you will end up with several writes to the register (although that's not that important). This just happens to be another option where you can provide arguments in any order: via templates and typed enums the values are computed at compile time, and when done you can write the value/values to the register/registers (the value passed around can also be a struct containing a number of values). The typed enums make it happen, since they are unique types and provide a way for template argument deduction to end up in the correct function template.
« Last Edit: June 14, 2021, 01:30:13 am by cv007 »
 
The following users thanked this post: thm_w, evb149

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14429
  • Country: fr
Re: Precision of operations using #define
« Reply #14 on: June 14, 2021, 05:45:56 pm »
If you're using integer calculations, be they compile-time or run-time, as long as you follow a few basic rules (promotion and literal suffixes), results will be exact. No need to overthink it, especially for things as simple as baud rate calculation. If you are afraid of rounding errors, just use integers only.

You only need to be careful in rare cases where you'd need very large integers, for instance to represent fractional frequencies, which could overflow when multiplied. In any case, a quick analysis of the calculation should tell you what's OK and what is not.
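
A two-line sketch of the promotion/suffix point (values made up):
Code: [Select]
#include <stdint.h>

/* 64000000 * 256 would overflow 32-bit unsigned arithmetic;
   the cast (or a ULL suffix on a literal) makes the multiply 64-bit and exact. */
uint64_t scaled = (uint64_t)64000000u * 256u;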
 

