Also, re #define vs const:
Using #define (preprocessor macro) for constants is a bad practice because the compiler has no way to check the type of the value.
For numerics like this I don't see a problem - it may even avoid some unnecessary type conversions, possibly even some run-time calculations if the compiler isn't smart enough to flatten them into a compile-time constant.
Never had a problem with differently sized types on different compilers? Or signed vs unsigned? All of that gets muddied up if you use macros - the compiler will promote the literal to whatever type it needs to perform the operation, which may not be the type the programmer intended (or assumed). If you use a const instead, the type is explicit, and if the conversion would be problematic you at least get a warning about it.
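A small sketch of the "size differs per target" part - the names and values here are made up purely for illustration, not taken from anyone's project:

#include <stdint.h>

#define CLOCK_HZ 100000                 /* decimal literal: int on a 32-bit target, long where int is 16-bit */
const uint32_t clock_hz = 100000u;      /* the type is spelled out once and is the same everywhere */

uint16_t ms_ticks(void)
{
    uint16_t a = CLOCK_HZ / 1000;       /* the intermediate type (and any warnings) depends on the target */
    uint16_t b = clock_hz / 1000u;      /* always a uint32_t division, then a narrowing you can see */
    return a + b;
}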
#define is basically string replacement/concatenation before the compiler even sees it. So if someone #defines the constant to something unexpected or invalid, you are going to have a much harder time finding the problem.
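A contrived example of the kind of thing that is hard to trace back (names made up for the sketch):

#define BUFFER_SIZE 256;             /* the stray semicolon goes unnoticed here */

char buffer[BUFFER_SIZE];            /* expands to char buffer[256;]; - the compiler
                                        complains about this line, not about the #define */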
Any method is going to have problems with typos etc.
Errors caused by #defines can sometimes be harder to find, though in the case of XC32, which is what I mostly use, the compiler generally tells you where something was defined, so typos etc. can usually be found easily.
I wasn't referring so much to typos (those tend to be obvious) but to things such as libraries (re)defining various things behind your back. Typical cases are the min/max macros, size_t, char being redefined as unsigned char, etc. That can break code in subtle ways you won't notice, because the code will likely still compile. Just not behave as expected.
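The min/max case is a good example of "still compiles, just misbehaves" - a minimal sketch with a made-up vendor header:

/* buried in some vendor header you include indirectly */
#define min(a, b) ((a) < (b) ? (a) : (b))

int next_capped(int *p, int limit)
{
    /* looks like an ordinary function call, but the macro evaluates its
       first argument twice, so *p can end up incremented twice */
    return min((*p)++, limit);
}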
I'd argue that for something that is fixed at compile time like this, #define makes more sense than const, as const is semantically treated as a variable, which the compiler may or may not be smart enough to deal with efficiently.
That applies only to C, where the compiler (or rather the language standard) is sadly too dumb and prevents many of these compile-time optimizations.
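A small sketch of what that means in practice (the names and values are just for illustration): in C a const-qualified object is not a constant expression, so it cannot be used where the language demands one, while C++ accepts it.

const int bufsize = 64;

int buffer[bufsize];      /* C++: a plain fixed-size array.
                             C: error - variably modified type at file scope */

int classify(int cmd)
{
    switch (cmd) {
    case bufsize:         /* C++: fine.  C: error - a case label needs an integer constant expression */
        return 1;
    default:
        return 0;
    }
}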
Just wondering if all compilers would be smart enough to combine multiple floating point consts into an integer result without invoking any FP code?
e.g.
#define clock 24000000
#define divclock clock*0.25
If you define it like this - i.e. as macros - then no. Any use of divclock gets literally substituted with clock*0.25, which is of double type. Not to mention that you should put the macro body in parentheses, i.e. (clock*0.25), otherwise you may get an unexpected surprise if it gets used in an expression next to an operator with higher priority than the multiplication. The preprocessor really works on the text search & replace level, nothing more.

I think the xc32 compiler is GCC-based, so you can see the preprocessing result by running it with the -E switch on your source file. It will stop after the preprocessing stage, before compilation, and you can see the results of the macro substitutions. You will see that the preprocessor does not do any arithmetic there for you.
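To make the parentheses point concrete (same made-up names as above):

#define clock    24000000
#define divclock clock*0.25                /* no parentheses around the body */

double t_wrong = 1.0 / divclock;           /* expands to 1.0 / 24000000*0.25,
                                              i.e. (1.0 / 24000000) * 0.25 - not what was meant */

#define divclock_safe (clock*0.25)         /* parenthesized version */
double t_right = 1.0 / divclock_safe;      /* expands to 1.0 / (24000000*0.25) */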
OTOH, if you use C++11 or newer, you can do this:
constexpr int clock = 24000000;
constexpr int divclock = clock / 4;
and this will be calculated by the compiler at compile time and optimized out, basically ending up as the equivalent of writing:
constexpr int divclock = 6000000;
If you are wondering why I replaced the multiplication by 0.25 with a division by 4 - multiplication by 0.25 will work too, but it will end up promoted to double and then truncated into the int constant expression. That will likely trigger a warning about possible data loss due to the truncation. Division by 4 is an integral division in this case, so no warning. Another way of avoiding that warning would be to use an explicit cast there (see below).
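If you'd rather keep the 0.25 form, the cast variant (reusing the clock constant from above, C++11 or newer) would look something like:

constexpr int divclock = static_cast<int>(clock * 0.25);   /* truncation is explicit, so no warning */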