Things like if (!my_pointer) yadayada_error(); is just bad coding since NULL can be defined as any value.
No, this is wrong on two counts:
- !my_pointer is guaranteed to yield true if my_pointer is the NULL pointer.
- NULL is defined as either the integer constant 0 or that same constant cast to (void *).
References (C11):
- 6.5.3.3 Unary arithmetic operators, §5:
The result of the logical negation operator ! is 0 if the value of its operand compares unequal to 0, 1 if the value of its operand compares equal to 0. The result has type int. The expression !E is equivalent to (0==E).
Now, 0 in a pointer context is the NULL pointer, so the expression will do the right thing.
- 6.3.2.3 Pointers, §3:
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.66) If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
So NULL cannot be defined in other ways, and 0 is a perfectly good NULL pointer.
If the actual machine NULL pointer needs to be something different for whatever reason, it all must happen by compiler magic.
ETA: OTOH, if one does something like (uintptr_t)p and p is a NULL pointer, I don't see anything in the standard that would prevent the result from being something other than 0 (but I should check more carefully; there is definitely nothing in 6.3.2.3).
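FWIW, a minimal sketch of the two points above (the function name is just for illustration):

#include <stddef.h>

int is_null(const char *p)
{
    /* These three tests are equivalent per 6.5.3.3 and 6.3.2.3:
       !p is (0 == p), and 0 in a pointer context is a null pointer constant. */
    if (!p)        return 1;
    if (p == 0)    return 1;
    if (p == NULL) return 1;

    /* Caveat from the ETA: nothing guarantees that converting a null pointer
       to an integer yields 0, so (uintptr_t)p == 0 is not a portable null test. */
    return 0;
}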
Still -in general-,
[...]
relying on a specific C standard
That said, I could not completely avoid them, in things like this:
Code: [Select]
data = 0;
for (address = 0x080e0000; address < 0x080fffff; address += 4)
{
    if ( (*(volatile uint32_t*)address) != data )
        error++;
    data += 4;
}
Things like if (!my_pointer) yadayada_error(); is just bad coding since NULL can be defined as any value. BTW: IMHO you should always use conditional statements with an explicit value anyway.
#include <stdbool.h>

bool foo(int n, void *p)
{
    bool b = n > 10;
    int m = b && p + 1;
    return m < 2;
}
Perfectly valid C, but what the heck.
Perfectly valid C, but what the heck.
I beg to differ.
Had p been any pointer but a pointer to void, that code would have been compliant.
As it is, it violates the constraint found in 6.5.6 Additive operators, §2:
Constraints
For addition, either both operands shall have arithmetic type, or one operand shall be a pointer to a complete object type and the other shall have integer type. (Incrementing is equivalent to adding 1.)
By definition, void is not a complete type, so this is not valid C.
Note that it's not UB but just plainly wrong: a 'shall' that is not satisfied inside a constraint requires a diagnostic, as opposed to a 'shall' outside a constraint, whose violation is UB.
Now, gcc is especially lenient here.
It will issue a warning with -std=c11 -pedantic <- that's in my default options. Maybe it says something about me.
gcc accepts arithmetic on void pointers as an extension (treating them as pointers to char) - as if C needed more type relaxation...
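For completeness, a sketch of how the snippet could be brought in line with that constraint: cast to a pointer to a complete object type before the addition (still questionable code, but no longer a constraint violation):

#include <stdbool.h>

bool foo(int n, void *p)
{
    bool b = n > 10;
    /* (char *)p is a pointer to a complete object type, so the + 1 now
       satisfies the constraint in 6.5.6 §2 */
    int m = b && (char *)p + 1;
    return m < 2;
}

(At run time the arithmetic is still dubious if p is a null pointer, but the constraint violation is gone.)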
I eschew bit fields as they are a minefield of implementation defined behaviour
I like C's flexibility, but IMHO all those implicit conversions and type-compatibility rules are atrocious from any reasonable point of view.
I eschew bit fields as they are a minefield of implementation defined behaviour
Noticed that bit fields yield significantly more optimised code with GCC on Cortex-M; an example: https://godbolt.org/z/G5ErEfrMc
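Not the linked snippet, but a sketch of the kind of comparison being made; the register address, layout and names below are made up:

#include <stdint.h>

/* Hypothetical peripheral control register described as bit fields
   (the exact layout and ordering is implementation defined, hence the minefield). */
typedef struct {
    uint32_t enable    : 1;
    uint32_t mode      : 3;
    uint32_t prescaler : 8;
    uint32_t           : 20;
} ctrl_reg_t;

#define CTRL      ((volatile ctrl_reg_t *)0x40000000u)

/* The same register accessed in the traditional mask-and-shift style. */
#define CTRL_RAW  (*(volatile uint32_t *)0x40000000u)
#define MODE_POS  1u
#define MODE_MSK  (0x7u << MODE_POS)

void set_mode_bitfield(uint32_t m)
{
    CTRL->mode = m;   /* read-modify-write; GCC typically uses a bit-field insert (BFI) */
}

void set_mode_masks(uint32_t m)
{
    CTRL_RAW = (CTRL_RAW & ~MODE_MSK) | ((m << MODE_POS) & MODE_MSK);
}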
And this may actually be useful. I ran into an issue where I inadvertently used an uninitialized pointer and the compiler removed the whole body of the function, replacing it with a single "udf" instruction. If the code contains undefined behaviour, just mark it as such and replace it with a clearly undefined instruction. This actually helped me to quickly find the source of the issue, as I got a clear and repeatable fault.
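A minimal sketch of the kind of code that can trigger that (not the code from the post, and the exact output depends on compiler version and flags):

#include <stdint.h>

void store_value(uint32_t value)
{
    uint32_t *reg;   /* inadvertently left uninitialized */
    *reg = value;    /* undefined behaviour: at -O2 a Cortex-M build of GCC may
                        drop the store and emit a trapping instruction (udf) instead */
}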
(Out of curiosity, I'll check tomorrow if we have a specific design rule at work - I program for fun, very seldom at work nowadays).
IMO this is an example of how deeply the C world is broken - a compiler plants a bomb in the code and we are happy about the explosion being clear and repeatable. Why not just raise a compile-time error??
IMO this is an example of how deeply the C world is broken ...
GCC faithfully implements the standard to the fullest extent. So, yes, it is the standard that is broken.
My personal preference would be for the standard to define behaviour in all cases. The definition should make for a sensible implementation, at least on current platforms; if that is not possible, then a bloated version should be implemented. But this goes back to the "high level assembler" vs full programming language discussion.
"Undefined" situation occurs when you make an error or mistake.
Again, most modern compilers will act reasonably in cases like this, so there are no real issues. But as this thread shows, pushing optimization levels will shift what counts as "reasonable", and you may experience issues in edge cases. This is not a big deal in the grand scheme of things.
compiler can trace that "y" is always bigger than 32
[...]
you don't have the right to expect this.
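Assuming the exchange refers to a shift count reaching or exceeding the width of the type (the usual "32" case), a sketch of why the compiler is allowed to behave like that:

#include <stdint.h>

uint32_t shift_down(uint32_t x, unsigned y)
{
    /* Shifting a 32-bit value by 32 or more is undefined behaviour, so if the
       compiler can prove y >= 32 on some path it may assume that path is never
       taken, or generate anything at all for it. */
    return x >> y;
}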