It looks like you can convince GCC to perform the same optimization, like so:
void f(int *x, int n)
{
    int i;

    if ((n > 0) && ((n & 7) == 0)) {
        for (i = 0; i < n; i++) {
            x[i]++;
        }
    } else {
        __builtin_unreachable();
    }
}
00000000 <f>:
0: 3804 subs r0, #4
2: eb00 0181 add.w r1, r0, r1, lsl #2
6: f850 3f04 ldr.w r3, [r0, #4]!
a: 3301 adds r3, #1
c: 4281 cmp r1, r0
e: 6003 str r3, [r0, #0]
10: d1f9 bne.n 6 <f+0x6>
12: 4770 bx lr
Compare that to the version without the hint:
void g(int *x, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        x[i]++;
    }
}
00000014 <g>:
14: 2900 cmp r1, #0
16: dd08 ble.n 2a <g+0x16>
18: 3804 subs r0, #4
1a: eb00 0181 add.w r1, r0, r1, lsl #2
1e: f850 3f04 ldr.w r3, [r0, #4]!
22: 3301 adds r3, #1
24: 4281 cmp r1, r0
26: 6003 str r3, [r0, #0]
28: d1f9 bne.n 1e <g+0xa>
2a: 4770 bx lr
I don't see this as any worse than the other situations where lying to the compiler invokes undefined behaviour, e.g. casting unaligned pointers. Static analyzers will use regular assert()s in their analysis, but I don't know offhand whether any compilers do. Maybe in optimized debug builds?
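One common way to get the best of both worlds is an "assume"-style macro (the name `assume` is my own here, not a standard one): in debug builds it is a real assert() that catches a false precondition, and in release builds it collapses to the `__builtin_unreachable()` hint shown above. A minimal sketch, assuming GCC or Clang:

```c
#include <assert.h>

/* Hypothetical "assume" macro: checks the condition in debug builds,
   and in NDEBUG builds tells the compiler the condition always holds. */
#ifdef NDEBUG
#define assume(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
#define assume(cond) assert(cond)
#endif

void f(int *x, int n)
{
    /* Promise the compiler n is positive and a multiple of 8,
       so it can drop the loop-entry check (and potentially unroll). */
    assume(n > 0 && (n & 7) == 0);

    for (int i = 0; i < n; i++) {
        x[i]++;
    }
}
```

The nice property is that the "lie" is at least tested whenever assertions are enabled; you only pay for the undefined behaviour if the precondition was wrong and you never exercised it in a debug build.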