detect missing brace better than gcc
gcc seems to be particularly horrible. IIRC, one of the "teaching advantages" of university implementations (PL/C, WATFOR, etc.) was supposed to be MUCH BETTER error messages than the industry equivalents. We could do with another round of that sort of thinking. Maybe Python "got it."
That, plus fast compilation to code that was nowhere near the best possible, but much faster than an interpreted language and fine for something that was typically only run once or twice.
I thought I'd try knocking my HiFive Unleashed (64-bit RISC-V) back from its usual 1.5 GHz to 1 MHz and compiling gcc -O0 hello.c -o hello. Sadly, it turns out any setting slower than 37.75 MHz just gives 37.75 MHz. Grr. I'd have thought it could run slower than that. I'm sad because the otherwise very similar (but 32-bit, and 180nm instead of 28nm) FE310 very happily runs at 16 MHz by default in the Arduino environment, and I bet it would go slower.
However, instead of the normal 0.275 s real / 0.2 s user, at 37.75 MHz I get 6.1 s real and 4.6 s user time. So a 39.74x slower clock gives only 22.2x slower execution. I guess that's because cache misses and disk access get a whole heck of a lot cheaper, relatively speaking.
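To check my arithmetic on those ratios (numbers taken straight from the timings above):

```python
# Clock slowdown vs. observed compile-time slowdown on the HiFive Unleashed.
clock_ratio = 1500 / 37.75        # 1.5 GHz throttled down to 37.75 MHz
time_ratio = 6.1 / 0.275          # real compile time, throttled vs. full speed
print(round(clock_ratio, 2))      # 39.74
print(round(time_ratio, 1))       # 22.2
```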
I don't remember exactly, but I think five or six seconds was about the time for compiling a small Pascal program on the VAX 11/780 or PDP 11/70. The PDP 11/70 ran its microcode engine at 6.7 MHz, and the fastest instruction, a register-to-register move, took 2 clock cycles. So effectively about 3.3 MHz in modern RISC terms.
So, to a first approximation, modern gcc/as/ld is about 10x less efficient, in clock cycles per compile, than the compilers on the PDP 11, even at -O0.
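The back-of-envelope behind that 10x (the 5-6 s PDP compile time is my recollection, as noted above):

```python
# Effective PDP 11/70 clock: 6.7 MHz microcode engine, 2 cycles per
# fastest (register-to-register) instruction.
pdp_effective_mhz = 6.7 / 2
clock_advantage = 37.75 / pdp_effective_mhz
# Roughly comparable wall-clock times (~5-6 s then, 6.1 s now) despite an
# ~11x faster effective clock puts modern gcc/as/ld around 10x behind.
print(round(clock_advantage, 1))  # 11.3
```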
gcc -S (just producing assembly language) takes 2.35 s real, 1.85 s user at that same 37.75 MHz clock on the HiFive Unleashed, and gcc -c (running the compiler and assembler but not the linker) takes 2.85 s real, 2.0 s user. So it's the GNU linker taking most of the time.