OK, sure, but nobody answered my question, except in terms of "it is the C standard, so accept it".
Well, it is one of those choices by language developers that have complicated (and often not that convincing) reasoning and history behind them, and that cannot be changed without making the result a different programming language.
You are probably right in that the reasoning did not emphasize use cases (i.e., us developers and programmers using the language), and instead had more to do with compiler implementation and the abstract machine approach the C standard uses. There is the related default argument promotion (similar, except that float is also promoted to double), applied when there is no prototype in scope or when arguments are passed as the variable arguments of a variadic function (see <stdarg.h>). It makes variadic function calls much easier to support in C, because each promoted argument occupies a full native register or a native-size word on the stack, depending on the ABI.
Just like many others here, I've done paid work in something like a dozen different programming languages. It is useful to wonder why language designers made a specific choice, because it can reveal their intent and viewpoint (related to the paradigm underlying the language design). However, sometimes such choices don't have any deeper meaning for us developers, because their reasoning lies elsewhere, or is just plain wrong. I could be wrong, but I've always assumed integer promotions in C belong to this latter category. They are sometimes useful, sometimes annoying; just something one has to deal with when writing C, with no huge programming-paradigm insight behind them. Similar to e.g. how you can swap array names and indices in C, a[b] being equivalent to b[a]. Other than in obfuscated code contests, I've never seen a real-world use case where that would actually be useful.
So, AIUI, C makes any variable size shorter than int pointless, unless you are trying to save storage space.
In particular (unless very short of stack space) using a variable smaller than int inside a function is pointless because any storage used is chucked away upon exit. And a lot of variables are optimised anyway so never stored in RAM.
No; this is exactly why the int_fastN_t and uint_fastN_t types were introduced (standardized in C99 in <stdint.h>), for N = 8, 16, 32, and 64. They provide at least N-bit range, but their actual size depends on what is fast on a particular architecture. For example, on 32- and 64-bit architectures, uint_fast16_t is usually a 32- or 64-bit unsigned integer, whichever is "faster" for that architecture.
I still wonder why C doesn't internally represent bytes as bytes. They are quite common in embedded systems.
Actually, it does: it just calls them char (implementation-defined signedness), unsigned char, and signed char. In particular, sizeof (char) = sizeof (unsigned char) = sizeof (signed char) = 1 by definition; note that this does not mean exactly 8 bits, since a byte in C is CHAR_BIT bits, which is at least 8. There are even specific rules about conversion to and from ((un)signed) char buffers in C.
Integer promotions are applied only when evaluating (arithmetic and logical) expressions, and when calling functions without prototypes or passing variable arguments to variadic functions. So, it is not really about internal representation; it is about the definition of how arithmetic and logical expressions are evaluated. And even then, the compiler does not actually need to perform the promotions, as long as it can prove the results are the same as if it had (kinda annoying, yeah, but the C language is defined in terms of an abstract machine).