
GCC ARM32 comparison question


brucehoult:

--- Quote from: peter-h on June 18, 2022, 11:29:59 am ---OK, sure, but nobody answered my question :)

Except in terms of "it is the C standard, so accept it".

--- End quote ---

When the rubber meets the road, yes, that's the specification of the language.

As to why...


--- Quote ---ISTM that this "promotion to int of almost everything shorter" is an attempt to prevent bad coding leading to what has since for ever been known in the embedded world (whether manually blowing fusible link PROMs one byte at a time, or asm, or C or whatever) as "integer overflow". The cost of doing this is

--- End quote ---

No. It's not to protect against sloppy programmers. It's because on very many of the machines that C runs on -- basically all mainframes, and any RISC that started life as 32-bit -- there are only instructions for full-register arithmetic, so forcing them to mask results down to 8 bits after every operation would be very inefficient.

At the same time, machines that *do* have byte arithmetic are -- as I showed above -- free to use it IF IT WILL GET THE SAME RESULT.

It's a very pragmatic policy.


--- Quote ---- on CPUs which are mostly not natively int sized (most of the 8/16 bit stuff like the Z80 etc) it bloats a lot of stuff, and slows it down, a lot

--- End quote ---

Rubbish.

What does it bloat? You showed one calculation that you said would be bloated on z80 if done in C. I showed actual C compiler output for the z80 which was not bloated.

I presented another calculation, and showed that it also was not bloated on any of a range of machines.

Want to come up with an example which will actually show bloat?


--- Quote ---- if there is enough overflow to make MSB=1 then you will likely get real problems with any comparisons, because the promotion is to signed int
- it conceals most integer overflow, which is nearly always loss of real data - unless doing a checksum ;)

--- End quote ---

The MSB of what? A 1-byte value?  Promotion of an unsigned char is to (signed) int values from 0 to 255, and promotion of a signed char is to int values from -128 to 127. Arithmetic and comparisons operate correctly in either case.

If anything, it is *easier* to detect overflow.

Examples in C, please.

Nominal Animal:

--- Quote from: peter-h on June 18, 2022, 11:29:59 am ---OK, sure, but nobody answered my question :)

Except in terms of "it is the C standard, so accept it".

--- End quote ---
Well, it is one of those choices by language developers that have complicated (and often not that convincing) reasoning and history behind it, and cannot be changed without making the result a different programming language.

You are probably right that the reasoning did not emphasize use cases (i.e., us developers and programmers using the language), and instead had more to do with compiler implementation and the abstract machine approach the C standard uses.  There are related rules like the default argument promotions (similar, but float is also promoted to double), applied when there is no prototype or when the arguments are passed as variable arguments to a variadic function (see <stdarg.h>).  These make variadic function calls much easier to support in C, as each passed argument always takes a full native register or a native-size word on the stack, depending on the ABI.

Just like many others here, I've done paid work in something like a dozen different programming languages.  It is useful to wonder why language designers made a specific choice, because it can reveal their intent and viewpoint (related to the paradigm underlying the language design).  However, sometimes such choices don't have any deeper meaning for us developers, because their reasoning is elsewhere, or just plain wrong.  I could be wrong, but I've always assumed integer promotions in C belong to this latter category.  It is sometimes useful, sometimes annoying; just something one has to deal with when writing C, without any huge programming paradigm insights in it.  Similar to e.g. how you can swap array names and indices in C, a[b] being equivalent to b[a].  Other than obfuscated code contests, I've never seen a real world use case where that would be actually useful.


--- Quote from: peter-h on June 18, 2022, 10:35:26 am ---So, AIUI, C makes any variable size shorter than int pointless, unless you are trying to save storage space.

In particular (unless very short of stack space) using a variable smaller than int inside a function is pointless because any storage used is chucked away upon exit. And a lot of variables are optimised anyway so never stored in RAM.
--- End quote ---
No; this is exactly why the int_fastN_t and uint_fastN_t types were introduced (and standardized in C99 in <stdint.h>), for N = 8, 16, 32, and 64.  They provide at least N-bit range, but their actual size depends on what is fast on a particular architecture.

For example, on 32- and 64-bit architectures, uint_fast16_t is usually a 32- or 64-bit unsigned integer, whichever is "faster" for that architecture.


--- Quote from: peter-h on June 18, 2022, 11:29:59 am ---I still wonder why C doesn't internally represent bytes as bytes. They are quite common in embedded systems :)
--- End quote ---
Actually, it does: it just calls them char (implementation-defined signedness), unsigned char, and signed char.  In particular, sizeof (char) = sizeof (unsigned char) = sizeof (signed char) = 1 by definition -- and that size is one byte, which is not required to be exactly 8 bits.  There are even specific rules about conversion to and from ((un)signed) char buffers in C.

It is only in (arithmetic and logical) expressions that integer promotions are done; and also when calling functions without prototypes or variadic functions.
So, it is not really about internal representation, and more about the definition of how arithmetic and logical expressions are evaluated.
And even then, the compiler does not need to do that, as long as it is proven that the results are the same as if it had done so.  (Kinda annoying, yeah, but the C language is defined in terms of an abstract machine.)

peter-h:
Thank you. It does clarify it for me, and importantly I can see why my code is working fine, until yesterday ;)


--- Quote ---Examples in C, please.

--- End quote ---

I can't, because I don't have a Z80 C environment and don't have the time to set one up. I probably have the 1980s IAR Z180 compiler somewhere (I don't think it was dongled, but it was ~£1000) and could try with that, but clearly compilers have got a lot cleverer since then. I spent months on a particular project coding a lot of its functions in asm, including runtimes, which were almost certainly written wholly in C too (as distinct from Hitech C, which had e.g. printf() in C but a lot of runtime stuff done in asm; Clyde Smith-Stubbs knew his stuff).

brucehoult:

--- Quote from: peter-h on June 18, 2022, 12:23:33 pm ---
--- Quote ---Examples in C, please.

--- End quote ---

I can't because I don't have a Z80 C environment and don't have the time to set one up.

--- End quote ---

I only need source code, I have a z80 compiler. It also does 6502, z180, mcs51, r2k, pic14, pic16, hc08, s08, stm8.


--- Quote --- I probably have the 1980s IAR Z180 compiler somewhere (I don't think it was dongled, but it was ~£1000) and could try with that, but clearly compilers have got a lot more clever since then.

--- End quote ---

Probably. Certainly the open source ones have. And this isn't even gcc or llvm, because they don't handle the weirdness of 8 bit ISAs very well, but a compiler with a lot less work in it.

peter-h:

--- Quote ---I only need source code, I have a z80 compiler
--- End quote ---

Yes; a current one.

I did another project using an H8/323 in 1997. We used the Hitech C compiler then, which was about £350. That worked well. There was also an open source compiler around (GNU?) which Hitachi were giving away on a CD but it produced hugely bloated code; about 2x bigger than the Hitech one.

Further back I used some Zilog-distributed MUFOM tools. Mainly their z280 assembler (I was the first Z280 design-in in Europe, according to Zilog) but there was also a C compiler which was so bloated nobody dared use it for anything real. And a Z8000 compiler, similarly bloated. Reportedly those compilers were generated with YACC.

This discussion applies to current tools, and I accept that.
