Author Topic: GCC ARM32 comparison question  (Read 2544 times)


Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4028
  • Country: nz
Re: GCC ARM32 comparison question
« Reply #25 on: June 18, 2022, 11:45:26 am »
OK, sure, but nobody answered my question :)

Except in terms of "it is the C standard, so accept it".

When the rubber meets the road, yes, that's the specification of the language.

As to why...

Quote
ISTM that this "promotion to int of almost everything shorter" is an attempt to prevent bad coding leading to what has since for ever been known in the embedded world (whether manually blowing fusible link PROMs one byte at a time, or asm, or C or whatever) as "integer overflow". The cost of doing this is

No. It's not to protect against sloppy programmers. It's because very many of the machines that C runs on -- basically all mainframes or RISC that started life as 32 bit -- there are only instructions for full register arithmetic, so if you forced them to mask results down to 8 bits after every operation it would be very inefficient.

At the same time, machines that *do* have byte arithmetic are -- as I showed above -- free to use it IF IT WILL GET THE SAME RESULT.

It's a very pragmatic policy.

Quote
- on CPUs which are mostly not natively int sized (most of the 8/16 bit stuff like the Z80 etc) it bloats a lot of stuff, and slows it down, a lot

Rubbish.

What does it bloat? You showed one calculation that you said would be bloated on z80 if done in C. I showed actual C compiler output for the z80 which was not bloated.

I presented another calculation, and showed that it also was not bloated on any of a range of machines.

Want to come up with an example which will actually show bloat?

Quote
- if there is enough overflow to make MSB=1 then you will likely get real problems with any comparisons, because the promotion is to signed int
- it conceals most integer overflow, which is nearly always loss of real data - unless doing a checksum ;)

The MSB of what? A 1-byte value?  Promotion of an unsigned char is to (signed) int values from 0 to 255, and promotion of a signed char is to int values from -128 to 127. Arithmetic and comparisons operate correctly in either case.

If anything, it is *easier* to detect overflow.

Examples in C, please.
« Last Edit: June 18, 2022, 12:14:44 pm by brucehoult »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6239
  • Country: fi
    • My home page and email address
Re: GCC ARM32 comparison question
« Reply #26 on: June 18, 2022, 12:08:14 pm »
OK, sure, but nobody answered my question :)

Except in terms of "it is the C standard, so accept it".
Well, it is one of those choices by language developers that have complicated (and often not that convincing) reasoning and history behind it, and cannot be changed without making the result a different programming language.

You are probably right in that the reasoning did not emphasize use cases (i.e., us developers and programmers using the language), and instead had more to do with the compiler implementation and the abstract machine approach the C standard uses.  There are related rules like default argument promotion (similar, but float is also promoted to double), applied when there is no prototype or when arguments are passed as variable arguments to a variadic function (see <stdarg.h>).  This makes variadic function calls much easier to support in C, as each passed argument always takes a full native register or a native-size word on the stack, depending on the ABI.

Just like many others here, I've done paid work in something like a dozen different programming languages.  It is useful to wonder why language designers made a specific choice, because it can reveal their intent and viewpoint (related to the paradigm underlying the language design).  However, sometimes such choices don't have any deeper meaning for us developers, because their reasoning is elsewhere, or just plain wrong.  I could be wrong, but I've always assumed integer promotions in C belong to this latter category.  It is sometimes useful, sometimes annoying; just something one has to deal with when writing C, without any huge programming paradigm insights in it.  Similar to e.g. how you can swap array names and indices in C, a[b] being equivalent to b[a].  Other than obfuscated code contests, I've never seen a real world use case where that would be actually useful.

So, AIUI, C makes any variable size shorter than int pointless, unless you are trying to save storage space.

In particular (unless very short of stack space) using a variable smaller than int inside a function is pointless because any storage used is chucked away upon exit. And a lot of variables are optimised anyway so never stored in RAM.
No; this is exactly why the int_fastN_t and uint_fastN_t types were introduced (and standardized in C99 in <stdint.h>), for N = 8, 16, 32, and 64.  They provide at least N-bit range, but their actual size depends on what is fast on a particular architecture.

For example, on 32- and 64-bit architectures, uint_fast16_t is usually a 32- or 64-bit unsigned integer, whichever is "faster" for that architecture.

I still wonder why C doesn't internally represent bytes as bytes. They are quite common in embedded systems :)
Actually, it does: it just calls them char (implementation-defined signedness), unsigned char, and signed char.  In particular, sizeof (char) = sizeof (unsigned char) = sizeof (signed char) = 1 by definition, and that size is CHAR_BIT bits: at least 8, but not necessarily exactly 8.  There are even specific rules about conversion to and from ((un)signed) char buffers in C.

It is only in (arithmetic and logical) expressions that integer promotions are done; and also when calling functions without prototypes or variadic functions.
So, it is not really about internal representation, and more about the definition of how arithmetic and logical expressions are evaluated.
And even then, the compiler does not need to do that, as long as it is proven that the results are the same as if it had done so.  (Kinda annoying, yeah, but the C language is defined in terms of an abstract machine.)
 
The following users thanked this post: peter-h

Offline peter-hTopic starter

  • Super Contributor
  • ***
  • Posts: 3694
  • Country: gb
  • Doing electronics since the 1960s...
Re: GCC ARM32 comparison question
« Reply #27 on: June 18, 2022, 12:23:33 pm »
Thank you. It does clarify it for me, and importantly I can see why my code is working fine, until yesterday ;)

Quote
Examples in C, please.

I can't because I don't have a Z80 C environment and don't have the time to set one up. I probably have the 1980s IAR Z180 compiler somewhere (I don't think it was dongled, but it was ~£1000) and could try with that, but clearly compilers have got a lot more clever since then. I spent months on a particular project coding a lot of its functions in asm. Including runtimes, which were almost certainly written wholly in C too (as distinct from Hitech C which had e.g. printf() in C but a lot of runtime stuff done in asm; Clyde Smith-Stubbs knew his stuff).

« Last Edit: June 18, 2022, 12:26:07 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4028
  • Country: nz
Re: GCC ARM32 comparison question
« Reply #28 on: June 18, 2022, 12:36:14 pm »
Quote
Examples in C, please.

I can't because I don't have a Z80 C environment and don't have the time to set one up.

I only need source code, I have a z80 compiler. It also does 6502, z180, mcs51, r2k, pic14, pic16, hc08, s08, stm8.

Quote
I probably have the 1980s IAR Z180 compiler somewhere (I don't think it was dongled, but it was ~£1000) and could try with that, but clearly compilers have got a lot more clever since then.

Probably. Certainly the open source ones have. And this isn't even gcc or llvm, because they don't handle the weirdness of 8 bit ISAs very well, but a compiler with a lot less work in it.
 

Offline peter-hTopic starter

  • Super Contributor
  • ***
  • Posts: 3694
  • Country: gb
  • Doing electronics since the 1960s...
Re: GCC ARM32 comparison question
« Reply #29 on: June 18, 2022, 12:46:14 pm »
Quote
I only need source code, I have a z80 compiler

Yes; a current one.

I did another project using an H8/323 in 1997. We used the Hitech C compiler then, which was about £350. That worked well. There was also an open source compiler around (GNU?) which Hitachi were giving away on a CD but it produced hugely bloated code; about 2x bigger than the Hitech one.

Further back I used some Zilog-distributed MUFOM tools. Mainly their z280 assembler (I was the first Z280 design-in in Europe, according to Zilog) but there was also a C compiler which was so bloated nobody dared use it for anything real. And a Z8000 compiler, similarly bloated. Reportedly those compilers were generated with YACC.

This discussion applies to current tools, and I accept that.
« Last Edit: June 18, 2022, 12:58:22 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online dietert1

  • Super Contributor
  • ***
  • Posts: 2059
  • Country: br
    • CADT Homepage
Re: GCC ARM32 comparison question
« Reply #30 on: June 18, 2022, 01:02:50 pm »
Some days ago I made a quick test for an STM32L476 board using the IAR IDE. There were some float variables and it used them as intended, except that the link map contained lots of double support functions. I found that when invoking printf, all float values got extended to doubles. Probably caused by some project default setting like MISRA or the like.

Regards, Dieter
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6239
  • Country: fi
    • My home page and email address
Re: GCC ARM32 comparison question
« Reply #31 on: June 18, 2022, 01:19:34 pm »
Some days ago I made a quick test for an STM32L476 board using the IAR IDE. There were some float variables and it used them as intended, except that the link map contained lots of double support functions. I found that when invoking printf, all float values got extended to doubles. Probably caused by some project default setting like MISRA or the like.
It's the argument promotion rules I mentioned above that cause that; printf() being a variadic function.

See if there is a flag to disable printf() float support altogether, and if the C library used in the IAR IDE supports strfromf() (introduced in ISO/IEC TS 18661-1 and standardized in C23).  Since strfromf() is not a variadic function, floats are passed to it without promoting them to double.  It is not as powerful as printf(), because the format string must begin with a %, followed by an optional precision, followed by one of the conversion specifiers valid for floats (AaEeFfGg).

The idea is that you use a small buffer on stack to convert the float to a string first, say

    char  tempbuf[12];
    if (strfromf(tempbuf, sizeof tempbuf, "%.3f", floatvar) >= (int)sizeof tempbuf) {
        /* Oops, tempbuf[] was not large enough! */
    } else {
        /* Print tempbuf as a string */
    }

It's not optimal for sure; just a workaround.

When using GCC, the -Wdouble-promotion option can be very useful: it makes the compiler emit a warning whenever a float value is implicitly promoted to double.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4028
  • Country: nz
Re: GCC ARM32 comparison question
« Reply #32 on: June 18, 2022, 01:31:14 pm »
Quote
I only need source code, I have a z80 compiler

Yes; a current one.

Naturally.

Quote
I did another project using an H8/323 in 1997. We used the Hitech C compiler then, which was about £350. That worked well. There was also an open source compiler around (GNU?) which Hitachi were giving away on a CD but it produced hugely bloated code; about 2x bigger than the Hitech one.

Further back I used some Zilog-distributed MUFOM tools. Mainly their z280 assembler (I was the first Z280 design-in in Europe, according to Zilog) but there was also a C compiler which was so bloated nobody dared use it for anything real. And a Z8000 compiler, similarly bloated. Reportedly those compilers were generated with YACC.

I used all kinds of awful compilers 40 years ago too. And companies thought they could charge a fortune for rubbish.

My first C programming was on a Z8000 machine running Unix, as it happens, in 1983 if I recall correctly. Prior to that any high level language programming I'd done on a microcomputer was in either an interpreter or a compiler to bytecode and then interpret that, so anything that compiled to native code was a huge speed improvement.

Quote
This discussion applies to current tools, and I accept that.

More to the point, it applies to what the C language permits or doesn't permit, not to the quality of any particular compiler, though I'm certainly prepared to take compilers producing reasonable code as existence proofs that C doesn't mandate bad code, even on such crude machines as a z80 or 6502.
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11236
  • Country: us
    • Personal site
Re: GCC ARM32 comparison question
« Reply #33 on: June 18, 2022, 04:43:30 pm »
Take the Z80. HL holds a uint16_t x and you are doing x << 4.

I very explicitly said that it is guaranteed to be faster on 16- and 32-bit CPUs. The Z80 is not one of them.

Re your example, that is wrong code :) because anybody adding up even two bytes should know the result can be > 255.
It is just an example. Believe me, you would be bitten by the lack of promotions, and you would create a similar topic if they were not there.

How to implement promotions is not a simple topic, and in the end you just need to decide something. For concrete examples, you can look at how it is handled in more modern languages like Rust, Go, Swift. They all made their own slightly different decisions.
« Last Edit: June 18, 2022, 04:49:24 pm by ataradov »
Alex
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14445
  • Country: fr
Re: GCC ARM32 comparison question
« Reply #34 on: June 18, 2022, 05:58:43 pm »
Have you read the rationale document posted by newbrain?
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11236
  • Country: us
    • Personal site
Re: GCC ARM32 comparison question
« Reply #35 on: June 18, 2022, 06:07:18 pm »
Have you read the rationale document posted by newbrain?
Is this for me? If so, yes, I've read this many years ago.

From my point of view, the best way to approach promotions is to allow only non-losing widening promotions. There is a good discussion on that here https://internals.rust-lang.org/t/implicit-widening-polymorphic-indexing-and-similar-ideas/1141

Prohibiting everything results in a lot of pointless casts, and doing what C did results in a lot of confusion.
Alex
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14445
  • Country: fr
Re: GCC ARM32 comparison question
« Reply #36 on: June 18, 2022, 06:11:40 pm »
Have you read the rationale document posted by newbrain?
Is this for me? If so, yes, I've read this many years ago.

No, sorry for not quoting the OP. That was for the OP.
And I'm again not saying that their decision was the right one, but at least we more or less know why C is defined this way.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8168
  • Country: fi
Re: GCC ARM32 comparison question
« Reply #37 on: June 18, 2022, 07:21:01 pm »
- on CPUs which are mostly not natively int sized (most of the 8/16 bit stuff like the Z80 etc) it bloats a lot of stuff, and slows it down, a lot

Of course not! You still don't get it. C is not a "portable assembler". C is a high-level language, built around the concept of the abstract C machine. The compiler is totally allowed to produce optimal code; it only has to prove the result is correct according to the standard. The problem is that you think of "optimization" as some separate step, as if the programmer writes "portable assembler" first, the compiler translates it 1:1 into bloated code, and then an optimizer tries to do something about it. It's not like that. Really, the programmer gives a high-level description of the program, using a standard language called C. Then the compiler produces machine code, the only requirement being that it must produce exactly the correct result. It does not matter how the compiler achieves this. Of course compilers try to do this in an optimal way.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14445
  • Country: fr
Re: GCC ARM32 comparison question
« Reply #38 on: June 18, 2022, 07:33:25 pm »
Compilers build an abstract description of the source code before translating that to assembly.
Only in very old and simple compilers were the two steps more or less combined (like, say, in the original Turbo Pascal compiler).

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26892
  • Country: nl
    • NCT Developments
Re: GCC ARM32 comparison question
« Reply #39 on: June 24, 2022, 12:29:10 am »
OK, sure, but nobody answered my question :)

Except in terms of "it is the C standard, so accept it".

ISTM that this "promotion to int of almost everything shorter" is an attempt to prevent bad coding leading to what has since for ever been known in the embedded world (whether manually blowing fusible link PROMs one byte at a time, or asm, or C or whatever) as "integer overflow". The cost of doing this is

- on CPUs which are mostly not natively int sized (most of the 8/16 bit stuff like the Z80 etc) it bloats a lot of stuff, and slows it down, a lot
The designers of the C language already thought of that: the size of an int (integer) isn't fixed in C! On 16-bit CPUs an int is typically 16 bits; on 64-bit platforms an int is usually 32 bits.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

