Author Topic: No more code-size-limited version of IAR embedded workbench for ARM?  (Read 12853 times)


Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Does anyone know if IAR got rid of the code-size-limited (32 kB) free version of embedded workbench for ARM?  I used the full version of IAR in a previous job and got in the [bad] habit of doing my personal projects on the free code-size-limited version for a number of years.  I should have just used gcc but I dislike Eclipse and the other free alternatives like VSCode hadn't yet gained popularity.

Now I'd like to install the free version of IAR on a new laptop, but as far as I can tell on their web site, there is no evaluation version except a 14-day time-limited license.  I'm 90% certain they've axed the code-size-limited "loophole" :(  but I guess it could still be buried on some page they've effectively hidden.  Anyone know for sure?

Edit:  It took 135 posts to get there, but @mwb1100 got the definitive answer from IAR:  this license type has been discontinued.  Therefore IAR is no longer an option for hobby / student / nonprofit projects.
« Last Edit: December 05, 2024, 12:56:01 am by mark03 »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #1 on: November 18, 2024, 01:35:01 am »
Does anyone know if IAR got rid of the code-size-limited (32 kB) free version of embedded workbench for ARM?  I used the full version of IAR in a previous job and got in the [bad] habit of doing my personal projects on the free code-size-limited version for a number of years.  I should have just used gcc but I dislike Eclipse and the other free alternatives like VSCode hadn't yet gained popularity.

Now I'd like to install the free version of IAR on a new laptop, but as far as I can tell on their web site, there is no evaluation version except a 14-day time-limited license.  I'm 90% certain they've axed the code-size-limited "loophole" :(  but I guess it could still be buried on some page they've effectively hidden.  Anyone know for sure?
I thought IAR only offered code size limited versions when they had a deal with a silicon vendor. Were you using the generic IAR for ARM, with support libraries for a wide range of vendors, or something vendor specific?
 

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #2 on: November 18, 2024, 04:49:54 pm »
I thought IAR only offered code size limited versions when they had a deal with a silicon vendor. Were you using the generic IAR for ARM, with support libraries for a wide range of vendors, or something vendor specific?
Yes, I believe this was the full-featured EW-ARM product (minus a few extras like their code safety checker, if I remember correctly).  It would have been 5-6 years ago now.
 

Offline neil555

  • Contributor
  • Posts: 42
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #3 on: November 18, 2024, 05:18:22 pm »
I would recommend Segger embedded studio, it's free for non commercial use and has no size limits.
 

Offline Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 4008
  • Country: nl
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #4 on: November 18, 2024, 05:29:17 pm »
Why use crippled software when you can use GCC?

Sure, it can be a nuisance to get started with GCC, but once you get through that, you've got a very wide landscape of options.
But that said, the commercial compiler vendors also need something to compete with, and they tend to have bundled libraries for USB & Ethernet stacks, MP3 player and LCD libraries, and such.
 
The following users thanked this post: Siwastaja

Offline cgroen

  • Supporter
  • ****
  • Posts: 642
  • Country: dk
    • Carstens personal web
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #5 on: November 18, 2024, 05:34:02 pm »
Keil(ARM) has a community version of their tool free for hobby use
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #6 on: November 18, 2024, 05:40:53 pm »
they tend to have bundled libraries for USB & Ethernet stacks, MP3 player and LCD libraries, and such.

Enabling quick proof-of-concept and then total destruction of the company when no one knows how to continue to the actual saleable product.
 
The following users thanked this post: bson

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #7 on: November 18, 2024, 08:20:14 pm »
Without getting into a full blown discussion of IDE alternatives, I will only say that yes, I use gcc, and my go-to IDE solution at the moment would probably be Visual Studio Code with appropriate plugins.  I wish the process of getting that up and running was about 95% less painful than it is, but this seems to be a constant in the embedded universe (like printing from Linux---negligible probability of being satisfactorily resolved in my lifetime).

Some people like vendor code and vendor IDEs.  Maybe there will come a day when they don't all suck.  I'm still waiting.

Regarding gcc (and veering a bit off topic), I have been curious to see how gcc, clang/llvm, and the proprietary compilers stack up, especially on the new SIMD instructions (Helium), and also on RISC-V and its equivalent extensions.  I have seen benchmarks which seem to indicate that gcc in particular is falling behind.  ARM apparently now targets clang for all of its improvements, and says that they may or may not ever make it into gcc.  (ARM's own commercial compiler is significantly better than both.)  Certainly, gcc is "good enough" for 99% of embedded work, but I do hope we are not regressing from the paid/free performance ratio that existed ten years ago; my impression is that at that time it was pretty close to 1.0.  Also, are there sufficiently "big guns" behind compiler development for RISC-V?  Or will it end up with a performance penalty merely for lack of compilers as good as ARM's?
« Last Edit: November 18, 2024, 08:24:04 pm by mark03 »
 

Offline mikerj

  • Super Contributor
  • ***
  • Posts: 3398
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #8 on: November 18, 2024, 08:30:53 pm »
I would recommend Segger embedded studio, it's free for non commercial use and has no size limits.

I'd second this, ES is very good.  We used Keil for many years at work and transitioned over to ES with no regrets at all.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #9 on: November 18, 2024, 10:51:44 pm »
Why use crippled software when you can use GCC?

Sure, it can be a nuisance to get started with GCC, but once you get through that, you've got a very wide landscape of options.
But that said, the commercial compiler vendors also need something to compete with, and they tend to have bundled libraries for USB & Ethernet stacks, MP3 player and LCD libraries, and such.
GCC itself is a fine tool. The problems come when you try debugging. That's kinda weak for a lot of embedded targets, compared to the better commercial tools.
 
The following users thanked this post: tooki

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #10 on: November 19, 2024, 07:15:22 am »
GCC itself is a fine tool. The problems come when you try debugging. That's kinda weak for a lot of embedded targets, compared to the better commercial tools.

Which forces you to use real, production-ready debugging strategies from day one, which becomes a huge timesaver in the long run.

Even if you are just a hobbyist, the time and grief saved by not having to learn a new point-and-click tool every 3 years, or for every microcontroller brand, is invaluable. Not to even speak of the fact that you can't reproduce all bugs on a lab table with a probe attached.

Start from "printf debugging" and extend from there as needed. The bottom line is, this is not the poor man's alternative; it's the other way around: these fancied "debug tools" are poor man's alternatives to real software practices. You do a colossal disservice to yourself by learning the wrong thing first; it is way more difficult to unlearn it later.

Single-stepping and watching memory in a debugger to figure out what the code does is an absolutely archaic strategy and should be discouraged, not touted as some kind of professional way of working (even though professional software development processes are sometimes ineffective).

For example, pretty much the whole internet with all of its complexity (the Linux kernel, networking stacks, etc.) is developed and managed using tools like GCC and not those "better commercial tools" (and trust me, if they were better, they would be used; e.g. Linus T. insisted on using a commercial versioning system, BitKeeper, when a suitable open source tool did not exist). And one of the classic mistakes young players (me in the past included) make is thinking that developing embedded software is somehow fundamentally different from developing something like the Linux kernel.

99% of your problems are higher level (than some peripheral register reacting unexpectedly to a write), therefore you should instrument and log at a higher level, but these fancy IDEs have no idea what your code means or how it is supposed to work. Single-stepping or adding breakpoints in a debugger is like digging a hole with a toothpick when you could use an excavator. Total stone age.

Validate function inputs. Log the calls and arguments. Log state changes. In your code, not depending on some point-and-click tool, because trying to reproduce a difficult problem on the lab table with a probe attached is a colossal waste of time.

Trust me. You don't need any of that tool hell. gcc + binutils and learning suitable software practices is all you need.
« Last Edit: November 19, 2024, 07:31:25 am by Siwastaja »
 
The following users thanked this post: AndersJ

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #11 on: November 19, 2024, 12:38:26 pm »
GCC itself is a fine tool. The problems come when you try debugging. That's kinda weak for a lot of embedded targets, compared to the better commercial tools.

Which forces you to use real, production-ready debugging strategies from day one, which becomes a huge timesaver in the long run.

Even if you are just a hobbyist, the time and grief saved by not having to learn a new point-and-click tool every 3 years, or for every microcontroller brand, is invaluable. Not to even speak of the fact that you can't reproduce all bugs on a lab table with a probe attached.

Start from "printf debugging" and extend from there as needed. The bottom line is, this is not the poor man's alternative; it's the other way around: these fancied "debug tools" are poor man's alternatives to real software practices. You do a colossal disservice to yourself by learning the wrong thing first; it is way more difficult to unlearn it later.

Single-stepping and watching memory in a debugger to figure out what the code does is an absolutely archaic strategy and should be discouraged, not touted as some kind of professional way of working (even though professional software development processes are sometimes ineffective).
I disagree with not needing a debugger at all. Especially when I work on inherited code or third-party libraries, I find a debugger a useful tool every now and then, just to see where a crash occurs or how data flows through the code. For example: at the moment I have a project where I need to modify code for a product developed somewhere in China, which has a boatload of communication layers stacked on top of each other. Setting a breakpoint at the data reception point and stepping through the code gives good insight into what the hell is going on. For other software issues in this product I use the communication interface to output status messages to check program flow.

All in all I like to use both methods. In some cases using a debugger is more convenient and in other cases using printf is more convenient. But this also depends on how well the debugger works. For STM32, debugging from Eclipse (CubeIDE) works pretty well using ST's own ST-Link. For ESP32 I'm more inclined to use printfs even in cases where using a debugger would be more efficient, as debugging the ESP32 from the SDK (Eclipse / GCC based) provided by Espressif is super flaky. Which circles back to the (IMHO valid) point coppice made that having good software tools to begin with is beneficial. Still, GCC and the associated tools are very good; in my experience most of the problems when debugging microcontrollers are in the software layer & hardware (JTAG/SWD interface) between GDB and the microcontroller hardware.

I do strongly agree with you though that good software starts with good coding practices.
« Last Edit: November 19, 2024, 12:45:34 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #12 on: November 19, 2024, 12:49:53 pm »
I disagree with not needing a debugger at all.

Me too. I seem to use it approx. once a year. For that, I use gdb's console interface, because it does not need much setup (but I have to spend an hour googling and trying to remember how it is used, because in a year I forget all the commands except run, print and quit).

Quote
Especially when I work on inherited code or third-party libraries, I find a debugger a useful tool every now and then, just to see where a crash occurs or how data flows through the code. For example: at the moment I have a project where I need to modify code for a product developed somewhere in China, which has a boatload of communication layers stacked on top of each other. Setting a breakpoint at the data reception point and stepping through the code gives good insight into what the hell is going on. For other software issues in this product I use the communication interface to output status messages to check program flow.

Sure, yeah, but note this is more like reverse engineering, or coping with a failed process out of necessity, than a description of how you should usually develop your own projects, if you have the choice. As a pragmatist I understand very well that this is sometimes needed, of course.
« Last Edit: November 19, 2024, 12:55:04 pm by Siwastaja »
 

Offline elektryk

  • Regular Contributor
  • *
  • Posts: 144
  • Country: pl
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #13 on: November 19, 2024, 02:05:51 pm »
I use a debugger almost every time; the only exceptions are MCUs which don't support it.
Code with printf() everywhere reminds me of some Arduino programs (especially those for 8-bit AVR) where it adds a lot of delay.
I experienced this when a program stopped working after commenting out some printf() calls.

All in all I like to use both methods. In some cases using a debugger is more convenient and in other cases using printf is more convenient. But this also depends on how well the debugger works. For STM32, debugging from Eclipse (CubeIDE) works pretty well using ST's own ST-Link. For ESP32 I'm more inclined to use printfs even in cases where using a debugger would be more efficient, as debugging the ESP32 from the SDK (Eclipse / GCC based) provided by Espressif is super flaky.

That's why I only use the ESP32 when I really need wireless connectivity; as a general-purpose MCU I still prefer the STM32.
« Last Edit: November 19, 2024, 02:14:07 pm by elektryk »
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #14 on: November 19, 2024, 02:41:08 pm »
Code with printf() everywhere reminds me of some Arduino programs (especially those for 8-bit AVR) where it adds a lot of delay.

Log into memory, and it's just a few instructions...

Quote
That's why I only use the ESP32 when I really need wireless connectivity; as a general-purpose MCU I still prefer the STM32.

... and define some sort of protocol to get that log through the radio. Bang, you are already much better off than with an SWD probe.

It isn't rocket science, but it requires a little bit of creativity to get used to. Then you can do pretty much anything you need. I like to write simple wrappers which insert the C file line number and ancillary data into a trace of events, which is then transferred over radio or Ethernet.

And remember you have no truly non-interfering debugger available anyway. The debugger competes for memory access cycles on a single-port RAM, and worse, reads peripheral registers where the read operation itself triggers an action (e.g. a FIFO pop), with the user wondering what the fuck is happening, when all you really needed to do was store the value you read into a variable and print that variable out when you have time.
« Last Edit: November 19, 2024, 02:44:54 pm by Siwastaja »
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #15 on: November 19, 2024, 02:48:18 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.
« Last Edit: November 19, 2024, 02:58:03 pm by 5U4GB »
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #16 on: November 19, 2024, 03:01:23 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.

Pretty extreme opinion, given that GCC is used in mission-critical code (whatever that could mean) all the time. Not a huge fan of GCC myself; everything has downsides and defects, and the alternatives probably seem better only because they are less scrutinized, due to seeing much less use. The safety-critical industry especially uses esoteric stuff all the time and carefully limits itself to specific constructs, workarounds and known limitations. Most importantly, they don't update their tools overnight without extensive testing.

Besides, I fail to see what the link you provided has to do with GCC. Plus the author clearly does not have the slightest clue about how high-level optimized languages like C and C++ are supposed to work. Did you post a wrong link accidentally, maybe?

C and C++ have clear defects as languages, but many people and companies seem to be able to cope with them; this is getting quite off-topic already, though. And the suggested "better" languages would also optimize out the maybeStop example, and possibly offer a standardized way to mark the stop variable as something hardware is allowed to modify, just like C does, so  :-//

this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.

Problems like this surface from time to time, and the GCC developers' attitude is sometimes very shitty when it comes to this "this is UB, we can break existing code in any way because it's not standard compliant" phenomenon. But you can rest assured every other compiler has occasional issues as well. I don't agree that problems like this completely prevent the use of gcc for "mission critical stuff".
« Last Edit: November 19, 2024, 03:39:25 pm by Siwastaja »
 
The following users thanked this post: newbrain, SparkMark

Offline elektryk

  • Regular Contributor
  • *
  • Posts: 144
  • Country: pl
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #17 on: November 19, 2024, 03:45:48 pm »
... and define some sort of protocol to get that log through the radio. Bang, you are already much better off than using SWD probe.

Good idea but that's also not always possible.

And remember you have no truly non-interfering debugger available anyway. The debugger competes for memory access cycles on a single-port RAM, and worse, reads peripheral registers where the read operation itself triggers an action (e.g. a FIFO pop), with the user wondering what the fuck is happening, when all you really needed to do was store the value you read into a variable and print that variable out when you have time.

Also, code compiled with -Og/-O0 may behave differently than with -Os/-O3, but it is nice to know the limitations of the various debugging methods.
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #18 on: November 19, 2024, 04:10:16 pm »
Why use crippled software when you can use GCC?

Sure, it can be a nuisance to get started with GCC, but once you get through that, you've got a very wide landscape of options.
But that said, the commercial compiler vendors also need something to compete with, and they tend to have bundled libraries for USB & Ethernet stacks, MP3 player and LCD libraries, and such.

Wish Clang would get more love...
GCC is pretty weak these days when it comes to C. Microchip's new compiler comes with a clangd server for VS Code integration; it's such a welcome addition. Much better and more useful warnings, which you simply can't get from GCC because of how it's implemented... such as knowing which functions are actually never called (whereas gcc's linker option to remove unused sections may backfire very badly on indirect calls), or typos in sizeof, pointer arithmetic, ...
Which we also had, similarly, on commercial compilers, just not with GCC.

I wonder how many bugs we would not have if we used Clang + static analysis instead of GCC + static analysis.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #19 on: November 19, 2024, 05:35:33 pm »
Problems like this surface from time to time, and the GCC developers' attitude is sometimes very shitty when it comes to this "this is UB, we can break existing code in any way because it's not standard compliant" phenomenon. But you can rest assured every other compiler has occasional issues as well. I don't agree that problems like this completely prevent the use of gcc for "mission critical stuff".

It doesn't rule out gcc, but what it does do is make people compile at less-than-maximal levels of optimization, just to avoid being bitten by their "LOL that's UB we'll do whatever we want" attitude. 

At a time when safer code is called for, gcc has elected to compete by producing faster code, egged on by benchmark fanatics rather than actual customers.  Good job, guys... we'll all be forced to write Ada by the time you're done.  But hey, those elided while(1) loops will really run fast.
 
The following users thanked this post: 5U4GB

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #20 on: November 19, 2024, 05:40:19 pm »
I disagree with not needing a debugger at all.
I agree that debugging itself is something not everyone needs. However, these days the ability to get code in and out of an MCU is usually embedded in the debugger, so you at least need that.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #21 on: November 19, 2024, 05:52:18 pm »
Problems like this surface from time to time, and the GCC developers' attitude is sometimes very shitty when it comes to this "this is UB, we can break existing code in any way because it's not standard compliant" phenomenon. But you can rest assured every other compiler has occasional issues as well. I don't agree that problems like this completely prevent the use of gcc for "mission critical stuff".

It doesn't rule out gcc, but what it does do is make people compile at less-than-maximal levels of optimization, just to avoid being bitten by their "LOL that's UB we'll do whatever we want" attitude. 

At a time when safer code is called for, gcc has elected to compete by producing faster code, egged on by benchmark fanatics rather than actual customers.  Good job, guys... we'll all be forced to write Ada by the time you're done.  But hey, those elided while(1) loops will really run fast.
Mistakes like this happen at every level. People just don't foresee what effect some code has.

Years ago I reported a bug in the Linux kernel. I had a problem with a SoC which wouldn't always restart after a reset. It turned out that the reset code didn't reset the power management to supply the nominal voltage to the CPU (no de-init on reset). So when a reset happened during low frequency + low voltage, the voltage would remain too low and the CPU would not start reliably. The reply I got from one of the kernel maintainers was: hmm, we removed the de-init calls before reset because we assumed this would not be needed, but it does explain some weird effects we see on various platforms (including PCs).
« Last Edit: November 19, 2024, 05:56:38 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #22 on: November 19, 2024, 07:22:54 pm »
It doesn't rule out gcc, but what it does do is make people compile at less-than-maximal levels of optimization, just to avoid being bitten by their "LOL that's UB we'll do whatever we want" attitude. 

At a time when safer code is called for, gcc has elected to compete by producing faster code, egged on by benchmark fanatics rather than actual customers.  Good job, guys... we'll all be forced to write Ada by the time you're done.  But hey, those elided while(1) loops will really run fast.

Typical alarmism. There is really no need to resort to defaulting to worse optimization levels because of this kind of fearmongering. I mean, bugs happen; this is why we have processes for writing better software, and testing to catch these bugs. It's not like we are talking about GCC being buggy; we are talking about gcc being unhelpful in detecting some bugs caused by programming errors, which is a huge difference to begin with. And usually gcc is quite helpful, though there have been a few cases where they have gone too far and been assholes about it; still, the right thing to do is to fix your broken code and go on with your life. And of course report bugs if you find actual compiler bugs, but that is quite rare.

But really, usually the story is some horrible, randomly cobbled together, untested, unmaintained spaghetti codebase which can fail on any compiler and any optimization setting. Then, instead of fixing it, it's easier to throw a temper tantrum at compiler optimizations, as compiling at -O0 seemingly fixes it; and this tantrum is easily fueled by some googling that reveals blog posts with critique against GCC, some of it deserved. But it is all unnecessary rationalization: you should be spending the time fixing the code, not explaining it away.

Plus, of course, the good old tale about C supposedly being a "portable assembler" still lives strong, regardless of that idea having been completely dismissed by the C abstract machine concept already in the 1989 standard.

C is like any other high-level language; for example, a C compiler will optimize away a non-volatile-qualified variable which holds a constant value, like any other sensible modern language. Deal with it; any other compiler will do it too, except some super archaic one.

I have used GCC* for various projects for years and have never needed to decrease the optimization level to solve a bug, and I'm not a particularly excellent programmer. I don't even use the "decrease the optimization just to see if the bug goes away" faultfinding strategy; I think it's a horrible strategy because it never leads to any particular point in the code. Just good old debugging strategies: think, validate, log, follow the leads, and you will find any bug. And if necessary, rewrite and simplify.

*) because I have been too lazy to give clang a try, I probably should
« Last Edit: November 19, 2024, 07:29:39 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #23 on: November 19, 2024, 07:27:34 pm »
But really, usually the story is some horrible randomly cobbled together untested, unmaintained spaghetti codebase which can fail on any compiler and any optimization setting, and then instead of fixing it, it's easier to throw a temper tantrum at compiler optimizations, as compiling at -O0 seemingly fixes it, and this tantrum is easily fueled by some googling revealing blog posts with well-deserved critique against GCC.
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work not working with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial and reject the fix.
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #24 on: November 19, 2024, 07:32:41 pm »
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work not working with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial and reject the fix.

This is similar to marginal electronic designs which work on a lab table, but stop working when you get a different batch of an IC or transistor. At that point the engineer either accuses the supplier or manufacturer of supplying "bad parts", or looks in the mirror.

The "we must use -O0 from now on" tantrum equivalent would be noticing that running the thing in a fridge makes it work again, and then complaining all over the internets that manufacturer X is crap because they force us to run our electronics inside fridges.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #25 on: November 19, 2024, 09:03:18 pm »
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work no longer working with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial, and reject the fix.

Meanwhile, the code doesn't work.  But at least it's fast.
 

Offline temperance

  • Frequent Contributor
  • **
  • Posts: 714
  • Country: 00
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #26 on: November 19, 2024, 09:24:25 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.

Are you allowed, or do you allow yourself, to just replace or upgrade the compiler when writing code for a mission-critical system, such that while(1) loops mysteriously disappear at some point? I would think that as a developer of such code you would know and understand the tools you are working with inside out, and that replacing the compiler with the latest version would require you to study the manual thoroughly before anything is replaced, precisely to avoid such mysterious problems.
 
The following users thanked this post: Siwastaja

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #27 on: November 19, 2024, 09:29:06 pm »
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work no longer working with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial, and reject the fix.

Meanwhile, the code doesn't work.  But at least it's fast.
Of course not. They either insist on using an older compiler until the latest one is "fixed", or they set up certain components of the project to compile with a lower optimisation level which can be "trusted".
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #28 on: November 19, 2024, 09:36:10 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.
So, you are unhappy that a later compiler does a better job, and you didn't signpost the odd behaviour you expected, so it got optimised away. This is like all the "bugs" people whine about when they haven't put "volatile" in all the right places, or haven't read what volatile actually means in the C spec. This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.
 
The following users thanked this post: newbrain

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #29 on: November 19, 2024, 09:55:54 pm »
As the saying goes:
Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #30 on: November 19, 2024, 10:24:00 pm »
As the saying goes:
Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.

Or until he runs out of dynamite.
 
The following users thanked this post: 5U4GB

Offline temperance

  • Frequent Contributor
  • **
  • Posts: 714
  • Country: 00
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #31 on: November 20, 2024, 01:19:22 am »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.
So, you are unhappy that a later compiler does a better job, and you didn't signpost the odd behaviour you expected, so it got optimised away. This is like all the "bugs" people whine about when they haven't put "volatile" in all the right places, or haven't read what volatile actually means in the C spec. This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.


A poor understanding of compilers and of the subtleties of a programming language are exactly the things which books and courses on the subject like to skip. After reading a book on C, most people think they are ready to attack whatever problem comes along. This is very far from the truth. You have to understand the compiler, the machine architecture and its instruction set, recognize their limitations, and learn to work within them. Without this insight anything can and will go wrong sooner or later. Maybe books and courses should present not working code but carefully crafted broken code on which you have to get your hands dirty.
 
The following users thanked this post: rhodges

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #32 on: November 20, 2024, 04:33:52 am »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.

Pretty extreme opinion, given that GCC is used in mission-critical code (whatever that could mean) all the time.

Many embedded dev environments that use gcc ship pretty ancient versions that wouldn't have done any of this stuff at the time. It's an absolute minefield: code that was carefully written and tested to handle exceptional error conditions will appear to work as expected when compiled with a newer version, until one of the once-in-a-blue-moon exception conditions is hit, at which point things blow up because the newer compiler release has removed the safety checks / error handling.

Some safety-critical stuff will actually specify exact compiler versions that the code is to be built with in order to avoid this problem.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #33 on: November 20, 2024, 04:41:47 am »
*) because I have been too lazy to give clang a try, I probably should

In many aspects it behaves pretty much like gcc, in particular the "Haha, gotcha! We've detected UB, we'll now do whatever we want", but I've found the diagnostics to be much better, vastly so for the clang analyzer which actually produces useful diagnostics vs. gcc's fifteen-page traces that end up telling you nothing.  I'd definitely give clang a try.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #34 on: November 20, 2024, 04:57:25 am »
This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.

There's always one.  Some time ago when someone did this, i.e. played the "it's everyone on earth's fault for writing bad code and not the compiler"*** card, I ran some of their code (they were a well-known open-source person) through an experimental SAST tool that tried to find UB, and it found quite a lot of UB in their code.  Glass houses and all that...

When I first ran into the array-bounds-check post I linked to earlier I tested it on a bunch of experienced C developers, at least one of whom started programming (or at least debugging) by toggling front-panel switches on a PDP-11.  Even though they knew, or at least suspected, that there was something hidden in there, none of them could figure out what the code would actually do when compiled, and that was with me more or less pointing at the thing and saying "find the booby-trap".  So if experienced devs can't find the booby-trap when it's waved under their noses, imagine how hard it must be when it's buried in 200kloc, and what else might be in that 200kloc.

*** I know there's an awful lot of crap code out there, but there's also quite a bit of very carefully-written code where the devs simply aren't expecting the compiler to break things based on "yippee, we found some UB, now you're in for it!".
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #35 on: November 20, 2024, 06:25:09 am »
Meanwhile, the code doesn't work.  But at least it's fast.

And somehow, fixing the code is out of question?
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #36 on: November 20, 2024, 06:35:42 am »
"Haha, gotcha! We've detected UB, we'll now do whatever we want"

I mean, that is the only thing to do, by the very definition! They have to choose what they do, based on their own preferences and reasoning, because the standard does not give even a suggestion. I understand questioning their choices, but I don't understand questioning the fact that they are making choices.

Now the real question is what they choose to do, and that determines whether it is really a "gotcha" or not. A performance optimization which produces an outcome the original programmer did not want 99% of the time on real codebases would clearly be wrong. But what if the outcome on real codebases is right 50% of the time?

There is nothing wrong with this as long as the choice is driven by common sense and serves real-world programmers, and to me it seems this is the case 99% of the time. There are those "gotcha" incidents (in the worst cases some innocent-looking UB propagating into a different part of the code flow in a way which is difficult to understand even for an experienced programmer), but there is no proof these are the driving factor of all gcc development, as you make it sound.

Somehow most users of gcc, and there are a lot of them, survive without ever stepping on these "mines" - "absolute minefield" sounds like alarmist exaggeration. This includes projects like the linux kernel that would notice those crappy choices. There have been a few events in the past where kernel developers were quite mad at choices made by the gcc folks, sure, but for you and me it's usually best to look in the mirror and fix our own code.
« Last Edit: November 20, 2024, 06:39:52 am by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #37 on: November 20, 2024, 07:47:13 am »
The code "breaking" in observable/testable ways when it's not correct (/relies on UB behaving in a certain way with a certain toolchain with a certain version) is actually the best thing you can hope for.
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #38 on: November 20, 2024, 08:27:12 am »
The code "breaking" in observable/testable ways when it's not correct (/relies on UB behaving in a certain way with a certain toolchain with a certain version) is actually the best thing you can hope for.

And some while(1) either disappearing when expected to be there, or appearing when not expected, or whole large parts of the program completely disappearing, are usually obvious during testing, even poor-quality testing. Sure, a diagnostic message would be better, but a large visible difference in the program itself is nearly as good. That's why I'm wondering why these kinds of effects get the most hate.

Now, a bounds check being optimized out, and an out-of-bounds access happening because of that, is crazy. That would be hard to find in testing, because programmers assume that since they already added the check (as part of the program) and it worked before, they don't need a separate unit test for the same thing.
« Last Edit: November 20, 2024, 08:28:59 am by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #39 on: November 20, 2024, 08:34:30 am »
That being said, I'm wondering about this 'while (1)' thing. What kind of optimization could possibly optimize an infinite loop away? I can't think of one right now.
Finite loops, OTOH, absolutely, if the body of the loop has no effect.

So, unless you have a 'while (1)' loop that actually has an exit path in its body, and said path is statically determined by the compiler to be true at some iteration, while the rest of its body has no effect. Which would make it a finite loop.

 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #40 on: November 20, 2024, 08:57:34 am »
We've done this before in a long previous thread - stuff like optimisation detecting loop structures and replacing them with e.g. calls to memcpy. In embedded, there can be reasons why that is not desired. The hacks to prevent this are not always obvious. One such:
https://www.eevblog.com/forum/programming/gcc-arm32-compiler-too-clever-or-not-clever-enough/msg4121254/#msg4121254

The case of losing most of your code because you are not referencing main() anywhere (but instead locating it at a given address and jumping to it e.g. from a boot loader) is comical, and has comical work-arounds like inserting main() into an unused vector table location, which is asm code, and asm never gets optimised.

What I don't like is successive compiler versions introducing new warnings, or changing the default settings for warnings. That creates work which you do not need if you have a working product which you are shipping! And since GCC is a continuously moving target, you cannot possibly - in any real-world business environment - chase the latest version for ever. At some point you have to freeze the tools (unless your employer is stupid and just keeps paying you for unproductive work). I use Cube IDE and froze it at 1.14.1, GCC v11, and that's it. I have not seen any advantage of any later version. But then I own my business, have done so since 1978, and have to run appropriate priorities, and while chasing new warnings would put bread on the table of an employee, it won't do so for me :)

Same goes for any modules like LWIP, MBEDTLS, you name it. All are moving targets. Often, like with MbedTLS (yes I am on their mailing list), they move mostly sideways, because the devs have run out of genuinely useful things to do, or they live in the standard "internet security groupthink" and don't actually build real products.

The real world is not perfect, no coder is perfect, and any significant chunk of code may have some UB, and a working product is relying on that being compiled in a certain way. Reality has no room for purists :)

So whatever tools works for you, use it...

The gotchas are

- will it run under later versions of windows (I address that using VMWARE, and run some c. 1995 tools that way)
- will it support later CPUs (probably no solution for that one)

and the drift in software is for floating licenses and such, which makes archiving a project nearly impossible. In the 1990s they used dongles which have the same problem (they eventually break) but patching them out was usually easy.
« Last Edit: November 20, 2024, 09:22:54 am by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #41 on: November 20, 2024, 12:02:55 pm »
Same goes for any modules like LWIP, MBEDTLS, you name it. All are moving targets. Often, like with MbedTLS (yes I am on their mailing list), they move mostly sideways, because the devs have run out of genuinely useful things to do, or they live in the standard "internet security groupthink" and don't actually build real products.

Security is a bit of a special case.  With LWIP once you've got your TCP stack running and reasonably tuned you can pretty much leave it alone modulo occasional bugfixes and maybe some fiddling for stability and reliability.  OTOH security is a Red Queen problem, you're constantly running to catch up with whatever random thing someone has dreamed up and decreed via Simon-Says, the most recent one being "Simon Says PQC!".  As Linus put it (although he was talking about schedulers vs. security rather than networking stacks vs. security), "the difference between them is simple: one is hard science. The other one is people wanking around with their opinions".

I've never seen IT security described so succinctly.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #42 on: November 20, 2024, 12:11:32 pm »
And some while(1) either disappearing when expected to be there, or appearing when not expected, or whole large parts of the program completely disappearing, are usually obvious during testing, even poor-quality testing.

In the case of the stuff I was referring to, they were reserved for should-never-occur situations that were often very difficult if not impossible to create during testing, for the very reason that they were should-never-occur conditions. Typically it would require something like a hardware fault for the system to end up triggering a restart in this manner. Apologies for being a bit vague, but it's been a while since I looked at that particular code base.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #43 on: November 20, 2024, 01:21:54 pm »
This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.

There's always one.  Some time ago when someone did this, i.e. played the "it's everyone on earth's fault for writing bad code and not the compiler"*** card, I ran some of their code (they were a well-known open-source person) through an experimental SAST tool that tried to find UB, and it found quite a lot of UB in their code.  Glass houses and all that...

When I first ran into the array-bounds-check post I linked to earlier I tested it on a bunch of experienced C developers, at least one of whom started programming (or at least debugging) by toggling front-panel switches on a PDP-11.  Even though they knew, or at least suspected, that there was something hidden in there, none of them could figure out what the code would actually do when compiled, and that was with me more or less pointing at the thing and saying "find the booby-trap".  So if experienced devs can't find the booby-trap when it's waved under their noses, imagine how hard it must be when it's buried in 200kloc, and what else might be in that 200kloc.

*** I know there's an awful lot of crap code out there, but there's also quite a bit of very carefully-written code where the devs simply aren't expecting the compiler to break things based on "yippee, we found some UB, now you're in for it!".
So, even good people, with a good understanding, have bugs in their code? I'm shocked. Shocked, I tell you. The world must be coming to an end.

This is one of the most BS arguments imaginable. If you need your code to be rock solid stable you don't change compilers. That's why certain versions of most tool chains are declared "long term support" versions. If you change versions, if you change tool chains, if you change ISAs you are back at square one with your system testing. Once you decide to move to a newer or different tool chain you should expect to do some very serious testing. When anything breaks, don't say the tool chain is broken, unless you are really sure. 30 years ago we used to find nasty compiler bugs a lot. Today we don't. They are actually quite rare, unless the compiler is at a very immature stage. What is really common is code that works by luck, rather than judgement. Simply using -O0 doesn't guarantee it will work. It just reduces the chances of breakage quite a bit. When I have investigated, and properly addressed, issues that break code with a high optimisation level I have mostly found things that were just about hanging together, which any change in the tool chain might have broken. They were just more likely to break with high optimisation.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #44 on: November 20, 2024, 01:40:42 pm »
What I don't like is successive compiler versions introducing new warnings, or changing the default settings for warnings. That creates work which you do not need if you have a working product which you are shipping! And since GCC is a continuously moving target, you cannot possibly - in any real-world business environment - chase the latest version for ever. At some point you have to freeze the tools (unless your employer is stupid and just keeps paying you for unproductive work). I use Cube IDE and froze it at 1.14.1, GCC v11, and that's it. I have not seen any advantage of any later version. But then I own my business, have done so since 1978, and have to run appropriate priorities, and while chasing new warnings would put bread on the table of an employee, it won't do so for me :)
What is wrong with adding new warnings? More information from the compiler is usually a good thing. It's specifically changing the meaning of existing settings where things get nasty. If you break my make files you are a bad person. GCC has been guilty of this a few times, and the rest of the GNU tool chain a lot more. If you try to rebuild old code, a newer tool chain typically throws out a bazillion lines of whining. The vast majority of this whining is not about genuine problems, but about constructs you probably wouldn't use in new code for reasons of clarity rather than correctness.

For embedded users there are a number of reasons for sticking with a version of GCC that works for you. They focus so much performance testing on powerful CPUs that they often release new versions that produce much worse code for simpler CPUs. This is an area where bit rot due to rampantly changing how things work has created numerous problems. GCC 3.x is a great choice for simpler cores, like AVR and MSP430. Nothing after that performs as well. If you try to build the GCC 3.x tools chains on a modern machine the number of failures would turn the rebuild into a major project. Their own tool chain is a great example of this problem of changing the meaning of settings.
 
The following users thanked this post: spostma

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #45 on: November 20, 2024, 01:59:29 pm »
Quote
OTOH security is a Red Queen problem, you're constantly running to catch up with whatever random thing someone has dreamed up and decreed via Simon-Says, the most recent one being "Simon Says PQC!".

This is gonna be a long off-topic diversion ;) but - while I probably agree with you - IOT boxes should never be on an open port. They should always have an absolutely minimal attack surface. They should be clients, not servers. Yes, we had a long thread on this too, with people correctly claiming that if you compromise the public-facing server to which all the IOT clients connect, you have compromised all the clients. Well, yes, but that is another level harder. That is why I think pretending MbedTLS is somehow "secure" is a waste of time. The whole box should never be on an open port in the first place! You just don't know if LWIP itself has some buffer-overrun vulnerability, etc. Or even the CPU itself, with its ETH subsystem and its complicated chained-buffers scheme.

Quote
What is wrong with adding new warnings? More information from the compiler is usually a good thing

I agree, but it still represents a time investment. You have to set up the project on an isolated PC (or inside a VM) and install the latest tools, and then chase down the warnings. If the product is working OK, those warnings will 99% likely be spurious.

Quote
GCC 3.x is a great choice for simpler cores, like AVR and MSP430.

Gosh, what year was that in?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #46 on: November 20, 2024, 02:27:31 pm »
Quote
What is wrong with adding new warnings? More information from the compiler is usually a good thing
I agree, but it still represents a time investment. You have to set up the project on an isolated PC (or inside a VM) and install the latest tools, and then chase down the warnings. If the product is working OK, those warnings will 99% likely be spurious.
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer, new warnings need to have new controls, apart from a blanket "Give me all the warnings you have" option, which no long term make files should use. The only good excuse for old stuff not building might be if they added some kind of "whine like it's GCC x.y" command line option, so it's easy to see how to restore clean building with older projects.
Quote
Quote
GCC 3.x is a great choice for simpler cores, like AVR and MSP430.
Gosh, what year was that in?
The year the GCC developers wed themselves to the x86 line (and perhaps anything else as complex as an x86) and everyone else had to just make do.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #47 on: November 20, 2024, 02:47:39 pm »
Quote
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer, new warnings need to have new controls

Sorry - typing too fast...

I am pretty sure GCC did change some stuff recently. I also recall a linker change, complaining about executable segments in the ELF file, and fixing those was not trivial.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #48 on: November 20, 2024, 03:46:58 pm »
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer, new warnings need to have new controls, apart from a blanket "Give me all the warnings you have" option, which no long term make files should use.

I would recommend adding new warnings to use, though. They could reveal bugs that have been always there.

"It's working" argument is bullshit, usually. I mean, how do you know it is working? Maybe 10000 customers are using it, and maybe they have intermittent problems they deal with, which decreases the product quality experience, but not enough for them to make formal bug reports? Maybe they don't know how to report. Maybe they are blaming themselves for "doing something wrong" and working around the bugs?

Enabling more/better warnings from newer tools and going through the warnings is, IMHO, time well spent. If you don't have resources to do that, then, obviously, don't touch anything, don't update your toolchain.

But if you have even a little bit of extra resources to spend on software quality improvements, enabling more warnings seems like a pretty low hanging fruit.
 
The following users thanked this post: SiliconWizard

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #49 on: November 20, 2024, 04:30:18 pm »
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer, new warnings need to have new controls, apart from a blanket "Give me all the warnings you have" option, which no long term make files should use.

I would recommend adding new warnings to use, though. They could reveal bugs that have been always there.

"It's working" argument is bullshit, usually. I mean, how do you know it is working? Maybe 10000 customers are using it, and maybe they have intermittent problems they deal with, which decreases the product quality experience, but not enough for them to make formal bug reports? Maybe they don't know how to report. Maybe they are blaming themselves for "doing something wrong" and working around the bugs?

Enabling more/better warnings from newer tools and going through the warnings is, IMHO, time well spent. If you don't have resources to do that, then, obviously, don't touch anything, don't update your toolchain.

But if you have even a little bit of extra resources to spend on software quality improvements, enabling more warnings seems like a pretty low hanging fruit.
I agree. The whole point of the additional warnings is to highlight additional potential problems, and ultimately those warnings should be addressed if the project has a long-term future. The problem is that when you go from a clean compile to a flood of complaints from the tools, it's very hard to know where to start. If you make the thousands (no exaggeration; thousands is typically on the low side) of source code changes needed to remove the warnings from an older project on a recent toolchain, you are going to make at least a few errors, and the project will be broken. You need a way to easily get back to a clean build, so you can move forward incrementally with the necessary changes, and test along the way.
 
The following users thanked this post: Siwastaja

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 4059
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #50 on: November 20, 2024, 07:15:21 pm »
That being said, I'm wondering about this 'while (1)' thing. What kind of optimization could possibly optimize an infinite loop away, I don't know. An infinite loop is not something any optimization can get rid of, that I can think of right now.
Finite loops, OTOH, absolutely, if the body part of the loop has no effect.

So, unless you have a 'while (1)' loop that actually has an exit path in its body, and said path is statically determined by the compiler to be true at some iteration, while the rest of its body has no effect. Which would make it a finite loop.

There is no real reason to specifically optimize out an infinite loop.  However, it's valuable to optimize out an empty, finite loop.  For instance, a loop might have its entire body hoisted out of the loop, resulting in an empty loop.  Furthermore, it's useful to optimize out an empty loop even if it can't be proven to terminate (this could happen due to the loop counter being a different type than the bounding expression).  To support this (and other optimizations related to concurrency) the C and C++ standards state that an infinite loop with no side effects is undefined behavior.  This allows compilers to assume that all loops with an empty body will eventually terminate and remove them.

Since people want to write infinite loops, or finite empty loops for time delays, compilers sometimes try to preserve these as a special case, but that always risks false positives and false negatives.  The correct way to implement this is to put a volatile access or volatile inline asm (even a nop) inside the loop body.  This will mark the loop body as observable behavior and prevent the loop from being removed.
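A minimal host-runnable sketch of that fix (the names here are made up for illustration, not from any particular SDK): the volatile access in the body is what makes the loop observable, so the optimizer must keep it even though the trip count is statically known.

```c
#include <stdint.h>

/* A counter with the volatile qualifier: every increment is an observable
 * access, so a loop whose body touches it cannot be deleted by the
 * optimizer, even at -O2 with a statically known trip count. */
static volatile uint32_t busy_iterations;

/* Crude cycle-burner: not calibrated to wall-clock time in any way, it
 * merely guarantees that n iterations actually execute. */
static void busy_delay(uint32_t n)
{
    for (uint32_t i = 0; i < n; i++) {
        busy_iterations++;   /* the volatile store keeps the loop alive */
    }
}
```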
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #51 on: November 20, 2024, 07:20:02 pm »
You cut off the bit that said the control for warnings should not change (so older projects still build), completely changing my meaning. To be clearer, new warnings need to have new controls, apart from a blanket "Give me all the warnings you have" option, which no long-term makefiles should use.

I would recommend enabling the new warnings, though. They could reveal bugs that have always been there.

"It's working" argument is bullshit, usually. I mean, how do you know it is working? Maybe 10000 customers are using it, and maybe they have intermittent problems they deal with, which decreases the product quality experience, but not enough for them to make formal bug reports? Maybe they don't know how to report. Maybe they are blaming themselves for "doing something wrong" and working around the bugs?

Enabling more/better warnings from newer tools and going through the warnings is, IMHO, time well spent. If you don't have resources to do that, then, obviously, don't touch anything, don't update your toolchain.

But if you have even a little bit of extra resources to spend on software quality improvements, enabling more warnings seems like a pretty low hanging fruit.
I agree. The whole point of the additional warnings is to highlight additional potential problems, and ultimately those warnings should be addressed if the project has a long-term future. The problem is that when you go from a clean compile to a flood of complaints from the tools, it's very hard to know where to start. If you make the thousands (no exaggeration; thousands is typically on the low side) of source code changes needed to remove the warnings from an older project on a recent toolchain, you are going to make at least a few errors, and the project will be broken. You need a way to easily get back to a clean build, so you can move forward incrementally with the necessary changes, and test along the way.
Yep. But it doesn't hurt to fix the most pressing warnings. In these cases I do take the time to go through the warnings and see if there are possible casting (size), pointer arithmetic and buffer overrun issues.

But these are rather simple things to deal with. Things get worse when warnings become hard errors in newer compiler versions. A while ago I wanted to compile an older GCC (for a legacy project) with a relatively new GCC and that didn't work out of the box. The newer GCC doesn't allow re-definition of a function which older GCCs let slide with a warning. This meant patching the old GCC version in a few places. Nothing major; just cleaning up definitions which should have been done correctly right from the start.
« Last Edit: November 20, 2024, 07:42:58 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #52 on: November 21, 2024, 09:56:41 am »
This is gonna be a long off topic diversion ;) but - while I probably agree with you - IOT boxes should never be on an open port. They should always have an absolutely minimal attack surface.

Oh, absolutely.  Some of the most secure devices I've ever audited were written by embedded-systems developers (by which I mean RTOSes and bare-metal, not Linux shovelled onto a generic SoC-based device) who had little to no security experience but created implementations with almost no attack surface.  The usual quick-start of fuzzing the TCP stack or TLS implementation or whatever to justify further investigation never worked because everything just bounced off; it wasn't until we went to line-by-line checking of the code that we found a few minor and easily-remedied issues.  So you had code that definitely wasn't the best quality stuff ever created that was nevertheless almost immune to any kind of attack because there was nothing there to attack - replacing 20,000 lines of near-incomprehensible X.509 parsing and processing code with a memcpy() eliminates an awful lot of attack surface.
 
The following users thanked this post: peter-h

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #53 on: November 21, 2024, 08:27:11 pm »
So, unless you have a 'while (1)' loop that actually has an exit path in its body, and said path is statically determined by the compiler to be true at some iteration, while the rest of its body has no effect. Which would make it a finite loop.

Since people want to write infinite loops, or finite empty loops for time delays, compilers sometimes try to preserve these as a special case, but that always risks false positives and false negatives.  The correct way to implement this is to put a volatile access or volatile inline asm (even a nop) inside the loop body.  This will mark the loop body as observable behavior and prevent the loop from being removed.

Since we are completely off topic now, I may as well join in :)

Finite empty loops for time delays are handy while debugging other issues but I would not use them in production code.  And while debugging I would not be compiling with optimizations.

OTOH, busy-waiting on a status register bit (again with an empty loop body) is a pretty standard use case for things like clock configuration while waiting for a PLL to lock.  It wouldn't make any sense to use a fancier approach with interrupts for init code which only runs at startup.  Is the claim that an optimizing compiler *should* be able to remove such a loop?  What if the register is marked volatile in the CMSIS header file (pretty sure it is)?  If the standard says that something like "while (periph->reg & 0x20) {}" (when periph is a pointer to volatile uint32 e.g.) is undefined behavior and fair game for elimination, then I would respectfully submit that the standard needs improvement. :box:

But I don't think this is what is being claimed...  is it??  Just trying to understand since apparently, I'm being told that even the experts are getting this wrong ::)
« Last Edit: November 21, 2024, 08:29:06 pm by mark03 »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #54 on: November 21, 2024, 10:10:43 pm »
I have used infinite loops on register bits in unusual cases where it cannot hang unless the silicon is faulty e.g.

Code: [Select]
// Start a conversion
ADC1->SR = ~(ADC_FLAG_EOC | ADC_FLAG_OVR);
ADC1->CR2 |= (uint32_t)ADC_CR2_SWSTART;
// Wait for End of conversion flag (bit 1 in SR)
// This could HANG but only if the silicon is defective or perhaps if the ADC was not enabled
while (((ADC1->SR) & ADC_FLAG_EOC)==0) {}

Never seen these optimised away, AFAICT. But then would I know? ;)

Yes indeed registers are all declared volatile in the .h files.

AIUI, a loop containing a volatile variable should never be optimised away... or can it?

I know you can have fun with optimisation stripping out code which then stops debugging working, so I have used e.g.
asm("nop");   
to create code on which a breakpoint can be safely set.
« Last Edit: November 21, 2024, 10:13:18 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #55 on: November 21, 2024, 10:25:43 pm »
I think my earlier point was still not fully understood.

Finite loops and infinite loops have nothing to do with each other.

An infinite loop (such as 'while (1)' with no exit path inside its body, or equivalently 'for (;;)') can't be optimized out, as it blocks the execution flow. Optimizing it out would be such a severe compiler bug that the compiler would need to be thrown away.

As long as a loop has an exit path, it is NOT an infinite loop. But from experience and from what can be seen in this thread, there is some common misnaming of finite and infinite here. It seems that many people will call a loop with a simple condition based on a counter with a fixed range a "finite" loop, and anything else an "infinite" loop. Which isn't correct IMO. And that leads to confusion ("unfortunate shortcuts") regarding optimizations.

'while (1) {}' is an infinite loop, has no exit condition whatsoever, and can never be optimized out, as it blocks execution flow. What happens inside the loop body doesn't matter - it can't get out.

'for (int i = 0; i < 100; i++) {}' can definitely be optimized out. It has an exit condition that can be determined statically and no effect.
What confuses some developers is that execution time itself is NEVER considered an observable effect in the language specification in C, and many (if not most) other languages higher level than assembly. (Well, there are even some assemblers that can optimize instructions, so...) For embedded developers in particular, this point often sounds confusing, as timing in embedded development is key. But C and other similar or higher-level languages have no notion of timing whatsoever.

The only "right" way of writing a delay loop in C is to either use a volatile-qualified counter, such as: 'for (volatile int i = 0; i < 100; i++) {}', or do something that is considered to have an effect in the loop body - which may not be trivial. When coding for MCUs, a relatively common example of this is to use some kind of 'nop' (which is usually defined as inline assembly with a 'nop' instruction for the given target, qualified volatile so that it itself can't be optimized out - something like 'asm volatile ("nop")' ). So that would look like: 'for (int i = 0; i < 100; i++) { nop(); }'. In either case, it will still be a 'hack' when it comes to obtaining a particular delay, but it will spend some execution time for sure. For the record, the version with the volatile-qualified counter is usually the more expensive one, as it's most often implemented with a counter placed on the stack, read and written at each iteration, which is usually more expensive than a typical "nop" instruction, while the version without the volatile qualifier will usually keep the counter in a register.

As to busy loops reading some MCU "register" (not to be confused with the CPU registers themselves): yes, as long as said register is declared volatile, the loop will never be optimized out either. That's guaranteed to work, and that's the reason register definitions are all marked volatile in well-written C: all accesses are guaranteed to be honored by the compiler.

Note that with some compilers, dereferencing pointers that are cast from integer values (which can usually be considered direct "addresses" on many targets), even when not qualified volatile, does act as though it were. That may be one reason some developers have seen that not using volatile for "register definitions" works, and have concluded that "volatile" can be omitted. I wouldn't recommend that, as there is no such guarantee that I've seen in any standard revision, so you'd be relying on the particular behavior of a particular compiler. What I mean is, for instance:

'*(uint32_t *) 0x10000' would never be optimized out, and so would be equivalent to '*(volatile uint32_t *) 0x10000'. But that would be a particular case with a particular compiler and should not be considered a rule.

The typical case where a loop is very likely to be optimized out is the following:

Code: [Select]
uint32_t n;

void Foo(void)
{
    while (n > 0) {}
}

The reason is that the compiler, here, can assume that n never changes when Foo() is being executed.

Adding the volatile qualifier to the declaration of n will prevent this optimization, and will compile as a loop which reads and compares 'n' at each iteration.
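Spelled out, the fixed version is just the qualifier added to the declaration (a host-runnable sketch; in real firmware it would be an interrupt handler, not shown here, that changes n):

```c
#include <stdint.h>

/* volatile forces a fresh read of n on every iteration, so the compiler
 * may not assume n is unchanged while Foo() runs and must emit a
 * load-compare-branch each pass. */
volatile uint32_t n;

void Foo(void)
{
    while (n > 0) { }   /* spins until something else clears n */
}
```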

I think that does sum it up reasonably (let me know if I missed some case) and there is no black magic involved.
« Last Edit: November 21, 2024, 10:31:11 pm by SiliconWizard »
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #56 on: November 22, 2024, 09:43:49 am »
So, unless you have a 'while (1)' loop that actually has an exit path in its body, and said path is statically determined by the compiler to be true at some iteration, while the rest of its body has no effect. Which would make it a finite loop.

Since people want to write infinite loops, or finite empty loops for time delays, compilers sometimes try to preserve these as a special case, but that always risks false positives and false negatives.  The correct way to implement this is to put a volatile access or volatile inline asm (even a nop) inside the loop body.  This will mark the loop body as observable behavior and prevent the loop from being removed.

Since we are completely off topic now, I may as well join in :)

Finite empty loops for time delays are handy while debugging other issues but I would not use them in production code.  And while debugging I would not be compiling with optimizations.

OTOH, busy-waiting on a status register bit (again with an empty loop body) is a pretty standard use case for things like clock configuration while waiting for a PLL to lock.  It wouldn't make any sense to use a fancier approach with interrupts for init code which only runs at startup.  Is the claim that an optimizing compiler *should* be able to remove such a loop?  What if the register is marked volatile in the CMSIS header file (pretty sure it is)?  If the standard says that something like "while (periph->reg & 0x20) {}" (when periph is a pointer to volatile uint32 e.g.) is undefined behavior and fair game for elimination, then I would respectfully submit that the standard needs improvement. :box:

But I don't think this is what is being claimed...  is it??  Just trying to understand since apparently, I'm being told that even the experts are getting this wrong ::)

- You may want delays for whatever reason, using code. There are compilers, or libraries, that provide delay functions a la __delay_ms(x) and you should be using those as the compiler expects those for delays.
- You should be debugging at the same optimization level as production code. It sucks when a bunch of expressions that produce bad results are optimized away and you can't step through them, but you can work around that while debugging. Otherwise you are debugging different code from the one in production, which can be a really bad problem.
- Peripheral registers are indeed marked volatile, because they operate on "their own thread" so the compiler must never assume their value. That's why waiting on a peripheral bit is never optimized away.
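As a sketch of why that works: the volatile lives in the pointer type used for the register access, so every read in the wait loop must actually be performed. Here the "register" is faked with an ordinary variable so the snippet runs on a host; on real hardware the pointer would be a fixed peripheral address from the vendor header, and the names and the 0x20 flag below are invented for illustration.

```c
#include <stdint.h>

/* Stand-in for a memory-mapped status register; on real hardware the
 * pointer in the macro below would be a fixed peripheral address. */
static uint32_t fake_status;

/* CMSIS-style access: the volatile in the pointer type is what
 * guarantees every read in the wait loop is performed. */
#define STATUS_REG (*(volatile uint32_t *)&fake_status)
#define BUSY_FLAG  0x20u   /* invented "still busy" bit */

static void wait_until_ready(void)
{
    while (STATUS_REG & BUSY_FLAG) { }   /* kept: each pass re-reads SR */
}
```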
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #57 on: November 22, 2024, 11:06:25 am »
Quote
for (int i = 0; i < 100; i++) {}' can definitely be optimized out. It has an exit condition that can be determined statically and no effect.

How about

for (int i = 0; i < 100; i++) {asm("nop");}

That can also be determined statically, but asm should never be optimised out (unless the whole block of code is never referenced, etc, in which case the linker will dump it).

Quote
You should be debugging using the same optimization level as production code

Couldn't agree more, but lots of people differ :) I use -Og. -O1/2/3 produce marginal differences, on arm32.

Quote
There are compilers, or libraries, that provide delay functions a la __delay_ms(x) and you should be using those as the compiler expects those for delays.

IME, the bigger case for loops is for very short delays, ns or us. The compiler is unlikely to provide that, because it can't be done with a normal "tick".
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline jfiresto

  • Frequent Contributor
  • **
  • Posts: 900
  • Country: de
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #58 on: November 22, 2024, 11:39:40 am »
How about

for (int i = 0; i < 100; i++) {asm("nop");}

That can also be determined statically, but asm should never be optimised out (unless the whole block of code is never referenced, etc, in which case the linker will dump it).

To be extra safe, you might want to add "volatile" to that

Code: [Select]
#define asm(text) asm volatile (text)

– for avr-gcc, at least.
« Last Edit: November 22, 2024, 11:47:26 am by jfiresto »
-John
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #59 on: November 22, 2024, 03:11:53 pm »
That I don't understand at all. A volatile NOP??
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline jfiresto

  • Frequent Contributor
  • **
  • Posts: 900
  • Country: de
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #60 on: November 22, 2024, 04:09:10 pm »
That I don't understand at all. A volatile NOP??

The volatile is meant to stop the optimizer from removing code that, from its perspective, does nothing, and from raising constant code out of a loop. The avr-gcc 3.3 inline assembler cookbook mentions its use. It may not be a problem with later versions after the avr backend lost the plot.
-John
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #61 on: November 22, 2024, 04:39:00 pm »
I always assumed (wrongly?) that any assembler anywhere is never optimised away.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #62 on: November 22, 2024, 04:47:22 pm »
The fact that we all have to either assume, guess, or spend years doing nothing but studying massive, ever-changing specs in order to write safe, secure, and error-free code is the problem.

It's a failure of the toolmakers to meet the needs of the market.  Instead, compiler authors pursue their own imaginary needs, trying to save every last CPU cycle at any cost, even though no embedded developer in his/her right mind relies on the compiler to do that. 

The language isn't blameless but the real problem is the mentality behind the compilers.
 
The following users thanked this post: spostma, 5U4GB

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #63 on: November 22, 2024, 04:54:31 pm »
The fact that we all have to either assume, guess, or spend years doing nothing but studying massive, ever-changing specs in order to write safe, secure, and error-free code is the problem.

It's a failure of the toolmakers to meet the needs of the market.  Instead, compiler authors pursue their own imaginary needs, trying to save every last CPU cycle at any cost, even though no embedded developer in his/her right mind relies on the compiler to do that. 

The language isn't blameless but the real problem is the mentality behind the compilers.
The real problem is the language specs don't cater for a lot of corner cases. Especially ones really important for effective embedded development. The compiler developers find things that can't be expressed properly, and add ways to express them that go beyond the language spec. Things like special attributes that will get the right behaviour in interrupt routines. None of this is portable. If the tool developers move on, and new ones come in who don't fully appreciate why these things have been done, they often break them. We need languages properly defined for the needs of embedded development.
 
The following users thanked this post: cfbsoftware

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #64 on: November 23, 2024, 08:34:42 am »
How about

for (int i = 0; i < 100; i++) {asm("nop");}

That can also be determined statically, but asm should never be optimised out (unless the whole block of code is never referenced, etc, in which case the linker will dump it).

This was very useful because gcc used to not mess with functions containing any asm() parts***, which was a quick hack for getting around optimiser bugs.  Or, alternatively, a "fix" after you'd spent several hours trying to figure out how the compiler could possibly be generating the code it did for the C input it was getting.

*** "Used to" being the operative word, who knows what it does today.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #65 on: November 23, 2024, 08:42:53 am »
The language isn't blameless but the real problem is the mentality behind the compilers.

This is exactly my beef with gcc.  In almost no other industry would the blame-the-user attitude of the compiler developers be tolerated.  Look at something like table saws, they have fences, sleds, riving knives, blade guards, sawstop (if you can afford it), ... .  If the gcc developers made table saws their approach would be "it was UB cutting timber that way so it's your fault.  Oh, and good luck getting those fingers reattached".
 
The following users thanked this post: cfbsoftware

Offline cfbsoftware

  • Regular Contributor
  • *
  • Posts: 137
  • Country: au
    • Astrobe: Oberon IDE for Cortex-M and FPGA Development
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #66 on: November 23, 2024, 09:46:32 pm »
Instead, compiler authors pursue their own imaginary needs, trying to save every last CPU cycle at any cost, even though no embedded developer in his/her right mind relies on the compiler to do that. 
Fortunately not all compiler authors can be tarred with the same brush. Prof. Wirth followed his own advice from his Compiler Construction book when he later developed his Oberon language compiler for Arm:
Quote
Furthermore, we must distinguish between optimizations whose effects could also be obtained by a more appropriate formulation of the source program, and those where this is impossible. The first kind of optimization mainly serves the untalented or sloppy programmer, but merely burdens all the other users through the increased size and decreased speed of the compiler.

https://www.amazon.com/exec/obidos/ASIN/0201403536/acmorg-20

You can download an official copy from the ETH website:

https://people.inf.ethz.ch/wirth/CompilerConstruction/index.html
Chris Burrows
CFB Software
https://www.astrobe.com
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #67 on: November 24, 2024, 08:35:14 am »
There's also a C compiler called CompCert which comes "with a mathematical, machine-checked proof that the generated executable code behaves exactly as prescribed by the semantics of the source program".  Unfortunately the code it produces is pretty poor, a bit like gcc -O0 the last time I checked.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #68 on: November 24, 2024, 05:56:06 pm »
This is exactly my beef with gcc.  In almost no other industry would the blame-the-user attitude of the compiler developers be tolerated.  Look at something like table saws, they have fences, sleds, riving knives, blade guards, sawstop (if you can afford it), ... .  If the gcc developers made table saws their approach would be "it was UB cutting timber that way so it's your fault.  Oh, and good luck getting those fingers reattached".

And yet, in the same real world where GCC is used all the time without any major problems, fingers also get cut regardless of safety features - which are insufficient to prevent most accidents, because you have to keep the blade exposed to allow cutting material - or because some products are non-compliant with safety regulations due to human error or even purposeful cost-cutting. And, as you mention, not everyone can afford SawStop. Compare this to free-of-cost static analyzer tools that can easily catch UB and thus prevent GCC's tricks; all you need is the will to use these tools. Some choose to cut their fingers instead because they don't care.

During last two decades or so, GCC has introduced huge loads of useful warnings and static code analysis to catch real-world bugs in code. These warnings are equivalent to your blade guards, sawstops etc. Only in your twisted worldview, some failures to deliver exactly what users need negate all of this work, and only in your twisted worldview, occasional defects are taken as malice or extreme unsuitability for the work role.
« Last Edit: November 24, 2024, 05:58:59 pm by Siwastaja »
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #69 on: November 24, 2024, 06:01:29 pm »
There's also a C compiler called CompCert which comes "with a mathematical, machine-checked proof that the generated executable code behaves exactly as prescribed by the semantics of the source program".

You realize this "perfect" compiler also won't be able to implement your intended meaning when you write UB?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #70 on: November 24, 2024, 06:14:16 pm »
During last two decades or so, GCC has introduced huge loads of useful warnings and static code analysis to catch real-world bugs in code. These warnings are equivalent to your blade guards, sawstops etc. Only in your twisted worldview, some failures to deliver exactly what users need negate all of this work, and only in your twisted worldview, occasional defects are taken as malice or extreme unsuitability for the work role.
Has it really had a positive effect, though? As the years have gone by I have spent many hours getting my long term code to compile cleanly with the latest C compilers in maximum warnings mode. Very seldom have those extra warnings picked up anything useful, and I can't remember once actually clearing out a real bug through that work. My code isn't perfect, though. The improved analysis just hasn't been hitting the actual problems that show up from time to time, and get addressed by other means. I wonder just how many fewer bugs we have because of all this extra analysis?
 
The following users thanked this post: 5U4GB

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #71 on: November 25, 2024, 02:24:53 am »
There's also a C compiler called CompCert which comes "with a mathematical, machine-checked proof that the generated executable code behaves exactly as prescribed by the semantics of the source program".

You realize this "perfect" compiler also won't be able to implement your intended meaning when you write UB?

I realise that UB seems to be some sort of pet hobby horse of yours but the major problem with gcc that CompCert doesn't have is that gcc will silently rewrite code to change its semantics, which is something that CompCert is guaranteed never to do.

In the specific case of UB though, it depends on what you count as UB.  For example the assumption that you're running on a two's-complement machine, as approximately 100% of all systems from the last half century are (I think the last one's-complement machine was the CDC 6600, which predates the existence of C) is technically UB because you could be running your C compiler on an ENIAC and therefore we can't assume two's-complement.  However any sane compiler will apply two's-complement semantics by default since that's what it's compiling for.  gcc won't, obviously, because it's gcc.
« Last Edit: November 25, 2024, 02:44:07 am by 5U4GB »
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #72 on: November 25, 2024, 02:28:24 am »
Has it really had a positive effect, though? As the years have gone by I have spent many hours getting my long term code to compile cleanly with the latest C compilers in maximum warnings mode. Very seldom have those extra warnings picked up anything useful, and I can't remember once actually clearing out a real bug through that work.

I would say it's actually gone backwards, because more recent versions will silently do things like remove null-pointer checks without emitting any warnings.  Like you, I don't think I've found anything useful in gcc warnings for a long, long time.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #73 on: November 25, 2024, 04:07:50 am »
Quote
for (int i = 0; i < 100; i++) {}' can definitely be optimized out. It has an exit condition that can be determined statically and no effect.

How about

for (int i = 0; i < 100; i++) {asm("nop");}

That can also be determined statically, but asm should never be optimised out (unless the whole block of code is never referenced, etc, in which case the linker will dump it).

I mentioned that very approach in my (probably TLDR) post. Note a few things:

- Inline assembly is not part of the C standard and so, well, there is no standard behavior. The standard actually lists it as a 'common extension' but does not detail anything about it.
- So it's up to the compiler's implementation. That's typically what would be called "implementation-defined" (and not "undefined behavior") here. Your best bet is to read about this in the compiler's manual. Not always easy to find, but more reliable than guessing.
- Whether the compiler can do any optimization on inline assembly is of course completely implementation-defined as well. There is no guarantee that all compilers will behave similarly, even if this extension is, as the standard recognizes, a "common" one.

What can be said about GCC (which is what many use these days due to it being the #1 compiler for ARM Cortex M and now RISC-V development) is the following, found in its manual (references below):
- Inline assembly without operands (which would be the case for a typical "nop") is never optimized out.
- Inline assembly with "C" operands *can* be optimized out depending on how the operands are used in the rest of the code. In that case, to prevent optimization, you must add the "volatile" keyword to the "asm" one.

I seem to remember that older versions of GCC could optimize out even inline assembly without operands, which is why it has become common to always use "asm volatile" for assembly that must be emitted verbatim, even when not using operands. That also saves having to think about it - always use volatile, and you have your guarantee. (It's no longer necessary with recent versions of GCC for assembly with no operands, but it's still accepted, so you can always use it.)
That's why I mentioned the 'asm volatile ("nop")'  sequence before as an example.

To understand what inline assembly is and how it works, you can refer to the links below.

https://gcc.gnu.org/onlinedocs/gcc/Basic-Asm.html
https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html

The syntax for inline assembly with operands is not trivial. Takes a while to figure out.

For other compilers, Clang (LLVM) tries to mimic GCC's behavior as much as possible (to be a drop-in replacement), so it should behave similarly for inline assembly. Quite a few commercial compilers probably have a similar syntax, but I would recommend reading their respective manuals to make sure. Compilers that are mainly for embedded targets (which GCC isn't) are likely to favor more "tamed" behavior and so not require "volatile" for any inline assembly sequence, but that is just a general thought. That doesn't mean that they do it better, just that they do it in a way that's more likely to be what their intended audience expects. GCC and Clang/LLVM are general-purpose compilers that support dozens of targets, from small MCUs to servers and supercomputers.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #74 on: November 25, 2024, 08:09:45 am »
Quote
For inline assembly with "C" operands, it *can* be optimized out depending on how the operands are used in the rest of the code. In that case, to prevent optimization, you must add the "volatile" keyword to the "asm" one.

Does that mean that this might get optimised out?

Code: [Select]

// Hang around for delay in ms. Approximate but doesn't need interrupts etc working.

__attribute__((noinline))
static void hang_around(uint32_t delay)
{
    extern uint32_t SystemCoreClock;
    delay *= (SystemCoreClock/4100);

    asm volatile (
        "1: subs %[delay], %[delay], #1 \n"
        "   nop \n"
        "   bne 1b \n"
        : [delay] "+l"(delay)
    );
}

if the "volatile" was not there?
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #75 on: November 25, 2024, 09:21:24 am »
I think a better question might be "how recent is that advice" or "what version of gcc does it apply to", since its validity can change over time.  For a definitive answer, could I suggest the incredibly useful Godbolt compiler explorer, where you can select something like a hundred different compilers and compiler versions to see what each one does.  For its default selection of gcc 14.2 for x86-64 and no optimisation, it's telling me that there's no difference between 'asm volatile' and 'asm': the same asm is present (once you add the necessary include of stdint.h).  As soon as you get to -O1, though, the whole function vanishes without the 'volatile' present.
Code: [Select]
asm volatile ("nop");
leaves the nop in place, which would otherwise be removed.

Other compilers handle it differently, e.g:

Code: [Select]
#if defined __SUNPRO_C
asm("");
#endif // Bypass Sun compiler bug

so in that case just the presence of the asm(), at any optimisation level, is sufficient.
« Last Edit: November 25, 2024, 09:35:34 am by 5U4GB »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #76 on: November 25, 2024, 09:40:08 pm »
Quote
For inline assembly with "C" operands, it *can* be optimized out depending on how the operands are used in the rest of the code. In that case, to prevent optimization, you must add the "volatile" keyword to the "asm" one.

Does that mean that this might get optimised out?

Code: [Select]

// Hang around for delay in ms. Approximate but doesn't need interrupts etc working.

__attribute__((noinline))
static void hang_around(uint32_t delay)
{
    extern uint32_t SystemCoreClock;
    delay *= (SystemCoreClock/4100);

    asm volatile (
        "1: subs %[delay], %[delay], #1 \n"
        "   nop \n"
        "   bne 1b \n"
        : [delay] "+l"(delay)
    );
}

if the "volatile" was not there?

According to what the GCC manual page says and my experience, with that piece of code, I would say that GCC might indeed optimize this out if you omit the 'volatile' keyword, given that this is a finite loop that only modifies the delay local variable which is never used afterwards. As I said, obviously that may vary depending on how exactly the compiler handles optimizations in a given version, which is why I recommend always using the volatile qualifier when using inline assembly with GCC and Clang if said assembly must be inlined verbatim (and that should be accepted - if possibly ignored - by most other compilers too these days). At worst, it's not necessary, at best, it will do what you intended.

So that's again the reason why you're likely to almost always see "asm volatile" in vendor source code.

Note that for such a delay as above (which given the SystemCoreClock variable, I assume this is for STM32 with the HAL), I recommend this instead, which will give you exact delays (down to a few cycles and assuming it's not interrupted) and using the same SystemCoreClock global:

Code: [Select]
static inline void delay_us(uint32_t nDelay_us)
{
    uint32_t nStart = DWT->CYCCNT;

    nDelay_us *= (SystemCoreClock / 1000000);

    while ((DWT->CYCCNT - nStart) < nDelay_us) {}
}

No need for hard-coded tweaked constants and inline assembly.

If the DWT is not enabled in your code, you may first need to enable it (at initialization):

Code: [Select]
void DWT_Init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;
}
« Last Edit: November 25, 2024, 09:46:57 pm by SiliconWizard »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #77 on: November 25, 2024, 10:06:34 pm »
Quote
For inline assembly with "C" operands, it *can* be optimized out depending on how the operands are used in the rest of the code. In that case, to prevent optimization, you must add the "volatile" keyword to the "asm" one.

Does that mean that this might get optimised out?

Code: [Select]

// Hang around for delay in ms. Approximate but doesn't need interrupts etc working.

__attribute__((noinline))
static void hang_around(uint32_t delay)
{
    extern uint32_t SystemCoreClock;
    delay *= (SystemCoreClock/4100);

    asm volatile (
        "1: subs %[delay], %[delay], #1 \n"
        "   nop \n"
        "   bne 1b \n"
        : [delay] "+l"(delay)
    );
}

if the "volatile" was not there?
Yes, and that has been true for years. I met that issue at least a decade ago. It's perfectly reasonable behaviour on the part of GCC, as the code does nothing functional. It's literally just a time waster, and GCC needs to be signalled not to eliminate time wasting. Volatile works for that, just as it works to stop interrupt routines being tinkered with, because they also don't do anything useful the compiler can detect.
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #78 on: November 26, 2024, 07:06:44 am »
Remember that busy loops or special hardware register access patterns are not the only uses for inline assembly. You could, for example, want to use a specific instruction the compiler does not have knowledge about in part of calculation. Or you looked at compiler output and concluded you can do better with some manual inline assembly tuning. And in such cases you most definitely do want it to be part of the usual optimization. For example, you might change some constants after which the operation becomes completely unnecessary. If the original reason for the asm was performance optimization, then it now became a burden if not allowed to be optimized out.

Actually, hand-optimization for performance is a pretty classic reason to use inline asm, and this works well when the compiler knows the inputs and outputs of the asm and is allowed to optimize it out.

Semantics of the volatile qualifier follows the same logic as everywhere - you use it to force a variable access in memory, or in this case, force the injection of asm instructions into program code, even when they have no effects on the C abstract machine.

If you have seen asm busy loop implemented without volatile qualifier, that is just poor programming, an error which needs to be fixed. Mistakes happen, learn and go on.
« Last Edit: November 26, 2024, 07:08:42 am by Siwastaja »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #79 on: November 26, 2024, 08:54:38 am »
Indeed; I do use CYCCNT:

Code: [Select]

// Delay for ms. Uses CPU clock counter CYCCNT.
// Uses two loops to prevent overflow due to delay*SystemCoreClock being too big for uint32_t.
// Max delay is uint32_t ms.
// This is a precise delay. It uses special code to deal with uint32_t overflow.
// DO NOT USE THIS before CYCCNT has been enabled!
// If this function is called before the PLL is set up to wind up the CPU to 168MHz, the
// delay will be 168/16 longer than the ms value, because the CPU starts up at 16MHz.

void hang_around(uint32_t delay)
{
    volatile uint32_t max_count = SystemCoreClock/1000L;  // 168M = 1 sec
    volatile uint32_t start_time;

    do
    {
        start_time = DWT->CYCCNT;
        while((DWT->CYCCNT-start_time) < max_count) ; // this counts milliseconds
        delay--;
    } while (delay>0);
}

As an interesting aside, this function appears to be re-entrant too. It only ever reads CYCCNT.

And yes indeed if I leave out the "volatile" on that asm version I get an empty function



No warning, the code won't crash, but the wait will be close to zero, which is gonna surprise somebody ;)
« Last Edit: November 26, 2024, 09:30:33 am by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #80 on: November 27, 2024, 02:46:24 am »
If you have seen asm busy loop implemented without volatile qualifier, that is just poor programming, an error which needs to be fixed. Mistakes happen, learn and go on.

The asm keyword is not a part of the standard; it can be found in Annex J (C11, don't know if that changed later), which is informative only (sic!). It says:

Quote
J.5.10 The asm keyword

1 The asm keyword may be used to insert assembly language directly into the translator
output. The most common implementation is via a statement of the form:

asm (character-string-literal );

Requiring a volatile qualifier for it is a very bad idea. Chances that someone writes assembler code with the intention of such code being deleted during the optimization process are slim to none. It would be much better to assume that any asm code must stay (unless the whole function is deleted of course).

Of course, you need to follow all the idiosyncrasies of the compiler you use.  This is just a grim reality which you can do nothing about.

 
The following users thanked this post: KE5FX

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #81 on: November 27, 2024, 03:33:23 am »
Indeed; I do use CYCCNT:

Code: [Select]
// Delay for ms. Uses CPU clock counter CYCCNT.
// Uses two loops to prevent overflow due to delay*SystemCoreClock being too big for uint32_t.
// Max delay is uint32_t ms.
// This is a precise delay. It uses special code to deal with uint32_t overflow.
// DO NOT USE THIS before CYCCNT has been enabled!
// If this function is called before the PLL is set up to wind up the CPU to 168MHz, the
// delay will be 168/16 longer than the ms value, because the CPU starts up at 16MHz.

void hang_around(uint32_t delay)
{
    volatile uint32_t max_count = SystemCoreClock/1000L;  // 168M = 1 sec
    volatile uint32_t start_time;

    do
    {
        start_time = DWT->CYCCNT;
        while((DWT->CYCCNT-start_time) < max_count) ; // this counts milliseconds
        delay--;
    } while (delay>0);
}

Something doesn't add up here.  DWT->CYCCNT is declared volatile, so this shouldn't be optimized away.  Also, I see no reason why max_count and start_time would need to be declared volatile.  What's going on?
 
The following users thanked this post: newbrain

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #82 on: November 29, 2024, 02:26:52 pm »
Yes indeed, the 2x volatile should not be needed, because DWT->CYCCNT is declared volatile, so loading from it should work OK. No idea where that code came from.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #83 on: November 29, 2024, 11:01:48 pm »
Yes indeed the 2x volatile should not be needed because DWT->CYCCNT is volatile so loading stuff from it should work ok. No idea where that code came from.

They aren't needed indeed.

For 'start_time', it won't make a difference, but it's not needed.
For 'max_count', which doesn't change within the loop, adding the 'volatile' qualifier actually makes it "semantically" odd, as this is a value that is precisely never supposed to change in the rest of the function.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #84 on: November 30, 2024, 07:59:43 am »
This code removal business is confusing not just me but many others. In this area, C has been a moving target all these years. For years, people have been working on the assumption that asm is never optimised, but this is now clearly wrong. The result is that a whole load of stuff is vulnerable to a new compiler version etc.

I've been writing documentation on my project all along and now have hundreds of pages but I struggle to document this aspect. Fortunately the job is now done :)

It is not even clear whether there is a global "don't remove code" option. -Og certainly can remove code. Maybe -O0 (zero opt) but then you get some 30-50% more code.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #85 on: November 30, 2024, 08:11:19 am »
This code removal business is confusing not just me but many others. In this area, C has been a moving target for all the years. For years, people have been working on the assumption that asm is never optimised but this is now clearly wrong.

It's an attitude problem, working with assumptions instead of facts. Really, just RTFM. asm is not standard C; it's a compiler extension. Read the compiler manual.

Really, 10 seconds in google: "gcc asm keyword", first result leads to https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html , where mention of volatile qualifier fits in the first screenful of information.

All this rationalization and explaining takes 100x the effort compared to just checking instead of assuming. Really, the same principle applies to every field of engineering. RTFM, check, double-check, never assume.

I have never seen the asm keyword used without volatile (except in rare contexts where optimizing it out is allowable, i.e. as part of hand-optimizing a calculation). It never even crossed my mind not to use the volatile qualifier. Maybe I have been lucky, or maybe I did read the manual already 20 years ago; I don't remember. And it's not a moving target - I remember this asm volatile from the 1990s.

Admitting mistakes and doing better next time is fastest way forward.
« Last Edit: November 30, 2024, 08:16:01 am by Siwastaja »
 
The following users thanked this post: newbrain, JPortici

Offline cfbsoftware

  • Regular Contributor
  • *
  • Posts: 137
  • Country: au
    • Astrobe: Oberon IDE for Cortex-M and FPGA Development
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #86 on: November 30, 2024, 08:46:33 am »
It's an attitude problem, working with assumptions instead of facts. Really, just RTFM. asm is not standard C; it's a compiler extension. Read the compiler manual.

Really, 10 seconds in google: "gcc asm keyword", first result leads to https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html , where mention of volatile qualifier fits in the first screenful of information.
Note that the behaviour of Extended Asm is inconsistent with Basic Asm:
Quote
The optional volatile qualifier has no effect. All basic asm blocks are implicitly volatile.
https://gcc.gnu.org/onlinedocs/gcc/Basic-Asm.html
Chris Burrows
CFB Software
https://www.astrobe.com
 
The following users thanked this post: Siwastaja

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #87 on: November 30, 2024, 08:55:48 am »
Yes; this was posted earlier. Asm without C operands is not removed.

The GCC reference posted above by Siwastaja is IMHO really complicated. I'd say in close to 100% of cases of somebody using asm, there is an expectation of non removal ever. Is there an attribute which can be used on a function which prevents any optimisation? I am sure we have done this before.

I have been using __attribute__((optimize("O0"))) to prevent replacement of a loop structure with a call to memcpy etc. This should also stop optimisation of asm, surely? I will test it and report. EDIT: yes that does it perfectly.

Quote
just as it works to stop interrupt routines being tinkered with, because they also don't do anything useful the compiler can detect.

I haven't used volatile on ISRs and it works presumably because there is a pointer to them in the vector table. The same method works to preserve main() in an "overlay" which you jump to.
« Last Edit: November 30, 2024, 12:44:04 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #88 on: November 30, 2024, 02:32:43 pm »
Note that the behaviour of Extended Asm is inconsistent with Basic Asm:

If you think about it, it makes sense: the whole point of extended asm is that the compiler is told what the inputs and outputs are. The obvious reason is: optimization. Without this information, basic asm cannot be optimized away.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #89 on: November 30, 2024, 02:34:41 pm »
Is there an attribute which can be used on a function which prevents any optimisation?

This is fundamentally a wrong question; a loaded question. Because C is not a portable macro assembler but an abstract language, there is no direct mapping from input to output. Therefore, what even counts as optimization cannot be clearly defined. So what would you want to disable? Do you want to prevent literals from being pre-calculated (e.g. 1+1 replaced with 2)? Even assemblers do that much optimization.

If you need exactly certain machine code output, write it in asm instead, but as said, even assemblers do optimizations and abstract things away, so in extreme cases you may want to write the binary 1s and 0s directly.
« Last Edit: November 30, 2024, 02:36:19 pm by Siwastaja »
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #90 on: November 30, 2024, 03:16:56 pm »
It's an attitude problem, working with assumptions instead of facts. Really, just RTFM.
This.
Quote
Read the compiler manual.
This.

Quote
All this rationalization and explaining takes 100x the effort compared to just checking instead of assuming. Really, the same principle applies to every field of engineering.
This

Quote
RTFM, check, double-check, never assume.
and THIS.

To 90% or more of the questions about C's """quirks"""
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #91 on: November 30, 2024, 03:21:08 pm »
To 90% or more of the questions about C's """quirks"""

The stupidest thing here is that 99% of the time complaints about "C"'s "quirks" apply to every imaginable alternative and replacement as well.

For example, every "recommended" "new" "cool" replacement language, be it Rust or Elixir or whatever, is also defined through some sort of abstract machine, and will do optimization. "Portable macroassemblers" are nearly non-existent and trying to turn C into one is just stupid. Maybe there is a reason for why portable macro assemblers are nonexistent, maybe that's just a stupid way to develop software projects.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #92 on: November 30, 2024, 05:13:08 pm »
Quote
This is fundamentally a wrong question; a loaded question

Don't be silly. It was in the asm context.

And the -O0 function attribute I posted above does work to preserve asm code.

It does have the predictable quirk: the C code in the same function is obviously also not optimised, and is a lot bigger. But here it is just one line
delay *= (SystemCoreClock/4100000L);
so it doesn't matter.

-Og:

Code: [Select]
  delay *= (B_SystemCoreClock/4100000L);
 80012ac: eb00 0080 add.w r0, r0, r0, lsl #2
 80012b0: 00c0      lsls r0, r0, #3

-O0:

Code: [Select]
delay *= (B_SystemCoreClock/4100000L);
 80000ac: 687a      ldr r2, [r7, #4]
 80000ae: 4613      mov r3, r2
 80000b0: 009b      lsls r3, r3, #2
 80000b2: 4413      add r3, r2
 80000b4: 00db      lsls r3, r3, #3
 80000b6: 607b      str r3, [r7, #4]
« Last Edit: November 30, 2024, 05:20:28 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #93 on: November 30, 2024, 05:26:56 pm »
Quote
This is fundamentally a wrong question; a loaded question

Don't be silly. It was in the asm context.

And the -O0 function attribute I posted above does work to preserve asm code.

For god's sake, how about the freaking volatile qualifier which is documented to prevent it being optimized out? What's wrong with a simple solution to a simple problem?
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #94 on: November 30, 2024, 06:12:19 pm »
Curiosity :)
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #95 on: November 30, 2024, 06:24:23 pm »
For god's sake, how about the freaking volatile qualifier which is documented to prevent it being optimized out? What's wrong with a simple solution to a simple problem?

Abusing a language keyword to fix a problem they created themselves isn't the triumphal feat of software engineering that you (and the GCC authors) seem to think it is.

But ...  :-//  that's what we have to work with.
 
The following users thanked this post: peter-h, cfbsoftware, 5U4GB

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #96 on: November 30, 2024, 06:49:06 pm »
Abusing a language keyword to fix a problem they created themselves isn't the triumphal feat of software engineering that you (and the GCC authors) seem to think it is.

So using a keyword the purpose of which is to tell the compiler that code has side effects, to tell the compiler that the code has side effects, is abuse. OK.
 
The following users thanked this post: newbrain

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #97 on: November 30, 2024, 07:17:13 pm »
Abusing a language keyword to fix a problem they created themselves isn't the triumphal feat of software engineering that you (and the GCC authors) seem to think it is.

So using a keyword the purpose of which is to tell the compiler that code has side effects, to tell the compiler that the code has side effects, is abuse. OK.

The purpose of the volatile keyword is to tell the compiler that the value of a variable may be modified at any time. 

That's it.  Anything beyond that is something nonstandard that somebody made up.
 
The following users thanked this post: cfbsoftware

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #98 on: November 30, 2024, 07:25:31 pm »
That's it.  Anything beyond that is something nonstandard that somebody made up.

Of course. Every usable real-world C compiler is full of non-standard extensions. The standard allows extensions, and standard does not forbid using of standard keywords within extensions - why would it do that. This might be news to you, but everything around you in this world is made up by somebody.

Luckily, you are free not to use these extensions.
« Last Edit: November 30, 2024, 07:27:41 pm by Siwastaja »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #99 on: November 30, 2024, 07:41:31 pm »
The context here is asm code, and it is hard to think of any case where the coder wants asm code modified in any way.

Perhaps this (asm preservation) is hard for the compiler writer to do because optimisation is done on the (intermediate) asm output. It could be done by marking asm code in the source, obviously.

I would also argue that having to declare variables (particularly static ones) as volatile is daft since the coder clearly intended these to be maintained. The C compilers I recall using in the 1980s (IAR) did work like that. The volatile keyword was not necessary. I was working on projects where someone else was doing C and I was doing asm, and the hardware.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #100 on: November 30, 2024, 08:46:25 pm »
I remember this asm volatile from 1990's.

Back then the compilers simply printed the asm text to the output without trying to parse or interpret it. They wouldn't know what to do with volatile.

The optimization obsession didn't start until much later - circa 2005-2010.
 
The following users thanked this post: 5U4GB

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #101 on: November 30, 2024, 10:50:43 pm »
There are much bigger experts than me here but my impression is that the compiler business ran out of stuff to do and started going sideways. What clever gimmicks can we do?

Same as MbedTLS development, for example: so many changes to upgrade to the latest whizz-bang version, which does hardly anything useful, and nothing useful that needed such changes.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: 5U4GB

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #102 on: November 30, 2024, 10:53:31 pm »
There are much bigger experts than me here but my impression is that the compiler business ran out of stuff to do and started going sideways. What clever gimmicks can we do?
Unless a compiler can consistently match what an assembly language programmer can achieve, especially for parallel code, I think compiler designers still have plenty to do.
 
The following users thanked this post: neil555

Offline cfbsoftware

  • Regular Contributor
  • *
  • Posts: 137
  • Country: au
    • Astrobe: Oberon IDE for Cortex-M and FPGA Development
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #103 on: November 30, 2024, 11:46:11 pm »
The stupidest thing here is that 99% of the time complaints about "C"'s "quirks" apply to every imaginable alternative and replacement as well.
Somewhat of an exaggeration I believe. A more accurate observation might be:
Quote
It should be noted that complaints about C's quirks often apply to other popular alternatives and replacements as well.
Chris Burrows
CFB Software
https://www.astrobe.com
 

Offline rfindley

  • Contributor
  • Posts: 20
  • Country: us
  • Embedded Systems Contractor
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #104 on: December 01, 2024, 03:00:01 am »
Getting back to the OP's question...
Do a google search for "IAR Kickstart" plus whichever processor family you want to use.  IAR doesn't talk about the Kickstart versions on their website, at least not that I've seen, so it's understandable to wonder whether it's still available.

On another note, if you're a Visual Studio user and you've never seen VisualGDB (a companion for Visual Studio), it's worth checking out. They have a lot of pre-built gcc toolchains, and it takes a lot of the pain out of setting up a gcc workflow.  I used it on a project a few years ago to write my code on Windows/Visual Studio, cross-compile on a Linux build server, deploy to target hardware, and remote-debug, all from inside Visual Studio + VisualGDB.

Honestly, though, I rarely use VisualGDB (or Visual Studio, for that matter).  For debugging, I mostly use a printf-style 'dlog' library that I've built over the years.  It lets me dynamically change what I log by topic and log-level, using whatever log destination I have available on a given target board (memory, serial port, file, log server, etc), and you can use it on almost any processor.  Once you get used to troubleshooting that way, an actual debugger starts to feel cumbersome.
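A minimal sketch of that topic-plus-level filtering idea. All names here (the `dlog` signature, the topic table) are invented for illustration and are not rfindley's actual library API:

```c
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of topic + level filtered logging. */
enum log_level { LOG_ERR, LOG_WARN, LOG_INFO, LOG_DEBUG };

struct topic { const char *name; enum log_level threshold; };

/* Per-topic thresholds, changeable at run time (e.g. from a debug CLI). */
static struct topic topics[] = {
    { "eth", LOG_WARN  },
    { "adc", LOG_DEBUG },
};

static int emitted;  /* count of messages that passed the filter (demo only) */

void dlog(const char *topic, enum log_level lvl, const char *fmt, ...)
{
    for (size_t i = 0; i < sizeof topics / sizeof topics[0]; i++) {
        if (strcmp(topics[i].name, topic) != 0)
            continue;
        if (lvl > topics[i].threshold)
            return;                  /* filtered out by topic threshold */
        va_list ap;
        va_start(ap, fmt);
        printf("[%s] ", topic);      /* destination could be UART, RAM buffer, file... */
        vprintf(fmt, ap);
        va_end(ap);
        emitted++;
        return;
    }
}
```

The point is that the filter table is plain mutable data, so log verbosity can be changed per topic at run time without rebuilding or even restarting the target.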
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #105 on: December 01, 2024, 04:47:04 am »
Abusing a language keyword to fix a problem they created themselves isn't the triumphal feat of software engineering that you (and the GCC authors) seem to think it is.

Another example of this is gcc's -fno-delete-null-pointer-checks.  It'd be like having a car where, each time you start it, you have to remember to specify -fno-ignore-brake-pedal.  It could be straight out of an episode of Wellington Paranormal:

"Once again we've come up with a perfect solution to the problem".
"That we caused".
"Job well done".
"Good result".
 
The following users thanked this post: KE5FX

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #106 on: December 01, 2024, 08:28:40 am »
Quote
Unless a compiler can consistently match what an assembly language programmer can achieve, especially for parallel code, I think compiler designers still have plenty to do.

Sure, but trying to catch weird stuff like variables which the coder explicitly declared but which have no effect (resulting in the whole "volatile" business) is a waste of resources which in most/all cases will have a negligible impact on speed or code size.

I've been coding asm since 1975 or so and anybody who has done a lot of asm will know where the tricks are. It is stuff like analysing the whole piece of code and keeping the values in registers, plus various other tricks. But in the end it makes little difference, since "tricks" can also be coded in C, and in most projects the CPU spends most of its time running a very small part of the code.
« Last Edit: December 01, 2024, 08:51:29 am by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #107 on: December 01, 2024, 10:16:46 am »
I wonder if I should quote myself from earlier in this thread: who knows how many of these discussions we would not have if the world stopped using GCC altogether and moved to Clang.
One thing that peter, and many others, probably don't know in their never-ending quest of beating the compiler with tricks and asm-level optimization: GCC can't possibly know which non-static functions are never called (though Clang can). Go enable --gc-sections and see what remains when you start using function pointers.

Incredibly fun if you need to detect and remove dead code; it makes a case for very expensive run-time profiling libraries, with all their side effects.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #108 on: December 01, 2024, 10:19:19 am »
Same as the MbedTLS development for example. So many changes, to upgrade to the latest whizz bang version, which does hardly anything useful and nothing useful that needed such changes.

What were/are the issues with MbedTLS?  It's always useful to get experience-based details on pain points.
« Last Edit: December 01, 2024, 10:22:38 am by 5U4GB »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #109 on: December 01, 2024, 01:27:02 pm »
Quote
one that peter, and many others, probably don't know, in their never ending quest of beating the compiler with tricks and asm level optimization: GCC can't possibly know which non-static functions are never called (though clang can), go enable --gc-sections and see what remains when you start using function pointers

I use almost no asm with arm32 - no need for it, given 7ns cycle time :) I've used it only for "min time delay" timing loops, especially for microsecond-level waits.
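A hedged sketch of that kind of minimum-delay loop. The empty `asm volatile` statement is one common way to stop the optimizer from deleting an otherwise side-effect-free loop; actual cycle counts of course depend on the CPU, flash wait states and caches, so this only guarantees a *minimum* delay:

```c
/* Busy-wait that survives optimization: the empty asm volatile statement
   has side effects the compiler cannot see, so the loop body cannot be
   folded away at -O2/-Os. Returns the iteration count for convenience. */
unsigned long delay_loops(unsigned long n)
{
    unsigned long i = n;
    while (i--)
        __asm__ volatile ("" ::: "memory");
    return n;
}
```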

Unused functions are basically irrelevant when you have 1MB FLASH and even after years you have filled only half of it.

And, to make you laugh, I never use function pointers in C :) In asm, yes, they are commonly used. They are used in ST's ETH and USB code to invoke different bits of code but they are really pointless (when selecting one of about 4 options) and just make the code more opaque, especially given their taste for typedefs of typedefs of typedefs ;)

Quote
Incredibly fun if you need to detect and remove dead code, makes a case for very expensive libraries for run time profiling, with all their side effects
One can do profiling for nothing if there is a "tick" or an RTOS
https://www.eevblog.com/forum/microcontrollers/freertos-where-should-the-cpu-be-spending-most-of-its-time/msg4965922/#msg4965922

Quote
What were/are the issues with MbedTLS?  It's always useful to get experience-based details on pain points.

The MbedTLS integration in my project was done by someone else (at version 2.16.2) but I am on their mailing list. If you want TLS 1.3 then you have to use MbedTLS v3+ but AFAICT nobody currently uses TLS 1.3 in the industrial control sphere. They added stuff like zeroing malloc'ed buffers before freeing them, which is "good practice" but only if somebody is already poking about inside your product...  Then they stripped out "deprecated" (always a good word to impress internet security experts) crypto suites like DES/3DES (which AFAICT nobody at all uses anywhere) only for you to have to add some back in because some of the certificates in cacert.pem have signatures done with old hashes :) They have added ChaChaPoly, which is ok but AFAICT nobody insists on that if you present AES256. So, some handy bit perhaps and OK if you are implementing it now, but basically years of going sideways, IMHO, especially given that IMHO an IOT box should be a client and not on an open port. They have not done really handy stuff like processing the certificates file one cert at a time so it needs ~300k of spare RAM (there is a 3rd party mod for this which my box has).
« Last Edit: December 01, 2024, 01:45:29 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #110 on: December 01, 2024, 05:41:35 pm »
I wonder if i would quote myself from earlier in this thread: who knows how many of these discussions we would not have if the world stopped using GCC altogether and moved to CLANG
Why would Clang, or any other compiler, affect the discussion much? All modern compilers have reached the stage where they optimise well enough for all sorts of things to show up which rarely used to.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #111 on: December 01, 2024, 05:47:11 pm »
Quote
Unless a compiler can consistently match what an assembly language programmer can achieve, especially for parallel code, I think compiler designers still have plenty to do.

Sure, but trying to catch weird stuff like variables which the coder explicitly declared but which have no effect (resulting in the whole "volatile" business) is a waste of resources which in most/all cases will have a negligible impact on speed or code size.
Are you saying you think volatile is globally an annoyance, or just in the cases where it directly annoys you? Thinking that code should always go back to the memory location would be an interesting performance hole for a wide range of code. Much of what happens when you turn on optimisation is using registers to avoid going back to memory too often.
 

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #112 on: December 01, 2024, 05:53:48 pm »
Getting back to the OP's question...
Do a google search for "IAR Kickstart" plus whichever processor family you want to use.  IAR doesn't talk about the Kickstart versions on their website, at least not that I've seen, so it's understandable to wonder whether it's still available.
That search doesn't return anything useful when I do it.  Some vendor-specific pages and the occasional IAR press release, all of them old.  Are you seeing anything different?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #113 on: December 01, 2024, 05:57:13 pm »
Getting back to the OP's question...
Do a google search for "IAR Kickstart" plus whichever processor family you want to use.  IAR doesn't talk about the Kickstart versions on their website, at least not that I've seen, so it's understandable to wonder whether it's still available.
I thought IAR Kickstart was a specific deal IAR has with TI for a limited version of IAR for the MSP430.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2111
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #114 on: December 01, 2024, 06:59:37 pm »
It lets me dynamically change what I log by topic

Hmm.  I like this idea.  Seems obvious enough but I've never seen it done. 

A scalar 'message threshold' is never really granular enough in a non-trivial application, but tagging debug messages with arbitrary keywords could change that.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #115 on: December 01, 2024, 07:10:38 pm »
Honestly, though, I rarely use VisualGDB (or Visual Studio, for that matter).  For debugging, I mostly use a printf-style 'dlog' library that I've built over the years.  It lets me dynamically change what I log by topic and log-level, using whatever log destination I have available on a given target board (memory, serial port, file, log server, etc), and you can use it on almost any processor.  Once you get used to troubleshooting that way, an actual debugger starts to feel cumbersome.
For desktop or server debugging I agree with you. Debuggers have their place when things get really weird, but they are a last resort rather than a first.... although they will often bring their own quirks, making you regret resorting to them. Embedded is a bit different, though. Your limited ability to interact with an embedded target often makes a debugger more valuable. You'll probably need one just to get code into the target, anyway.
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #116 on: December 01, 2024, 10:54:16 pm »
I wonder if I should quote myself from earlier in this thread: who knows how many of these discussions we would not have if the world stopped using GCC altogether and moved to Clang.
Why would Clang, or any other compiler, affect the discussion much? All modern compilers have reached the stage where they optimise well enough for all sorts of things to show up which rarely used to.

Because GCC, by the way it's written, can't find unused functions reliably, and can't, or won't, warn about obvious bugs à la "sizeof(x - 2)" when the programmer obviously meant "sizeof(x) - 2", or signal that statements are always true/false and hence will be optimized away, among other things. I say Clang, but I wouldn't be surprised if any other compiler frontend were more effective than GCC at finding bugs and static analysis.
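For anyone who hasn't hit it, that sizeof trap looks like this (sketch; GCC compiles both lines silently by default):

```c
#include <stddef.h>

static char buf[64];

/* In sizeof(buf - 2), buf decays to char*, so the expression has pointer
   type and sizeof yields the size of a pointer, not 62. The intent was
   almost certainly sizeof(buf) - 2. */
size_t wrong(void) { return sizeof(buf - 2); }
size_t right(void) { return sizeof(buf) - 2; }
```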

@peter, well, function pointers are something I rarely use as well, only when I really want them. E.g. I have like a dozen different sets of functions with the same interface, think of them as device drivers; I didn't want to pollute the code with a set of ever-growing switch statements, so function pointers were a much more elegant solution IMHO. But function pointers + --gc-sections is guaranteed to break your day. Also, profiling is one thing, and very important, but the compiler should be able to determine dead code from just compiling (after all, it does know all the call trees, all the function entry points, and even whether functions that are called indirectly are actually set up to be called). GCC won't, and the garbage collector results in a mess, deleting parts of used code. Profiling comes afterwards, for functions and statements that *should* execute but never do, due to conditions that never present themselves.
Another problem is when you write functions and think they are being called, but they're not. Waste a day chasing a bug only because stupid GCC won't tell you that function X is unused.
 

Offline mark03Topic starter

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #117 on: December 01, 2024, 10:54:33 pm »
Getting back to the OP's question...
Do a google search for "IAR Kickstart" plus whichever processor family you want to use.  IAR doesn't talk about the Kickstart versions on their website, at least not that I've seen, so it's understandable to wonder whether it's still available.
I thought IAR Kickstart was a specific deal IAR has with TI for a limited version of IAR for the MSP430.
It used to be a generic thing (at least for EW-ARM), and you could select a time-limited OR code-size-limited version, your choice.  But they may have had special deals with silicon vendors as well.

Anyhow, I'm 99% sure this is no more, so the original question is answered.  You may now return to the compiler wars ;)
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #118 on: December 02, 2024, 05:30:06 am »
The MbedTLS integration in my project was done by someone else (at version 2.16.2) but I am on their mailing list. If you want TLS 1.3 then you have to use MbedTLS v3+ but AFAICT nobody currently uses TLS 1.3 in the industrial control sphere. They added stuff like zeroing malloc'ed buffers before freeing them, which is "good practice" but only if somebody is already poking about inside your product...  Then they stripped out "deprecated" (always a good word to impress internet security experts) crypto suites like DES/3DES (which AFAICT nobody at all uses anywhere) only for you to have to add some back in because some of the certificates in cacert.pem have signatures done with old hashes :) They have added ChaChaPoly, which is ok but AFAICT nobody insists on that if you present AES256. So, some handy bit perhaps and OK if you are implementing it now, but basically years of going sideways, IMHO, especially given that IMHO an IOT box should be a client and not on an open port. They have not done really handy stuff like processing the certificates file one cert at a time so it needs ~300k of spare RAM (there is a 3rd party mod for this which my box has).

Thanks, good to know.  I've found the same with TLS 1.3 in embedded/SCADA, it essentially doesn't exist, at least one reason being that despite its name it's a completely new protocol that needs a second protocol stack alongside the pre-1.3 one.  Heck, there are still a lot of systems using TLS 1.0, but people are slowly battling to get their successor systems to TLS 1.2.

The reason for the ChaChaPoly nonsense is that Google insists on presenting oddball non-MTI (mandatory-to-implement) algorithms in its TLS 1.3 handshake, so you have to go back and tell it to use MTI algorithms instead, for an extra round trip that completely defeats all the compromises they made in TLS 1.3 to try and reduce round trips.  Specifically, they use Curve25519 instead of ECDH/ECDSA over the NIST curves, but since Google is bigger than the Internet, everyone has to change their code to copy what Google does.  So that one isn't really mbedTLS' fault; you can blame Google for that one.

More generally, the TLS folks never stop churning the specs to accommodate whatever flashy thing is coming down the road.  If you think the TLS 1.3 changeover was painful, wait until all the post-quantum crypto bollocks starts hitting.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #119 on: December 02, 2024, 10:32:44 am »
Sure, but trying to catch weird stuff like variables which the coder explicitly declared but which have no effect (resulting in the whole "volatile" business) is a waste of resources which in most/all cases will have a negligible impact on speed or code size.

Bullshit. Try qualifying every variable in your program volatile and see what happens. I can assure you the impact on speed and code size is not "negligible". Volatile has been in C from the very beginning, and for a very good reason. You will fail miserably even at -O0, and even with a 1990s compiler, if you do not understand when to use volatile.


BTW, did you guys already find any real-world example where GCC has started stripping out asm statements so that the intended behavior broke? Or are you complaining just in case? It would seem quite odd to me that one learns to use GCC Extended Asm, learns the concepts and syntax of defining inputs, outputs and clobbers, but misses the part about volatile; then writes a program using extended asm with those inputs, outputs and clobbers, the whole purpose of which is to enable optimizations; and then gets surprised that GCC indeed did use the inputs, outputs and clobbers to perform said optimization?
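For reference, the extended-asm machinery being discussed looks like this (x86-64 sketch, since the syntax is architecture-specific; the constraint strings tell the optimizer exactly what the statement reads, writes and clobbers):

```c
/* GCC extended asm sketch (x86-64). With the outputs, inputs and clobbers
   declared, the optimizer may move the statement or delete it if its
   output is unused; 'asm volatile' exists precisely for statements whose
   side effects the compiler cannot see from the constraints alone. */
static inline unsigned add_asm(unsigned a, unsigned b)
{
    unsigned r;
    __asm__ ("addl %2, %0"
             : "=r"(r)          /* output: r in any register              */
             : "0"(a), "r"(b)   /* inputs: a shares r's register, b any   */
             : "cc");           /* clobbers: condition codes (flags)      */
    return r;
}
```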
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #120 on: December 02, 2024, 10:45:43 am »
Another example of this is gcc's -fno-delete-null-pointer-checks.  It'd be like having a car where, each time you start it, you have to remember to specify -fno-ignore-brake-pedal.

Another alarmist exaggeration: for example, I have never used -fno-delete-null-pointer-checks, and did not even know about it before this thread. I have never seen any project use it. And I have never seen any issues caused by this feature.

Compare to your car example: I'm quite sure that if cars habitually ignored brake pedals, I would have seen it.

I would also expect to see more discussion about delete-null-pointer-checks if it was a serious problem.

As usual, the correct explanation can be found in the documentation:
"these assume that a memory access to address zero always results in a trap, so that if a pointer is checked after it has already been dereferenced, it cannot be null. ... Note however that in some environments this assumption is not true. Use -fno-delete-null-pointer-checks to disable this optimization for programs that depend on that behavior. "

So the correct car analogy is: "brake pedal can break after the car crashes, because brakes are not normally needed anymore. If reinforced brake structure which can withstand a full crash is needed in special environments, use -fno-allow-brakes-that-cannot-withstand-crashes".

But I'm sure you are not happy with the correct analogy and prefer your own alarmist one instead.

And, BTW, performance differences can be significant. If you try to write robust code aimed at the human reader, you may well place a NULL pointer check inside a loop that iterates millions of times. If the compiler can prove that the check has already been done and cannot fail, I'm very happy to have that optimization. It's a huge timesaver, and I don't want to go back to the 1990s, when you had to micromanage the compiler by hoisting statements out of loops yourself for decent performance. And really, back then it was just easier not to check. Now the compiler often knows whether the check is needed and can remove it if it's statically deemed unnecessary.
« Last Edit: December 02, 2024, 10:50:53 am by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #121 on: December 02, 2024, 03:07:21 pm »
Volatile is in C from the very beginning and for a very good reason.
No it wasn't. It was a high priority requirement, yet still didn't make it in until 2001. There were lots of fudgy approaches, like "attribute" constructs, used before volatile was settled on.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #122 on: December 02, 2024, 03:36:45 pm »
No it wasn't. It was a high priority requirement, yet still didn't make it in until 2001

Wat? You can definitely see that it was present in C89 / ANSI C in 1990 (see e.g. https://www.yodaiken.com/wp-content/uploads/2021/05/ansi-iso-9899-1990-1.pdf )

Or do you mean compiler support for it was lacking before 2001?
 
The following users thanked this post: newbrain

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #123 on: December 02, 2024, 03:40:06 pm »
No it wasn't. It was a high priority requirement, yet still didn't make it in until 2001

Wat? You can definitely see that it was present in C89 / ANSI C in 1990 (see e.g. https://www.yodaiken.com/wp-content/uploads/2021/05/ansi-iso-9899-1990-1.pdf )

Or do you mean compiler support for it was lacking before 2001?
Interesting. It certainly wasn't in K&R, so I knew it wasn't there from the start. I just Googled when it was added, and I got 2001 as the result. Maybe that was given with the precision of an AI result. :)
 
The following users thanked this post: Siwastaja

Offline jfiresto

  • Frequent Contributor
  • **
  • Posts: 900
  • Country: de
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #124 on: December 02, 2024, 04:17:24 pm »
No it wasn't. It was a high priority requirement, yet still didn't make it in until 2001

Wat? You can definitely see that it was present in C89 / ANSI C in 1990 (see e.g. https://www.yodaiken.com/wp-content/uploads/2021/05/ansi-iso-9899-1990-1.pdf )

Or do you mean compiler support for it was lacking before 2001?
Interesting. It certainly wasn't in K&R, so I knew it wasn't there from the start. I just Googled when it was added, and I got 2001 as the result. Maybe that was given with the precision of an AI result. :)

It is in the second edition of K&R, even if some may consider it a new testament heresy.
-John
 
The following users thanked this post: Siwastaja

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #125 on: December 02, 2024, 04:28:34 pm »
It is in the second edition of K&R, even if some may consider it a new testament heresy.
The second edition of K&R is from around the time of the C89 spec, and basically documents what is in that spec. So, if volatile went into the C89 spec I would expect it to be in the second edition of K&R. I had been writing in C based on the original K&R for over a decade by then.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #126 on: December 02, 2024, 05:17:15 pm »
Quote
Bullshit. Try qualifying every variable in your program volatile and see what happens. I can assure you the impact on speed and code size is not "negligible".

I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

And that is how C was in the 1980s. Variables explicitly expected in RAM didn't get optimised away.

It's not a big thing, especially since globals (declared extern in other .c files) are inherently not optimised away. Good idea to keep RTOS tasks which share RAM variables in separate .c files :)

BTW, re my old MbedTLS query, I can see from a test it is running that it supports "Using TLS ciphersuite: TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256" so no need for MbedTLS 3 even for Chacha20.
« Last Edit: December 02, 2024, 05:35:19 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #127 on: December 02, 2024, 05:43:31 pm »
I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

I encourage you to try. Try it for example on mbedtls state variables.

Quite obviously the need for the volatile keyword, which was already codified in the official standard in the 1980s - meaning it was discussed many years before that - stems from optimizations. Even early compilers did obvious optimizations like "caching" close-by operations and reducing loads/stores, such that a global/static was loaded, operated on in CPU registers, then stored back to memory. It makes perfect sense, because computers back then had little storage space and memory, pretty similar to today's embedded targets.
« Last Edit: December 02, 2024, 07:27:21 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #128 on: December 02, 2024, 05:50:01 pm »
Quite obviously the need for the volatile keyword, which was already realized into official standard in the 80's - meaning it was discussed many years before that - stems from the optimizations. Even early compilers did obvious optimizations like "caching" close-by operations and reducing loads/stores, such that global/static was loaded, then operated on in CPU registers, then stored back to memory. It makes perfect sense, because computers back then had little storage space and memory, pretty similar to today's embedded  targets.
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #129 on: December 02, 2024, 07:17:55 pm »
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.

It is exactly because of optimizations (as peter-h defines them). The reason why interrupts do not work without the volatile qualifier is exactly because compiler "caches" the value of a memory-stored variable in a register instead of repeatedly loading and storing it. This is a very primitive form of optimization which was obvious already in 1980's.

The first C standard from the 1980s I linked to above already defines the volatile keyword as such:
"A volatile declaration may be used to describe an object corresponding to a memory-mapped
input/output port or an object accessed by an asynchronous interrupting function. Actions on objects
so declared shall not be “optimized out” by an implementation or reordered except as permitted by the
rules for evaluating expressions"
(emphasis added)

Your misconception lives strong: people assume volatile does more than prevent that type of optimization, being some kind of "make shared data work" keyword, which it isn't and never was. All it does is prevent said type of optimization, which may or may not be sufficient for interrupt signalling. More than just volatile may be needed if the underlying accesses are not atomic, as they often are not (e.g., a critical section made by disabling interrupts).
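A hosted-C stand-in for the interrupt case (sketch: raise() plays the role of the hardware interrupt firing; on a real target this would be an ISR writing the flag):

```c
#include <signal.h>

/* volatile forces a reload from memory on every loop iteration, and
   sig_atomic_t guarantees the access itself is atomic with respect to
   the handler. Without volatile, -O2 may cache 'done' in a register
   and turn the wait loop into while(1). */
static volatile sig_atomic_t done = 0;

static void on_signal(int sig) { (void)sig; done = 1; }

int wait_for_event(void)
{
    signal(SIGUSR1, on_signal);
    raise(SIGUSR1);        /* stand-in for a hardware interrupt */
    while (!done)
        ;                  /* spin until the "ISR" sets the flag */
    return (int)done;
}
```

Note this is exactly the "prevent caching" guarantee and nothing more: for multi-byte shared data, volatile alone does not make the read-modify-write sequence atomic.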
« Last Edit: December 02, 2024, 07:30:35 pm by Siwastaja »
 
The following users thanked this post: newbrain, SiliconWizard

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #130 on: December 02, 2024, 07:28:18 pm »
I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

I encourage you to try. Try it for example on mbedtls state variables.

I know peter-h won't bother, so I did a quick test. Picked up a random code module, a (patent-pending) algorithm which detects patterns on ADC data, calculating stuff like power factors, real powers, rms currents etc., while doing usual housekeeping on an embedded system.

Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

I won't call that "negligible". This is not even mentioning performance, which does matter, too.

And that is without mentioning the fact that limiting this to globals and statics makes little sense: to see the full extent of peter-h's idea, volatile would have to be added to everything that does not fit in CPU registers.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #131 on: December 02, 2024, 07:47:29 pm »
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.

It is exactly because of optimizations (as peter-h defines them). The reason interrupts do not work without the volatile qualifier is exactly that the compiler "caches" the value of a memory-stored variable in a register instead of repeatedly loading and storing it. This is a very primitive form of optimization, already obvious in the 1980s.

The first C standard, from the 1980s, which I linked to above, already defines the volatile keyword as follows:
"A volatile declaration may be used to describe an object corresponding to a memory-mapped input/output port or an object accessed by an asynchronously interrupting function. Actions on objects so declared shall not be “optimized out” by an implementation or reordered except as permitted by the rules for evaluating expressions."
(emphasis added)

Your misconception lives on: people assume volatile does more than prevent that type of optimization, that it is some kind of "make shared data work" keyword, which it isn't and never was. All it does is prevent said type of optimization, which may or may not be sufficient for interrupt signalling. When the underlying accesses are not atomic, as they often are not, more than volatile is needed (e.g. a barrier formed by disabling interrupts).
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity that causes for just what "memory mapped" actually means. Today we have instructions like CAS and DCAS in complex processors, and things like threading won't work properly without them and a full understanding of the memory-ordering behaviour OOO processors bring. It's a different time now.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #132 on: December 02, 2024, 07:53:51 pm »
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity that causes for just what "memory mapped" actually means.

You are mixing things up and pouring more alphabet soup into the mix.

Just volatile alone does not work for shared data today even on simple, cacheless CPUs like AVRs or PICs, just like it did not work in the 1980s on computers of similar complexity. Volatile is only part of the solution: other kinds of guards are needed when memory accesses are not atomic, as will be the case whenever the object is larger than the memory bus width. For example, in the 1980s, 16-bit systems were a popular target for the C language, and 32-bit long ints needed more than just volatile: for example, disabling interrupts during the update, or using atomic types as mutexes.

If you think nothing more than volatile was needed for shared data, your programs worked by sheer luck in the 1980s. Just like many programs work by sheer luck today. Bugs related to shared data (e.g. in interrupts) are a real PITA to find, and if the variables update rarely and the ISRs trigger rarely, it can take weeks of runtime to see the effect of the bug (and then finding it is much more difficult). Random wrong behavior is the result.

But really, internetz is full of good information and tutorials about this whole thing, I should not be lecturing such basics here.
« Last Edit: December 02, 2024, 07:56:44 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #133 on: December 02, 2024, 07:58:04 pm »
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity that causes for just what "memory mapped" actually means.

You are mixing things up and pouring more alphabet soup into the mix.

Just volatile alone does not work for shared data today even on simple, cacheless CPUs like AVRs or PICs, just like it did not work in the 1980s on computers of similar complexity. Volatile is only part of the solution: other kinds of guards are needed when memory accesses are not atomic, as will be the case whenever the object is larger than the memory bus width. For example, in the 1980s, 16-bit systems were a popular target for the C language, and 32-bit long ints needed more than just volatile: for example, disabling interrupts during the update, or using atomic types as mutexes.

If you think nothing more than volatile was needed for shared data, your programs worked by sheer luck in the 1980s. Just like many programs work by sheer luck today. Bugs related to shared data (e.g. in interrupts) are a real PITA to find, and if the variables update rarely and the ISRs trigger rarely, it can take weeks of runtime to see the effect of the bug (and then finding it is much more difficult). Random wrong behavior is the result.

But really, internetz is full of good information and tutorials about this whole thing, I should not be lecturing such basics here.
Do you expect every post to point out the blatantly obvious? I write assuming I am writing to someone with some basic knowledge of the topic.
 
The following users thanked this post: Siwastaja

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #134 on: December 02, 2024, 10:40:28 pm »
Quote
Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline mwb1100

  • Frequent Contributor
  • **
  • Posts: 614
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #135 on: December 02, 2024, 11:16:42 pm »
I don't know if definitive information was posted about the status of the KickStart/code-size-limited versions of the IAR toolchains (the bulk of the thread spun off into comparing compiler optimizations, etc.), but here's the dope straight from IAR (emphasis added):

Quote
Hope you’re doing well! I am the account manager that covers Washington for IAR, so I’m happy to assist you. I saw your note about the kickstart/code-size limited version of our Embedded Workbench licenses. Unfortunately, we stopped providing the type of license earlier this year. We do have options for purchasing a perpetual license if you’d like to discuss that.

The OP might want to edit that into the opening post so that anyone who stumbles onto this thread wondering about the status of IAR's free/hobbyist/student oriented toolchains will actually get an answer instead of having to search through 6 or more pages of compiler wars.
« Last Edit: December 02, 2024, 11:21:17 pm by mwb1100 »
 
The following users thanked this post: mark03, cfbsoftware

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #136 on: December 03, 2024, 06:37:43 am »
Do you expect every post to point out the blatantly obvious? I write assuming I am writing to someone with some basic knowledge of the topic.

It may be obvious to you, but it really isn't obvious to everyone. Getting the basics right is the first stepping stone to understanding more advanced stuff.

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.

I invite you to try it out with some other piece of code. What you say about "tightly written" is odd, because for non-tight code the result would be even worse.

I mean, just look at how CMSIS or the STM32 libraries are written; they regularly take a temporary copy of an I/O variable to manipulate it, even where they do not care much about performance. But the difference is just so big, easily 2-3x.

The CPU indeed spends 99% of the time running 1% of the code, which is exactly why it is so important to optimize away unnecessary shuffling of data back and forth between CPU registers and memory within that small 1% loop. This was already realized in the 1980s, and C compilers already did this optimization back then because it was so necessary; and precisely because compilers did it, the standardization body included the volatile and const qualifiers from the beginning.

The difference on simple processors is maybe just 2-3x in execution time; add caches to the mix and we are talking possibly 100x difference.

What you propose was not feasible in the 1980's, and is even less feasible today.

Not having a volatile qualifier is possible, and that is what many modern C replacements do, but I can assure you they don't choose to guarantee a memory access for every variable access; instead, they just always optimize and do not allow users to intervene in that in any way. Which means those languages need some different, higher-level construct for multiprocessing/memory mapping - which is of course a better idea for a typical programmer.
« Last Edit: December 03, 2024, 06:44:18 am by Siwastaja »
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #137 on: December 03, 2024, 08:47:53 am »
Quote
Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.

from time to time I take some of my ooold projects (working products, as you say) and rewrite them to my current standards. In the past, when I was way less experienced, I used a few big files with everything in them, big structures holding everything, made everything volatile because some things needed it, and I wanted the compiler to shut up about casting. Current projects have many, many smaller files with dedicated functions and scope, getters and setters for private members of structures, volatile only where actually needed (multithreaded/interrupt), and assembly modules instead of trickery or walls of volatile asm I had to micromanage. On most of them, code size is down about 40%, with considerable speed gains.

I don't know if definitive information was posted about the the status of KickStart/code-size-limited  versions of the IAR toolchains (the bulk of the thread spun off into comparing compiler optimizations, etc.), but here's the dope straight from IAR (emphasis added):

Quote
Hope you’re doing well! I am the account manager that covers Washington for IAR, so I’m happy to assist you. I saw your note about the kickstart/code-size limited version of our Embedded Workbench licenses. Unfortunately, we stopped providing the type of license earlier this year. We do have options for purchasing a perpetual license if you’d like to discuss that.

The OP might want to edit that into the opening post so that anyone who stumbles onto this thread wondering about the status of IAR's free/hobbyist/student oriented toolchains will actually get an answer instead of having to search through 6 or more pages of compiler wars.

Sorry, rule 25 of the internet :)
« Last Edit: December 03, 2024, 08:51:11 am by JPortici »
 
The following users thanked this post: Siwastaja

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #138 on: December 03, 2024, 12:05:06 pm »
Quote
Sorry, rule 25 of the internet

I am a mod/admin on a tech forum (not electronics) and I would have simply moved the compiler discussion into a general "compiler optimisation discussion" thread. But clearly on EEVBLOG there isn't the time available for doing that; it is about 10x bigger in daily post count than mine.

Quote
from time to time i take some of my ooold projects (working products, as you say) and rewrite them to my current standards. In the past when i was way less experienced i used few big files with everything in it, big structures holding everything, do everything volatile because some things needed it, and i wanted the compiler to shut up about casting. Current projects have many, many smaller files with dedicated functions and scope, getters and setters to private member of structures, volatile only where actually needed (multithreaded/interrupt), assembly modules instead of trickery or walls of volatile asm i had to micromanage. On most of them code size down about 40%, and cosiderable speed gains

I don't doubt that, but if a product sells and is proven reliable in the marketplace over years, I would not change anything on it unless necessary. I don't even change the brand of a capacitor (used in the output filter of a SMPS) until I have built a few circuits with it, tested them over temperature etc, and sent them out, and nothing has come back after a year.

Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #139 on: December 03, 2024, 12:54:17 pm »
I don't doubt that, but if a product sells and is proven reliable in the marketplace over years, I would not change anything on it unless necessary. I don't even change the brand of a capacitor (used in the output filter of a SMPS) until I have built a few circuits with it, tested them over temperature etc, and sent them out, and nothing has come back after a year.

Well, if your design flow does not handle switching compiler versions, which is perfectly normal, then you simply freeze your tools. That's a sensible thing to do and works especially well with open-source tools. I mean, you can find as old a version of gcc online as you wish, and use it on any system where it originally worked, and it still works. And with virtual machines, stuff like this is easier than ever. At least old software stays the same bit-by-bit; compare this to capacitors, which you cannot keep using if the manufacturer stops making them, and which would have batch-to-batch variations anyway...

But sometimes doing a design refresh cycle might be a good idea even if the product sells well as it is. Listen to the market needs for improvements; do a fresh design so that the youngsters in the company can take it over. Although doing that haphazardly is not a good idea: it is easy to step on the mine of using $current_trend_tool_of_the_year, which has a much shorter lifetime than even K&R C before C89 had - and that is still workable, quite an achievement. For example, if you rewrote your stuff in that New C Everybody Will Be Using Because Google Uses It language which everybody talked about just 3-4 years ago and whose name I forgot, you are going to rewrite it now in Rust - and again in something else in just 5 years.
« Last Edit: December 03, 2024, 12:56:10 pm by Siwastaja »
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #140 on: December 03, 2024, 01:25:05 pm »
Disagree. I don't keep crappy code just because "it works"; this is how you build a house of cards. I do have comments in old projects a la "do not move this statement around" or "don't remove this variable", and that is 99.9% crappy design on my part (again, I was inexperienced in both C and development in general back then), or things cobbled together, or incremental additions without any structure. It may be working, but it's shit code. Almost all my projects have been refactored over time, or rewritten from the ground up, and it's much, much, much easier to add new features without the castle falling down.

We do have projects from before my time which are frozen, with a series of comments at the beginning stating what to change to go from behaviour X to Y. They are still there because they are old applications that we can't, or don't want to, properly test, so they stay as they are, but I refuse to touch them. Most projects were like this; it was a mess. One of the things I did was make a parameter out of everything that made sense, and make it programmable, so it was ONE firmware I had to keep updating instead of managing, say, ten, times X for every project. That was the spawn of the "don't change anything" mentality, which was actually incremental builds with a lack of planning: don't look at it wrong or it won't work.

Sometimes I do find actual compiler bugs (which get reported, then fixed in the following release), so whenever there is an update I run my tests on some projects, measure the changes, see if the bugs have been effectively solved, and then I update. Every compiler update is a breath of fresh air because of better diagnostics and/or better code generation, and I get to remove the workarounds that would ultimately become shit code.
« Last Edit: December 03, 2024, 01:31:51 pm by JPortici »
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #141 on: December 03, 2024, 03:34:11 pm »
Disagree. I don't keep crappy code just because "it works", this is how you build house of cards, i do have comments in old projects a la "do not move this statement around" or "don't remove this variable" and that is 99.9% crappy design on me (again, i was inexperienced in both C and development in general back then), or things cobbled together, or incremental additions without any structure. it may be working, but it's shit code. Almost all my projects over time have been refactored, or rewritten from the ground up, and it's much, much, much easier to add new features without the castle falling down.

Of course we agree on this, but the reality is, some embedded device manufacturers and developers do not think they are dealing with a software project at all and will not consider any sound software development practices; some even just program the same binary decade after decade. It's no different from freezing a mold for plastic manufacturing or something. Freezing tools to be able to do minor fixes is just a small step further from that.

Besides, past incremental cost does not matter to future decisions. It is easy to fool oneself into not making necessary investments because they look more expensive than "just fixing this one little thing". Then again, there is also considerable risk in starting a big renovation. V2.0 rarely makes financial sense, second-system syndrome is very real too, and the fact that you succeed in major rewrites tells more about you than about the software (let alone hardware) industry as a whole. How are companies supposed to find people like you, and do it reliably?

If they have something which works and needs a little patch every now and then, even if it's band-aid over band-aid and ugly, that carries some risk (mostly related to the one person who knows how it works retiring, or poorly kept backups getting destroyed, etc.). But starting a major overhaul carries a potentially much bigger risk: it can become a massive time sink which ends up less reliable than the old system and in need of being replaced again in only a few years. Publicly funded software (e.g. healthcare information systems) is a typical example, at least here. Fearing that, I'm not surprised that sensible boards of directors are not too fond of the idea of rewriting software systems, even if we engineers would prefer it and describe the old systems with very strong words.
« Last Edit: December 03, 2024, 03:37:00 pm by Siwastaja »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #142 on: December 03, 2024, 04:41:53 pm »
This topic is much bigger than freezing a C compiler. What about the PCB tools? Yes it may be that you can find old GCC versions online but you were dumb to not archive your tools originally. Today's PCB tools are often rented, which probably means no chance of archiving. Then you have schematic tools, though nowadays usually integrated with PCB tools.

So there is a whole philosophy of whether to freeze a selling project or not.

Spinning off a new and improved version is a totally different discussion.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: Siwastaja

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #143 on: December 03, 2024, 05:12:41 pm »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.
 
The following users thanked this post: peter-h, Siwastaja

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10121
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #144 on: December 03, 2024, 05:33:13 pm »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.
I think that depends how you see the word crappy. There is plenty of crappy code dealing with broken hardware and other quirks, which, while crappy, has no known better alternative. There's crappy code for something short term, where you are monitoring for any unfortunate side effects, and it gets the job done. Then there's the genuine garbage, that's for the long term, and really ought to be properly addressed.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #145 on: December 03, 2024, 05:36:13 pm »
What about the PCB tools?

I wrote my own. First needed it to make single-layer TH PCBs, then, as technologies evolved added features here and there when I needed. Now it can do multi-layer, length matching, other useful things.
 
The following users thanked this post: Siwastaja

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3577
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #146 on: December 04, 2024, 09:47:09 am »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.

A measure of crappiness can be the absence of structure and consistency, the presence of hacks to coerce the compiler into doing what you think it should do, like abusing globals and volatile, mixing C and assembly when there is almost always a better/proper way to do things, and difficulty in adding functionality because it will have side effects on other parts of the code that are difficult to change due to all the above.
In the last few years I rewrote several of my old firmwares from the ground up. About 80-85% of the time was spent defining the actual behaviour to replicate (i.e. the specification), 10% on A/B testing and 5% on actual coding; since then, adding new features has been much, much easier.
The Embedded Muse was full of such cases and examples: a rewrite of problematic software can have a "high" initial cost (which, again, is defining in detail the actual specification of the current firmware), but pays off almost immediately.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #147 on: December 04, 2024, 09:49:05 am »
What about the PCB tools?

I wrote my own. First needed it to make single-layer TH PCBs, then, as technologies evolved added features here and there when I needed. Now it can do multi-layer, length matching, other useful things.

I'd be curious to have a look, if you have some screenshots.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #148 on: December 04, 2024, 09:55:47 am »
Another example of this is gcc's -fno-delete-null-pointer-checks.  It'd be like having a car where, each time to start it, you have to remember to specify -fno-ignore-brake-pedal.

Another alarmistic exaggeration: for example, I have never used -fno-delete-null-pointer-checks, and did not even know about it before this thread. I have never seen any project use it. And I have never seen any issues caused by this feature.

I know of several projects that have used it to prevent gcc from deleting null pointer checks. This is presumably why it was added to gcc; I'm pretty sure they wouldn't just throw it in on a dare.
« Last Edit: December 04, 2024, 10:00:50 am by 5U4GB »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #149 on: December 04, 2024, 02:25:35 pm »
I'd be curious to have a look, if you have some screenshots.

Sure

 
The following users thanked this post: Siwastaja

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #150 on: December 04, 2024, 02:43:44 pm »
A measure of crappiness can be the absence of structure and consistency, the presence of hacks to coerce the compiler into doing what you think it should do, like abusing globals and volatile, mixing C and assembly when there is almost always a better/proper way to do things, and difficulty in adding functionality because it will have side effects on other parts of the code that are difficult to change due to all the above.
In the last few years I rewrote several of my old firmwares from the ground up. About 80-85% of the time was spent defining the actual behaviour to replicate (i.e. the specification), 10% on A/B testing and 5% on actual coding; since then, adding new features has been much, much easier.
The Embedded Muse was full of such cases and examples: a rewrite of problematic software can have a "high" initial cost (which, again, is defining in detail the actual specification of the current firmware), but pays off almost immediately.

I also spend most of the time on designing, and relatively little time on coding.

However, I don't think a "proper" way of doing things exists, and, if necessary, I would do whatever it takes to coerce the compiler to implement my design exactly as I want it. But, most of the time (like 99.9%) I simply write to C99 standard and it works.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #151 on: December 04, 2024, 03:53:19 pm »
But, most of the time (like 99.9%) I simply write to C99 standard and it works.

This is the key. It seems that a very small percentage of programmers are in a constant fight against GCC, yet most do fine.

And the latter group includes the most demanding users of GCC, in projects like the Linux kernel, who indeed have had fights with GCC but still understand that they do not want to go back to the 1980s or switch compilers.

And when asked to enumerate their actual gripes, as in this thread, it's always the same story: 1% actual GCC stupidity, 99% unnecessary complaining (often complaining Just In Case, even when everything works normally).

If working with gcc is as difficult as it is for a few individuals here, I would suggest looking in the mirror for a change. I mean, blaming others by default is a natural human coping mechanism, but it really isn't a way forward. Not only will these people lose their internet fights, they will also fail to deliver good software, because they spend their time inefficiently, blaming others and never learning.
« Last Edit: December 04, 2024, 03:55:29 pm by Siwastaja »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4411
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #152 on: December 04, 2024, 05:42:12 pm »
Time to get out of reading this thread, with the repeated insinuations by Siwastaja that somebody is some kind of a retard.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline cfbsoftware

  • Regular Contributor
  • *
  • Posts: 137
  • Country: au
    • Astrobe: Oberon IDE for Cortex-M and FPGA Development
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #153 on: December 04, 2024, 09:19:21 pm »
Time to get out of reading this thread, with the repeated insinuations by Siwastaja that somebody is some kind of a retard.
Excellent idea. This might help to put it into perspective:

https://www.xkcd.com/1048/
Chris Burrows
CFB Software
https://www.astrobe.com
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #154 on: December 05, 2024, 01:19:55 am »
Time to get out of reading this thread, with the repeated insinuations by Siwastaja that somebody is some kind of a retard.

They're either a gcc maintainer or have the attitude of the gcc maintainers, "we're soooo much cleverer than you and anything that happens is your fault because you're an idiot".

Maybe we need a new thread, "Discussion about safe compilers for use with mission-critical code" or similar?
 
The following users thanked this post: cfbsoftware

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #155 on: December 05, 2024, 07:21:42 am »
They're either a gcc maintainer or have the attitude of the gcc maintainers, "we're soooo much cleverer than you and anything that happens is your fault because you're an idiot".

You missed one: those who have blamed others for their own problems in the past, looked in the mirror, started using common sense, and stopped panicking.

By common sense I mean this: gcc is used in mission-critical software all the time. If that is so, can it really be the case that using GCC is equivalent to driving a car with no brake pedal? Of course not. So reconsider and check your arguments more carefully until they match reality.

After blaming my tools (GCC and others) so many times and later seeing I was wrong, I feel the least I can do is investigate properly before blaming others. And as has been shown in this thread already, most of your complaints can be proved wrong too; the only issue is that you are not willing to admit it. This does not negate the fact, though, that you are partially right.

If you are in search of a perfect tool / perfect compiler / perfect language, I have very sad news for you: it does not exist.

And don't forget what I already mentioned: I totally agree with you about the gcc maintainers' attitude problems; it's a well-documented phenomenon. You are just blowing it up to proportions that prevent you from moving forward with your life and projects. You are creating excuses for yourself and others. And you are misleading people like peter-h, who do not need any more rationalization of their made-up problems. Complaining and feeding others with poorly laid out, mostly made-up complaints helps no one. If you find that the gcc developers' attitude causes problems for others, how about questioning what your attitude does to yourself - and to others.
« Last Edit: December 05, 2024, 07:37:36 am by Siwastaja »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #156 on: December 05, 2024, 10:27:03 am »
Time to get out of reading this thread, with the repeated insinuations by Siwastaja that somebody is some kind of a retard.

They're either a gcc maintainer or have the attitude of the gcc maintainers, "we're soooo much cleverer than you and anything that happens is your fault because you're an idiot".

Maybe we need a new thread, "Discussion about safe compilers for use with mission-critical code" or similar?
You should add to that: 'for programming languages people actually want to use'. Ada has been around for a long time and it was developed for doing mission critical stuff. For some reason people wanted to re-invent this wheel and call it Rust. In the meantime microcontrollers have become fast enough to run Python; a language half the world knows how to program in and doesn't have all the pitfalls C/C++ expose a software developer to.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #157 on: December 05, 2024, 12:08:15 pm »
It depends on what you see as the biggest problem you're dealing with.  Ada, for example, is a well-designed language but has very little tool or industry support compared to what C has.  I think something like Eiffel, or the early Modulas before they went off into the weeds, is even nicer, but those have even less support.  Also, in some cases you want C as a high-level assembler with very precise control over what's going on, which high-level features like Rust's memory management, which hide all of that, can't give you - it's a feature for some but an anti-feature for others.

Another big issue is that if your entire environment is C then you can't afford to be the one pushing for your favourite language, or at least not if you expect to still have customers.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #158 on: December 05, 2024, 01:20:51 pm »
Another big issue is that if your entire environment is C then you can't afford to be the one pushing for your favourite language, or at least not if you expect to still have customers.
Interestingly, it has been the other way around for several projects I have made. I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high-level functionality in an easy way. The customers didn't want to mess with C and compilers at all. Just upload a new script. IMHO you can't say C/C++ are the de facto programming languages in general nowadays, and I think this has been the case for at least 10 years. At the same time, I'm quite sure C/C++ will stick around for a very long time.
« Last Edit: December 05, 2024, 01:30:07 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9431
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #159 on: December 05, 2024, 02:45:07 pm »
I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high level function in an easy way. The customers didn't want to mess with C and compilers at all. Just upload a new script.

If you have the technical resources to enable this, it's a winning recipe. But it does require quite some resources for the interpreter, and a specification of how it interfaces with the lower-level stuff. If you have something like a general-purpose single-board computer already running Linux, though, this is pretty easy to pull off.

There is something to be learned from the game industry: they came up with simple scripting languages as far back as the mid-90s, while the game engine itself was really tightly optimized (and closed-source) C++. The scripting extension allowed quite complex "mods" to be made, some resembling an entirely new game.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #160 on: December 05, 2024, 03:16:44 pm »
I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high level function in an easy way. The customers didn't want to mess with C and compilers at all. Just upload a new script.

If you have technical resources to enable this, it's a winner recipe. But it does require quite some resources for the interpreter, and specification how it interfaces with the lower level stuff. But if you have something like a general-purpose single-board computer already running linux then this is pretty easy to pull off.
Actually, Python and Lua are designed to be add-on languages to C, so interfacing between scripted and non-scripted code is very easy to implement. And no, you don't need a single-board computer. A microcontroller with 128k flash and 64k RAM is enough to run Lua scripts plus the compiler; double that for MicroPython. There is also the option to pre-compile the scripts into byte code and run that, so you can leave the compiler out, but I never bothered to do that.
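To illustrate the split between scripted and non-scripted code, here is a minimal sketch in plain desktop Python (not MicroPython, and not from any real product - the names read_sensor and set_output are invented stand-ins for C functions that would be bound into the interpreter). The firmware exposes a small API, and the customer-supplied script only calls into that:

```python
# Sketch: firmware exposes a tiny API; a customer script drives the
# high-level logic. All names here are illustrative assumptions.

def read_sensor(channel):
    # Stand-in for a C binding that reads hardware.
    return {0: 17.5, 1: 42.0}[channel]

outputs = {}

def set_output(pin, value):
    # Stand-in for a C binding that drives an output pin.
    outputs[pin] = value

# The "uploaded script": plain source text, e.g. stored in flash.
customer_script = """
temp = read_sensor(0)
set_output(3, temp > 20.0)   # fan on above 20 degrees
"""

# Run the script with access to only the exported API names.
api = {"read_sensor": read_sensor, "set_output": set_output}
exec(customer_script, api)

print(outputs)  # {3: False}
```

With Lua the same pattern uses the C API (lua_register / lua_pcall) instead of exec, but the shape is identical: the host owns the bindings, the script owns the policy.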
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3301
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #161 on: December 05, 2024, 06:58:55 pm »
I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high level function in an easy way. The customers didn't want to mess with C and compilers at all.

Of course. If someone cannot write in C they have to use something else.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15901
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #162 on: December 05, 2024, 10:35:56 pm »
Lua is great for scripting with a low footprint, but it still definitely requires some code and RAM space - not a fit for very small MCUs. Also, "embedded" Lua almost always requires not using Lua tables (as these are allocated dynamically and can be quite "hungry" in terms of RAM), and frankly, to me (a now long-time and happy user of Lua), Lua without tables is like, uh, C without pointers. Tables are IMO really the core of Lua.

But certainly, a subset of Lua can still be used for simple scripting. I'm curious to know what users of embedded Lua think about that.

Regarding selling products with "programmability", I agree that, unless you target only pro users in a specific niche, giving programmability with C and at the firmware level (I mean, you could always provide a C interface and only allow some kind of "plugins" while not giving access to the full firmware, which is already a bit less painful) can be a nightmare, and simple scripting is much, much easier both for the customers and for the vendor.

 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 634
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #163 on: December 06, 2024, 05:15:57 am »
Interestingly it has been the other way around for several projects I have made. I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high level function in an easy way.

Lua is pretty nice as a C add-on, but that also requires customers who want to do the programming themselves rather than paying the vendor to do it for them.  I've been involved in a couple of projects where the techies had a lot of influence on development and decided to make it programmable so it'd be really flexible and cool and extendable, and then after spending a fortune on it found that what users wanted was an out-of-the-box solution which their competitors with their non-programmable devices were selling for a fraction of the price.

To put it another way, given the choice between Home Assistant ("welcome to your new hobby!") and, say, HomeKit or WeMo ("welcome to your smart home"), most people would choose the latter.
 
The following users thanked this post: Siwastaja

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28487
  • Country: nl
    • NCT Developments
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #164 on: December 10, 2024, 11:03:22 pm »
Interestingly it has been the other way around for several projects I have made. I have implemented Lua and (more recently) Python scripting support to allow customers to modify the high level function in an easy way.

Lua is pretty nice as a C add-on, but that also requires customers who want to do the programming themselves rather than paying the vendor to do it for them.  I've been involved in a couple of projects where the techies had a lot of influence on development and decided to make it programmable so it'd be really flexible and cool and extendable, and then after spending a fortune on it found that what users wanted was an out-of-the-box solution which their competitors with their non-programmable devices were selling for a fraction of the price.
That is called 'feature creep'; a different problem.

The trick to successfully embedding a form of scripting is not to implement it as an add-on but to use it to implement the high-level logic by design. That way you don't lose any development time AND you satisfy customer requirements at the same time. Also, the customer isn't required to program anything themselves: they can use the manufacturer-provided script, but they have the option to modify the script if they want to.
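That "vendor script by default, customer override if present" idea can be sketched in a few lines of desktop Python (the scripts and the run_logic helper are invented for illustration, not taken from any real firmware):

```python
# Sketch: the product ships with a vendor-written script implementing the
# high-level logic; if the customer uploads their own, that runs instead.
# Script contents and names here are illustrative assumptions.

VENDOR_SCRIPT = "state['mode'] = 'auto'\n"   # shipped default behaviour

def run_logic(customer_script=None):
    """Run the customer's script if one was uploaded, else the vendor default."""
    source = customer_script if customer_script is not None else VENDOR_SCRIPT
    state = {}                       # interface between script and firmware
    exec(source, {"state": state})
    return state

print(run_logic())                              # vendor default: {'mode': 'auto'}
print(run_logic("state['mode'] = 'eco'\n"))     # customer override: {'mode': 'eco'}
```

The point is that the scripting layer is the high-level logic from day one, so there is no separate "programmability feature" to pay for later.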
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

