Author Topic: No more code-size-limited version of IAR embedded workbench for ARM?  (Read 13909 times)


Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2130
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #25 on: November 19, 2024, 09:03:18 pm »
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work but fails with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial, and reject the fix.

Meanwhile, the code doesn't work.  But at least it's fast.
 

Offline temperance

  • Frequent Contributor
  • **
  • Posts: 756
  • Country: 00
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #26 on: November 19, 2024, 09:24:25 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.

Are you allowed, or do you allow yourself, to simply replace or upgrade the compiler when writing code for a mission-critical system, such that while(1) loops can mysteriously disappear at some point? I would think that as a developer of such code you would know and understand the tools you are working with inside out, and that replacing the compiler with the latest version would require you to study the manual thoroughly before anything is replaced, to avoid exactly such mysterious problems.
 
The following users thanked this post: Siwastaja

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #27 on: November 19, 2024, 09:29:06 pm »
This is really common, and really annoying. So often, especially with open source projects for some reason, the only explanation offered for code that used to work but fails with a newer compiler is that the compiler is buggy. No verification at all. No introspection at all. Even when you submit a proper fix, they will often still be in denial, and reject the fix.

Meanwhile, the code doesn't work.  But at least it's fast.
Of course not. They either insist on using an older compiler until the latest one is "fixed", or they set up certain components of the project to compile with a lower optimisation level which can be "trusted".
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #28 on: November 19, 2024, 09:36:10 pm »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.
So, you are unhappy that a later compiler does a better job, and you didn't signpost the odd behaviour you expected, so it gets optimised away. This is like all the "bugs" people whine about when they haven't put "volatile" in all the right places, or haven't read what volatile actually means in the C spec. This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.
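For example, a minimal sketch (mine, assuming GCC or Clang on a Cortex-M) of a halt-until-watchdog loop written so the optimiser is not allowed to remove it:

Code: [Select]
/* Fatal-error handler: spin until the watchdog resets the chip.  The
   volatile asm statement counts as a side effect the compiler must
   preserve, so the loop survives -O2/-O3 and LTO. */
__attribute__((noreturn)) void fatal_halt(void)
{
    for (;;) {
        __asm__ volatile ("nop");   /* or "wfi" to sleep while waiting */
    }
}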
 
The following users thanked this post: newbrain

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 16097
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #29 on: November 19, 2024, 09:55:54 pm »
As the saying goes:
Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2130
  • Country: us
    • KE5FX.COM
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #30 on: November 19, 2024, 10:24:00 pm »
As the saying goes:
Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.

Or until he runs out of dynamite.
 
The following users thanked this post: 5U4GB

Offline temperance

  • Frequent Contributor
  • **
  • Posts: 756
  • Country: 00
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #31 on: November 20, 2024, 01:19:22 am »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.  That writeup actually mentions an issue I've run into on an RTOS-controlled device which, on critical errors, would drop into a while(1) until the watchdog restarted the system, thus providing rejuvenation-based error recovery.  Except that at some point gcc decided to silently remove the while(1) code so that on error it continued on in the error state.  There are plenty of related writeups that go into this, e.g. this one for security-specific issues and this for just outright WTF-ery.

Do the embedded-targeted compilers like Keil/Arm have this problem, or do they create object code that follows the programmer's intent?  Segger ES AFAIK is based on clang so would have the problems mentioned in the linked articles.
So, you are unhappy that a later compiler does a better job, and you didn't signpost the odd behaviour you expected, so it gets optimised away. This is like all the "bugs" people whine about when they haven't put "volatile" in all the right places, or haven't read what volatile actually means in the C spec. This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.


A poor understanding of compilers and of the subtleties of a programming language are exactly the things which books and courses on the subject like to skip. After reading a book on C, most people think they are ready to attack whatever problem comes along. That is very far from the truth. You have to understand the compiler, the machine architecture and its instruction set, recognize their limitations, and learn to work within those. Without this insight anything can and will go wrong sooner or later. Maybe books and courses should present not working code but carefully crafted broken code, for which you have to get your hands dirty.
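Something like this, say (my own example, not from any particular book) - it compiles without a squeak and appears to work on many targets, yet is broken:

Code: [Select]
#include <string.h>

void save_id(void)
{
    char buf[8];
    strcpy(buf, "ABCDEFGH");   /* 8 chars + '\0' = 9 bytes written into
                                  an 8-byte buffer: UB that often "works"
                                  until the stack layout changes */
}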
 
The following users thanked this post: rhodges

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 678
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #32 on: November 20, 2024, 04:33:52 am »
GCC itself is a fine tool.

... as long as you don't use it for any kind of mission-critical code.

Pretty extreme opinion, given that GCC is used in mission-critical code (whatever that could mean) all the time.

Many embedded dev environments that use gcc ship pretty ancient versions that wouldn't have done any of this stuff at the time.  It's an absolute minefield: code that was carefully written and tested to handle exceptional error conditions will appear to work as expected when compiled with a newer version, until one of the once-in-a-blue-moon exception conditions is hit, at which point things blow up because the newer compiler release has removed the safety checks/error handling.

Some safety-critical stuff will actually specify exact compiler versions that the code is to be built with in order to avoid this problem.
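A cheap way to enforce that, as a sketch (the version numbers here are made up): have the preprocessor refuse any compiler the code was not qualified against:

Code: [Select]
/* toolchain_check.h - include from one central header; the build fails
   on any compiler other than the qualified one. */
#if !defined(__GNUC__) || (__GNUC__ != 11) || (__GNUC_MINOR__ != 3)
#  error "This code base is qualified for GCC 11.3 only"
#endif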
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 678
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #33 on: November 20, 2024, 04:41:47 am »
*) because I have been too lazy to give clang a try, I probably should

In many respects it behaves pretty much like gcc, in particular the "Haha, gotcha! We've detected UB, we'll now do whatever we want", but I've found the diagnostics to be much better, vastly so for the clang analyzer, which actually produces useful diagnostics vs. gcc's fifteen-page traces that end up telling you nothing.  I'd definitely give clang a try.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 678
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #34 on: November 20, 2024, 04:57:25 am »
This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.

There's always one.  Some time ago when someone did this, i.e. played the "it's everyone on earth's fault for writing bad code and not the compiler"*** card, I ran some of their code (they were a well-known open-source person) through an experimental SAST tool that tried to find UB, and it found quite a lot of UB in their code.  Glass houses and all that...

When I first ran into the array-bounds-check post I linked to earlier I tested it on a bunch of experienced C developers, at least one of whom started programming (or at least debugging) by toggling front-panel switches on a PDP-11.  Even though they knew, or at least suspected, that there was something hidden in there, none of them could figure out what the code would actually do when compiled, and that was with me more or less pointing at the thing and saying "find the booby-trap".  So if experienced devs can't find the booby-trap when it's waved under their noses, imagine how hard it must be when it's buried in 200kloc, and what else might be in that 200kloc.

*** I know there's an awful lot of crap code out there, but there's also quite a bit of very carefully-written code where the devs simply aren't expecting the compiler to break things based on "yippee, we found some UB, now you're in for it!".
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9528
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #35 on: November 20, 2024, 06:25:09 am »
Meanwhile, the code doesn't work.  But at least it's fast.

And somehow, fixing the code is out of the question?
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9528
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #36 on: November 20, 2024, 06:35:42 am »
"Haha, gotcha! We've detected UB, we'll now do whatever we want"

I mean, that is the only thing to do, by the very definition! They have to choose what they do, based on their own preferences and reasoning, because the standard does not give even a suggestion of what to do. I understand questioning their choices, but I don't understand questioning the fact that they are making choices.

Now the real question is what they choose to do, and that determines whether it is really a "gotcha" or not. Choosing a performance optimization which produces an undesired (for the original programmer) outcome 99% of the time on real codebases would clearly be wrong. But what if the outcome on real codebases is right 50% of the time?

There is nothing wrong with this as long as the choice is driven by common sense and serves real-world programmers. And to me it seems this is the case 99% of the time. There are those "gotcha" incidents (in the worst cases, some innocent-looking UB propagating into a different part of the code flow in a way which is difficult to understand even for an experienced programmer), but there is no proof that these are the driving factor of gcc development, as you make it sound.
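The textbook case of that propagation, as a sketch (this is the pattern behind a well-known Linux kernel bug, not code from this thread):

Code: [Select]
int read_flags(int *p)
{
    int v = *p;        /* if p is NULL this dereference is UB...        */
    if (p == NULL)     /* ...so the compiler may infer p != NULL here
                          and delete this check and its branch entirely */
        return -1;
    return v;
}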

Somehow most users of gcc, and there are a lot of them, survive without ever stepping on these "mines" - "absolute minefield" sounds like alarmist exaggeration. This includes projects like the Linux kernel, which will notice those crappy choices. And there have been a few events in the past where kernel developers were quite mad about the choices made by the gcc folks, sure, but for you and me it is usually best to look in the mirror and fix our own code.
« Last Edit: November 20, 2024, 06:39:52 am by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 16097
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #37 on: November 20, 2024, 07:47:13 am »
The code "breaking" in observable/testable ways when it's not correct (/relies on UB behaving in a certain way with a certain toolchain with a certain version) is actually the best thing you can hope for.
 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9528
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #38 on: November 20, 2024, 08:27:12 am »
The code "breaking" in observable/testable ways when it's not correct (/relies on UB behaving in a certain way with a certain toolchain with a certain version) is actually the best thing you can hope for.

And some while(1) either disappearing when expected to be there, or appearing when not expected to be there, or whole large parts of the program completely disappearing, is usually obvious during testing, even poor-quality testing. Sure, a diagnostic message would be better, but a large visible difference in the program itself is nearly as good. That's why I'm wondering why these kinds of effects get the most hate.

Now, a bounds check being optimized out, and an out-of-bounds access happening because of that, is crazy. It would be hard to find in testing, because programmers assume that since they already added the check (as part of the program) and it worked before, they don't need to add another unit test for the same thing.
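The canonical shape of that one, as a sketch (not the code from the linked post): a range check that relies on signed overflow, which is UB, so the optimiser may assume it can never fire:

Code: [Select]
/* Intended as an overflow guard, but signed overflow is UB, so at -O2
   gcc may fold the condition to constant false and drop the branch.
   The portable fix is to test before adding: len > INT_MAX - offset. */
int fits(int offset, int len)
{
    if (offset + len < offset)   /* "cannot happen" per the optimiser */
        return 0;
    return 1;
}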
« Last Edit: November 20, 2024, 08:28:59 am by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 16097
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #39 on: November 20, 2024, 08:34:30 am »
That being said, I'm wondering about this 'while (1)' thing. What kind of optimization could possibly optimize an infinite loop away? I can't think of one right now. Finite loops, OTOH, absolutely, if the body of the loop has no effect.

Unless, that is, you have a 'while (1)' loop that actually has an exit path in its body, and said path is statically determined by the compiler to be taken at some iteration, while the rest of the body has no effect. Which would make it a finite loop.
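For what it's worth, my reading of C11 6.8.5p6 (a sketch, not checked against any particular compiler release): a loop whose controlling expression is a constant must be kept, a side-effect-free loop with a non-constant condition may be assumed to terminate, and in C++ even 'while (1);' is undefined - which seems to be where most of the removal stories come from:

Code: [Select]
void halt_forever(void)
{
    while (1)       /* constant controlling expression: the implementation
                       may NOT assume this terminates (C11 6.8.5p6) */
        ;
}

void may_vanish(unsigned n)
{
    while (n != 1)  /* non-constant condition, no side effects: the
                       implementation may assume termination, so the
                       whole loop can legally be removed */
        n = (n & 1u) ? 3u * n + 1u : n / 2u;
}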

 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4501
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #40 on: November 20, 2024, 08:57:34 am »
We've done this before in a long previous thread - stuff like optimisation detecting loop structures and replacing them with e.g. calls to memcpy. In embedded, there can be reasons why that is not desired. The hacks to prevent this are not always obvious. One such:
https://www.eevblog.com/forum/programming/gcc-arm32-compiler-too-clever-or-not-clever-enough/msg4121254/#msg4121254
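The classic instance, as a sketch (freestanding build assumed): gcc's loop-pattern recognition can turn the copy loop inside your own memcpy() into a call to memcpy(), i.e. infinite recursion; -fno-tree-loop-distribute-patterns is the usual defence, and it is anything but obvious:

Code: [Select]
#include <stddef.h>

/* Without -fno-tree-loop-distribute-patterns, gcc -O2 may recognise
   this byte loop and "optimise" it into a call to memcpy() - which is
   this very function, giving infinite recursion at run time. */
void *memcpy(void *dst, const void *src, size_t n)
{
    char *d = dst;
    const char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}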

The case of losing most of your code because you are not referencing main() anywhere (but instead locating it at a given address and jumping to it, e.g. from a boot loader) is comical, and has comical work-arounds like inserting main() into an unused vector table location, which is asm code, and asm never gets optimised.
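One slightly less comical fix, as a sketch (GNU ld assumed; the section name is invented): pin main() into a named section and KEEP() it in the linker script, so --gc-sections cannot discard it:

Code: [Select]
/* C side: place main() in its own section and mark it used. */
__attribute__((used, section(".app_entry")))
int main(void)
{
    /* ... application ... */
    for (;;)
        ;
}

/* Linker script side (fragment):
 *   .text : {
 *       KEEP(*(.app_entry))
 *       *(.text*)
 *   } > FLASH
 */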

What I don't like is successive compiler versions introducing new warnings, or changing the default settings for warnings. That creates work which you do not need if you have a working product which you are shipping! And since GCC is a continuously moving target, you cannot possibly - in any real-world business environment - chase the latest version for ever. At some point you have to freeze the tools (unless your employer is stupid and just keeps paying you for unproductive work). I use Cube IDE and froze it at 1.14.1, GCC v11, and that's it. I have not seen any advantage of any later version. But then I own my business, have done so since 1978, and have to run appropriate priorities, and while chasing new warnings would put bread on the table of an employee, it won't do so for me :)

Same goes for any modules like LWIP, MBEDTLS, you name it. All are moving targets. Often, like with MbedTLS (yes I am on their mailing list), they move mostly sideways, because the devs have run out of genuinely useful things to do, or they live in the standard "internet security groupthink" and don't actually build real products.

The real world is not perfect, no coder is perfect, and any significant chunk of code may have some UB, and a working product is relying on that being compiled in a certain way. Reality has no room for purists :)

So whatever tool works for you, use it...

The gotchas are

- will it run under later versions of Windows (I address that using VMware, and run some c. 1995 tools that way)
- will it support later CPUs (probably no solution for that one)

and the drift in software is towards floating licenses and such, which makes archiving a project nearly impossible. In the 1990s they used dongles, which have the same problem (they eventually break), but patching them out was usually easy.
« Last Edit: November 20, 2024, 09:22:54 am by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 678
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #41 on: November 20, 2024, 12:02:55 pm »
Same goes for any modules like LWIP, MBEDTLS, you name it. All are moving targets. Often, like with MbedTLS (yes I am on their mailing list), they move mostly sideways, because the devs have run out of genuinely useful things to do, or they live in the standard "internet security groupthink" and don't actually build real products.

Security is a bit of a special case.  With LWIP once you've got your TCP stack running and reasonably tuned you can pretty much leave it alone modulo occasional bugfixes and maybe some fiddling for stability and reliability.  OTOH security is a Red Queen problem, you're constantly running to catch up with whatever random thing someone has dreamed up and decreed via Simon-Says, the most recent one being "Simon Says PQC!".  As Linus put it (although he was talking about schedulers vs. security rather than networking stacks vs. security), "the difference between them is simple: one is hard science. The other one is people wanking around with their opinions".

I've never seen IT security described so succinctly.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 678
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #42 on: November 20, 2024, 12:11:32 pm »
And some while(1) either disappearing when expected to be there, or appearing when not expected to be there, or whole large parts of the program completely disappearing, is usually obvious during testing, even poor-quality testing.

In the case of the stuff I was referring to, they were reserved for should-never-occur situations that were often very difficult, if not impossible, to create during testing, for the very reason that they were should-never-occur conditions.  Typically it would require something like a hardware fault for the system to end up triggering a restart in this manner.  Apologies for being a bit vague, but it's been a while since I looked at that particular code base.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #43 on: November 20, 2024, 01:21:54 pm »
This is not a GCC issue. It's a "poor understanding of C that you could get away with on a poorly performing compiler" issue. It's entirely possible to get the behaviour you want if you write the code properly.

There's always one.  Some time ago when someone did this, i.e. played the "it's everyone on earth's fault for writing bad code and not the compiler"*** card, I ran some of their code (they were a well-known open-source person) through an experimental SAST tool that tried to find UB, and it found quite a lot of UB in their code.  Glass houses and all that...

When I first ran into the array-bounds-check post I linked to earlier I tested it on a bunch of experienced C developers, at least one of whom started programming (or at least debugging) by toggling front-panel switches on a PDP-11.  Even though they knew, or at least suspected, that there was something hidden in there, none of them could figure out what the code would actually do when compiled, and that was with me more or less pointing at the thing and saying "find the booby-trap".  So if experienced devs can't find the booby-trap when it's waved under their noses, imagine how hard it must be when it's buried in 200kloc, and what else might be in that 200kloc.

*** I know there's an awful lot of crap code out there, but there's also quite a bit of very carefully-written code where the devs simply aren't expecting the compiler to break things based on "yippee, we found some UB, now you're in for it!".
So, even good people, with a good understanding, have bugs in their code? I'm shocked. Shocked, I tell you. The world must be coming to an end.

This is one of the most BS arguments imaginable. If you need your code to be rock-solid stable, you don't change compilers. That's why certain versions of most tool chains are declared "long term support" versions. If you change versions, if you change tool chains, if you change ISAs, you are back at square one with your system testing. Once you decide to move to a newer or different tool chain you should expect to do some very serious testing. When anything breaks, don't say the tool chain is broken unless you are really sure. 30 years ago we used to find nasty compiler bugs a lot. Today we don't. They are actually quite rare, unless the compiler is at a very immature stage. What is really common is code that works by luck rather than judgement. Simply using -O0 doesn't guarantee it will work; it just reduces the chances of breakage quite a bit. When I have investigated, and properly addressed, issues that broke code at a high optimisation level, I have mostly found things that were only just hanging together, which any change in the tool chain might have broken. They were just more likely to break with high optimisation.
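A typical specimen of "hanging together", as a sketch: this is fine at -O0 and breaks at -O2, and the compiler is right both times:

Code: [Select]
static int rx_ready;              /* missing: volatile */

void uart_rx_isr(void)            /* hypothetical interrupt handler */
{
    rx_ready = 1;
}

void wait_for_byte(void)
{
    while (!rx_ready)             /* at -O2 the load is hoisted out of
                                     the loop, leaving a genuine infinite
                                     loop; 'volatile' is the correct fix */
        ;
}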
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #44 on: November 20, 2024, 01:40:42 pm »
What I don't like is successive compiler versions introducing new warnings, or changing the default settings for warnings. That creates work which you do not need if you have a working product which you are shipping! And since GCC is a continuously moving target, you cannot possibly - in any real-world business environment - chase the latest version for ever. At some point you have to freeze the tools (unless your employer is stupid and just keeps paying you for unproductive work). I use Cube IDE and froze it at 1.14.1, GCC v11, and that's it. I have not seen any advantage of any later version. But then I own my business, have done so since 1978, and have to run appropriate priorities, and while chasing new warnings would put bread on the table of an employee, it won't do so for me :)
What is wrong with adding new warnings? More information from the compiler is usually a good thing. It's specifically changing the meaning of existing settings where things get nasty. If you break my make files you are a bad person. GCC has been guilty of this a few times, and the rest of the GNU tool chain a lot more. If you try to rebuild old code, a newer tool chain typically throws out a bazillion lines of whining. The vast majority of this whining is not about genuine problems, but about constructs you probably wouldn't use in new code, for reasons of clarity rather than correctness.

For embedded users there are a number of reasons for sticking with a version of GCC that works for you. The developers focus so much performance testing on powerful CPUs that they often release new versions which produce much worse code for simpler CPUs. This is an area where bit rot, due to rampant changes in how things work, has created numerous problems. GCC 3.x is a great choice for simpler cores, like AVR and MSP430. Nothing after it performs as well. If you try to build the GCC 3.x tool chains on a modern machine, the number of failures would turn the rebuild into a major project. Their own tool chain is a great example of this problem of changing the meaning of settings.
 
The following users thanked this post: spostma

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4501
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #45 on: November 20, 2024, 01:59:29 pm »
Quote
OTOH security is a Red Queen problem, you're constantly running to catch up with whatever random thing someone has dreamed up and decreed via Simon-Says, the most recent one being "Simon Says PQC!".

This is gonna be a long off-topic diversion ;) but - while I probably agree with you - IOT boxes should never be on an open port. They should always have an absolutely minimal attack surface. They should be clients, not servers. Yes, we had a long thread on this too, with people correctly claiming that if you compromise the public-facing server to which all the IOT clients are connecting, you have compromised all the clients. Well, yes, but that is another level harder. That is why I think pretending MbedTLS is somehow "secure" is a waste of time. The whole box should never be on an open port in the first place! You just don't know if LWIP itself has some buffer overrun vulnerability, etc. Or even the CPU itself, with its ETH subsystem and its complicated chained-buffer scheme.

Quote
What is wrong with adding new warnings? More information from the compiler is usually a good thing

I agree, but it still represents a time investment. You have to set up the project on an isolated PC (or inside a VM) and install the latest tools, and then chase down the warnings. If the product is working OK, those warnings are 99% likely to be spurious.

Quote
GCC 3.x is a great choice for simpler cores, like AVR and MSP430.

Gosh, what year was that in?
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #46 on: November 20, 2024, 02:27:31 pm »
Quote
What is wrong with adding new warnings? More information from the compiler is usually a good thing
I agree, but it still represents a time investment. You have to set up the project on an isolated PC (or inside a VM) and install the latest tools, and then chase down the warnings. If the product is working OK, those warnings are 99% likely to be spurious.
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer: new warnings need new controls, apart from a blanket "Give me all the warnings you have" option, which no long-term make file should use. The only good excuse for old stuff not building might be if they added some kind of "whine like it's GCC x.y" command-line option, so it would be easy to restore clean building with older projects.
Quote
Quote
GCC 3.x is a great choice for simpler cores, like AVR and MSP430.
Gosh, what year was that in?
The year the GCC developers wed themselves to the x86 line (and perhaps anything else as complex as an x86) and everyone else had to just make do.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4501
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #47 on: November 20, 2024, 02:47:39 pm »
Quote
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer: new warnings need new controls

Sorry - typing too fast...

I am pretty sure GCC did change some stuff recently. I also recall a linker change, complaining about executable segments in the ELF file, and fixing those was not trivial.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 9528
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #48 on: November 20, 2024, 03:46:58 pm »
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer: new warnings need new controls, apart from a blanket "Give me all the warnings you have" option, which no long-term make file should use.

I would recommend adopting the new warnings, though. They could reveal bugs that have always been there.

The "it's working" argument is usually bullshit. I mean, how do you know it is working? Maybe 10000 customers are using it, and maybe they have intermittent problems they deal with, which degrades the product quality experience, but not enough for them to file formal bug reports? Maybe they don't know how to report. Maybe they are blaming themselves for "doing something wrong" and working around the bugs?

Enabling more/better warnings from newer tools and going through them is, IMHO, time well spent. If you don't have the resources to do that, then, obviously, don't touch anything and don't update your toolchain.

But if you have even a little bit of extra resources to spend on software quality improvements, enabling more warnings seems like a pretty low hanging fruit.
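A concrete example of a bug that was "always there" (my sketch; GCC 7 and later warn about it under -Wextra via -Wimplicit-fallthrough, while older versions were silent):

Code: [Select]
#include <stdio.h>

void handle(int state)
{
    switch (state) {
    case 0:
        puts("starting");   /* missing break: falls through into case 1.
                               Newer GCC flags this; the bug may have
                               been shipping for years. */
    case 1:
        puts("running");
        break;
    default:
        puts("idle");
        break;
    }
}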
 
The following users thanked this post: SiliconWizard

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10289
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #49 on: November 20, 2024, 04:30:18 pm »
You cut off the bit that said the control for warnings should not change, so older projects still build, completely changing my meaning. To be clearer: new warnings need new controls, apart from a blanket "Give me all the warnings you have" option, which no long-term make file should use.

I would recommend adopting the new warnings, though. They could reveal bugs that have always been there.

The "it's working" argument is usually bullshit. I mean, how do you know it is working? Maybe 10000 customers are using it, and maybe they have intermittent problems they deal with, which degrades the product quality experience, but not enough for them to file formal bug reports? Maybe they don't know how to report. Maybe they are blaming themselves for "doing something wrong" and working around the bugs?

Enabling more/better warnings from newer tools and going through them is, IMHO, time well spent. If you don't have the resources to do that, then, obviously, don't touch anything and don't update your toolchain.

But if you have even a little bit of extra resources to spend on software quality improvements, enabling more warnings seems like a pretty low hanging fruit.
I agree. The whole point of the additional warnings is to highlight additional potential problems, and ultimately those warnings should be addressed if the project has a long-term future. The problem is that when you go from a clean compile to a flood of complaints from the tools, it's very hard to know where to start. If you make the thousands (no exaggeration - thousands is typically on the low side) of source code changes needed to remove the warnings from an older project on a recent tool chain, you are going to make at least a few errors, and the project will be broken. You need a way to easily get back to a clean build, so you can move forward incrementally with the necessary changes, and test along the way.
 
The following users thanked this post: Siwastaja

