Author Topic: No more code-size-limited version of IAR embedded workbench for ARM?  (Read 12879 times)


Offline coppice

  • Super Contributor
  • ***
  • Posts: 10141
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #125 on: December 02, 2024, 04:28:34 pm »
It is in the second edition of K&R, even if some may consider it a new testament heresy.
The second edition of K&R is from around the time of the C89 spec, and basically documents what is in that spec. So, if volatile went into the C89 spec I would expect it to be in the second edition of K&R. I had been writing in C based on the original K&R for over a decade by then.
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 4414
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #126 on: December 02, 2024, 05:17:15 pm »
Quote
Bullshit. Try qualifying every variable in your program volatile and see what happens. I can assure you the impact on speed and code size is not "negligible".

I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

And that is how C was in the 1980s. Variables explicitly expected in RAM didn't get optimised away.

It's not a big thing, especially since globals (declared extern in other .c files) are inherently not optimised away. Good idea to keep RTOS tasks which share RAM variables in separate .c files :)

BTW, re my old MbedTLS query, I can see from a test run that it supports "Using TLS ciphersuite: TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256", so no need for MbedTLS 3 even for ChaCha20.
« Last Edit: December 02, 2024, 05:35:19 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #127 on: December 02, 2024, 05:43:31 pm »
I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

I encourage you to try. Try it for example on mbedtls state variables.

Quite obviously the need for the volatile keyword - which had already made it into the official standard in the '80s, meaning it was discussed for years before that - stems from optimizations. Even early compilers did obvious optimizations like "caching" nearby operations and reducing loads/stores, so that a global/static was loaded, operated on in CPU registers, then stored back to memory. It makes perfect sense, because computers back then had little storage space and memory, much like today's embedded targets.
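Roughly like this - a minimal sketch with made-up names; the exact codegen of course depends on compiler and flags:

Code: [Select]
int gain;   /* plain global */

int scale(const int *in, int n)
{
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += in[i] * gain;   /* nothing in the loop writes memory, so the
                                  optimizer may load 'gain' once, into a
                                  register, instead of n times */
    return acc;
}

Qualify 'gain' volatile and the compiler must emit a load on every iteration - exactly the load/store reduction described above.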
« Last Edit: December 02, 2024, 07:27:21 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10141
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #128 on: December 02, 2024, 05:50:01 pm »
Quite obviously the need for the volatile keyword - which had already made it into the official standard in the '80s, meaning it was discussed for years before that - stems from optimizations. Even early compilers did obvious optimizations like "caching" nearby operations and reducing loads/stores, so that a global/static was loaded, operated on in CPU registers, then stored back to memory. It makes perfect sense, because computers back then had little storage space and memory, much like today's embedded targets.
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #129 on: December 02, 2024, 07:17:55 pm »
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.

It is exactly because of optimizations (as peter-h defines them). The reason why interrupts do not work without the volatile qualifier is exactly that the compiler "caches" the value of a memory-stored variable in a register instead of repeatedly loading and storing it. This is a very primitive form of optimization which was already obvious in the 1980s.

The first C standard, from the 1980s, which I linked to above, already defines the volatile keyword as such:
"A volatile declaration may be used to describe an object corresponding to a memory-mapped input/output port or an object accessed by an asynchronously interrupting function. Actions on objects so declared shall not be “optimized out” by an implementation or reordered except as permitted by the rules for evaluating expressions."
(emphasis added)

Your misconception lives strong: people assume volatile does more than prevent that type of optimization, as if it were some kind of "make shared data work" keyword, which it isn't and never was. All it does is prevent said type of optimization, which may or may not be sufficient for interrupt signalling. More than just volatile may be needed if the underlying accesses are not atomic, as they often are not (e.g., a barrier such as disabling interrupts).
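A minimal sketch of both halves of that statement (hypothetical names, simple cacheless MCU assumed):

Code: [Select]
#include <stdint.h>

volatile uint8_t  ready;    /* set to 1 by an ISR */
volatile uint32_t sample;   /* 32-bit payload also written by the ISR */

uint32_t wait_for_sample(void)
{
    /* volatile stops this wait from being "optimized out" into an
       infinite loop over a stale register copy of 'ready': */
    while (!ready)
        ;

    /* but volatile does NOT make this read atomic: on an 8- or 16-bit
       bus the ISR can update 'sample' between the partial reads */
    return sample;
}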
« Last Edit: December 02, 2024, 07:30:35 pm by Siwastaja »
 
The following users thanked this post: newbrain, SiliconWizard

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #130 on: December 02, 2024, 07:28:18 pm »
I can assure you that it will be negligible, in 99% of cases of statics/globals. I would expect locals, for loop counters, etc, to be in registers anyway if possible.

I encourage you to try. Try it for example on mbedtls state variables.

I know peter-h won't bother, so I did a quick test. I picked a random code module, a (patent-pending) algorithm which detects patterns in ADC data, calculating stuff like power factors, real powers, RMS currents etc., while doing the usual housekeeping on an embedded system.

Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

I won't call that "negligible". And that is before even considering performance, which matters too.

And limiting this to globals and statics makes little sense anyway; volatile would have to be added to everything that does not fit in CPU registers to see the full extent of peter-h's idea.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10141
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #131 on: December 02, 2024, 07:47:29 pm »
The original motivation for volatile was for SMP and interrupts. You can't have two CPUs, or two separate threads of processing, working on a variable without it being signalled as volatile. So, it was nothing to do with advanced code optimisation. It was about very basic "you must go back to main memory every time" requirements.

It is exactly because of optimizations (as peter-h defines them). The reason why interrupts do not work without the volatile qualifier is exactly that the compiler "caches" the value of a memory-stored variable in a register instead of repeatedly loading and storing it. This is a very primitive form of optimization which was already obvious in the 1980s.

The first C standard, from the 1980s, which I linked to above, already defines the volatile keyword as such:
"A volatile declaration may be used to describe an object corresponding to a memory-mapped input/output port or an object accessed by an asynchronously interrupting function. Actions on objects so declared shall not be “optimized out” by an implementation or reordered except as permitted by the rules for evaluating expressions."
(emphasis added)

Your misconception lives strong: people assume volatile does more than prevent that type of optimization, as if it were some kind of "make shared data work" keyword, which it isn't and never was. All it does is prevent said type of optimization, which may or may not be sufficient for interrupt signalling. More than just volatile may be needed if the underlying accesses are not atomic, as they often are not (e.g., a barrier such as disabling interrupts).
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity they add to just what "memory mapped" actually means. Today we have instructions like CAS and DCAS in complex processors, and things like threading won't work properly without them and without a full understanding of the memory-ordering behaviour OOO processors bring. It's a different time now.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #132 on: December 02, 2024, 07:53:51 pm »
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity they add to just what "memory mapped" actually means.

You are mixing things up and pouring more alphabet soup into the mix.

Just volatile alone does not work for shared data today even in simple, cacheless CPUs like AVRs or PICs, just as it did not work back in the 1980s on computers of similar complexity. Volatile is only part of the solution: other types of guards are needed when memory accesses are not atomic, as will be the case when the object size is larger than the memory bus width. For example, in the 1980s 16-bit systems were a popular target for C, and 32-bit long ints needed more than just volatile: disabling interrupts during the update, for example, or using atomic types as mutexes.
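For example, on an 8-bit AVR a shared 32-bit counter needs an interrupt-disabled critical section around the copy. A minimal avr-libc flavoured sketch:

Code: [Select]
#include <stdint.h>
#include <util/atomic.h>        /* avr-libc */

volatile uint32_t ticks;        /* incremented in a timer ISR */

uint32_t get_ticks(void)
{
    uint32_t t;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)   /* interrupts off for the copy */
    {
        t = ticks;              /* four separate byte reads on AVR */
    }
    return t;
}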

If you think nothing more than volatile was needed for shared data, your programs worked by sheer luck in the 1980s. Just like many programs work by sheer luck today. Bugs related to shared data (e.g. in interrupts) are a real PITA to find, and if the variables update rarely and the ISRs trigger rarely, it can take weeks of runtime to see the effect of the bug (and then finding it is much more difficult). Random wrong behavior is the result.

But really, internetz is full of good information and tutorials about this whole thing, I should not be lecturing such basics here.
« Last Edit: December 02, 2024, 07:56:44 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10141
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #133 on: December 02, 2024, 07:58:04 pm »
I have no misconception. Volatile doesn't work for SMP these days, but it did in the 1980s, and that's a key reason we initially had it. People didn't really think in terms of the multi-layered caches we have today, and the complexity they add to just what "memory mapped" actually means.

You are mixing things up and pouring more alphabet soup into the mix.

Just volatile alone does not work for shared data today even in simple, cacheless CPUs like AVRs or PICs, just as it did not work back in the 1980s on computers of similar complexity. Volatile is only part of the solution: other types of guards are needed when memory accesses are not atomic, as will be the case when the object size is larger than the memory bus width. For example, in the 1980s 16-bit systems were a popular target for C, and 32-bit long ints needed more than just volatile: disabling interrupts during the update, for example, or using atomic types as mutexes.

If you think nothing more than volatile was needed for shared data, your programs worked by sheer luck in the 1980s. Just like many programs work by sheer luck today. Bugs related to shared data (e.g. in interrupts) are a real PITA to find, and if the variables update rarely and the ISRs trigger rarely, it can take weeks of runtime to see the effect of the bug (and then finding it is much more difficult). Random wrong behavior is the result.

But really, internetz is full of good information and tutorials about this whole thing, I should not be lecturing such basics here.
Do you expect every post to point out the blatantly obvious? I write assuming I am writing to someone with some basic knowledge of the topic.
 
The following users thanked this post: Siwastaja

Online peter-h

  • Super Contributor
  • ***
  • Posts: 4414
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #134 on: December 02, 2024, 10:40:28 pm »
Quote
Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline mwb1100

  • Frequent Contributor
  • **
  • Posts: 618
  • Country: us
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #135 on: December 02, 2024, 11:16:42 pm »
I don't know if definitive information was posted about the status of the KickStart/code-size-limited versions of the IAR toolchains (the bulk of the thread spun off into comparing compiler optimizations, etc.), but here's the dope straight from IAR (emphasis added):

Quote
Hope you’re doing well! I am the account manager that covers Washington for IAR, so I’m happy to assist you. I saw your note about the kickstart/code-size limited version of our Embedded Workbench licenses. Unfortunately, we stopped providing the type of license earlier this year. We do have options for purchasing a perpetual license if you’d like to discuss that.

The OP might want to edit that into the opening post so that anyone who stumbles onto this thread wondering about the status of IAR's free/hobbyist/student oriented toolchains will actually get an answer instead of having to search through 6 or more pages of compiler wars.
« Last Edit: December 02, 2024, 11:21:17 pm by mwb1100 »
 
The following users thanked this post: mark03, cfbsoftware

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #136 on: December 03, 2024, 06:37:43 am »
Do you expect every post to point out the blatantly obvious? I write assuming I am writing to someone with some basic knowledge of the topic.

It may be obvious to you, but it really isn't obvious to everyone. Getting the basics right is the first stepping stone to understanding more advanced stuff.

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.

I invite you to try it out with some other piece of code. What you say about "tightly written" is odd, because for non-tight code the result would be even worse.

I mean, just look at how CMSIS or the STM32 libraries are written; they regularly take a temporary copy of an IO variable to manipulate it. This is even where they do not care much about performance. But the difference is just so big, easily 2-3x.
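The pattern looks roughly like this (stand-in register and bit names, for illustration only):

Code: [Select]
#include <stdint.h>

#define TIM_CR1_CEN   (1u << 0)     /* illustrative bit masks */
#define TIM_CR1_UDIS  (1u << 1)

volatile uint32_t TIM_CR1;          /* stand-in for a memory-mapped register */

void tim_start(void)
{
    uint32_t tmp = TIM_CR1;         /* one volatile read of the IO register */
    tmp &= ~TIM_CR1_UDIS;           /* manipulate the copy in plain registers... */
    tmp |=  TIM_CR1_CEN;
    TIM_CR1 = tmp;                  /* ...and one volatile write back */
}

Only the two accesses to the register itself hit memory; all the bit manipulation happens on the non-volatile temporary.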

The CPU indeed spends 99% of the time running 1% of the code, which is exactly why it is so important to optimize away unnecessary shuffling of data back and forth between CPU registers and memory within that small 1% loop. This was already realized in the 1980s, and C compilers already did this optimization back then because it was so necessary; and because compilers already did it, the standardization body included the volatile and const qualifiers from the beginning.

The difference on simple processors is maybe just 2-3x in execution time; add caches to the mix and we are talking possibly 100x difference.

What you propose was not feasible in the 1980s, and is even less feasible today.

Not having a volatile qualifier at all is possible, and that is what many modern C replacements do, but I can assure you they don't choose to force a memory access for every variable access; instead, they just always optimize and do not allow users to intervene in any way. Which means those languages need some different, higher-level construct for multiprocessing/memory mapping - which is of course a better idea for a typical programmer.
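For what it's worth, C itself grew that kind of higher-level construct in C11's stdatomic.h - a sketch:

Code: [Select]
#include <stdatomic.h>

atomic_uint counter;    /* shared with an ISR or another thread */

void bump(void)
{
    /* an atomic read-modify-write; no volatile involved */
    atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
}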
« Last Edit: December 03, 2024, 06:44:18 am by Siwastaja »
 

Online JPortici

  • Super Contributor
  • ***
  • Posts: 3578
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #137 on: December 03, 2024, 08:47:53 am »
Quote
Let's ignore performance and look at code size:
-Os: .text 5632 bytes
All globals and function-statics qualified volatile, no other changes: 9822 bytes (74% size increase).

That is some rare and tightly written (and very small) piece of code. The vast majority of code in a working product is nothing like that.

That, in turn, is why C took over the bulk of the coding in most products in the mid-1980s onwards, with just small parts written in asm. The fact that the £1500 (that's 1500 quid in old money!) IAR Z180 compiler generated crap code, probably 5x bigger and 10x slower than hand-crafted asm, didn't matter, because the CPU spent probably 99% of its cycles running 1% of the code, not to mention spending most of that 99% waiting for a keystroke :) What mattered was that the box worked and you got decent coder productivity.

from time to time i take some of my ooold projects (working products, as you say) and rewrite them to my current standards. In the past, when i was way less experienced, i used a few big files with everything in them, big structures holding everything, made everything volatile because some things needed it, and i wanted the compiler to shut up about casting. Current projects have many, many smaller files with dedicated functions and scope, getters and setters for private members of structures, volatile only where actually needed (multithreaded/interrupt), and assembly modules instead of trickery or walls of volatile asm i had to micromanage. On most of them code size went down about 40%, with considerable speed gains

I don't know if definitive information was posted about the status of the KickStart/code-size-limited versions of the IAR toolchains (the bulk of the thread spun off into comparing compiler optimizations, etc.), but here's the dope straight from IAR (emphasis added):

Quote
Hope you’re doing well! I am the account manager that covers Washington for IAR, so I’m happy to assist you. I saw your note about the kickstart/code-size limited version of our Embedded Workbench licenses. Unfortunately, we stopped providing the type of license earlier this year. We do have options for purchasing a perpetual license if you’d like to discuss that.

The OP might want to edit that into the opening post so that anyone who stumbles onto this thread wondering about the status of IAR's free/hobbyist/student oriented toolchains will actually get an answer instead of having to search through 6 or more pages of compiler wars.

Sorry, rule 25 of the internet :)
« Last Edit: December 03, 2024, 08:51:11 am by JPortici »
 
The following users thanked this post: Siwastaja

Online peter-h

  • Super Contributor
  • ***
  • Posts: 4414
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #138 on: December 03, 2024, 12:05:06 pm »
Quote
Sorry, rule 25 of the internet

I am a mod/admin on a tech forum (not electronics) and I would have simply moved the compiler discussion into a general "compiler optimisation discussion" thread. But clearly on EEVBLOG there isn't the time available for doing that; it is about 10x bigger in daily post count than mine.

Quote
from time to time i take some of my ooold projects (working products, as you say) and rewrite them to my current standards. In the past, when i was way less experienced, i used a few big files with everything in them, big structures holding everything, made everything volatile because some things needed it, and i wanted the compiler to shut up about casting. Current projects have many, many smaller files with dedicated functions and scope, getters and setters for private members of structures, volatile only where actually needed (multithreaded/interrupt), and assembly modules instead of trickery or walls of volatile asm i had to micromanage. On most of them code size went down about 40%, with considerable speed gains

I don't doubt that, but if a product sells and is proven reliable in the marketplace over years, I would not change anything on it unless necessary. I don't even change the brand of a capacitor (used in the output filter of an SMPS) until I have built a few circuits with it, tested them over temperature etc., and sent them out, and nothing has come back after a year.

Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #139 on: December 03, 2024, 12:54:17 pm »
I don't doubt that, but if a product sells and is proven reliable in the marketplace over years, I would not change anything on it unless necessary. I don't even change the brand of a capacitor (used in the output filter of an SMPS) until I have built a few circuits with it, tested them over temperature etc., and sent them out, and nothing has come back after a year.

Well, if your design flow does not handle switching compiler versions, which is perfectly normal, then you simply freeze your tools. That's a sensible thing to do and works especially well with open-source tools. I mean, you can find as old a version of gcc online as you wish, and use it on any system where it originally worked, and it still works. And with virtual machines, stuff like this is easier than ever. At least old software stays the same bit-by-bit; compare this to capacitors, where you cannot keep using them if the manufacturer stops making them, and they would have batch-to-batch variations anyway...

But sometimes doing a design refresh cycle might be a good idea even if the product sells well as it is. Listen to the market's needs for improvements; do a fresh design so that the youngsters in the company can take it over. Although doing that haphazardly is not a good idea: it is easy to step on the mine of using $current_trend_tool_of_the_year, which has a much shorter lifetime than e.g. K&R C before C89 had - still workable, quite an achievement. For example, if you rewrite your stuff in that New C Everybody Will Be Using Because Google Uses It language which everybody talked about just 3-4 years ago and whose name I forgot - you are going to rewrite it now in Rust - and again in something else in just 5 years.
« Last Edit: December 03, 2024, 12:56:10 pm by Siwastaja »
 

Online JPortici

  • Super Contributor
  • ***
  • Posts: 3578
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #140 on: December 03, 2024, 01:25:05 pm »
Disagree. I don't keep crappy code just because "it works"; this is how you build a house of cards. i do have comments in old projects a la "do not move this statement around" or "don't remove this variable", and that is 99.9% crappy design on me (again, i was inexperienced in both C and development in general back then), or things cobbled together, or incremental additions without any structure. it may be working, but it's shit code. Almost all my projects have been refactored over time, or rewritten from the ground up, and it's much, much, much easier to add new features without the castle falling down.

We do have projects from before my time which are frozen, with a series of comments at the beginning which state what to change to go from behaviour X to Y. They are still there because they're old applications that we can't, or don't want to, properly test, so they stay as is, but i refuse to touch them. Most projects were like this; it was a mess. One of the things i did was make a parameter out of everything that made sense, and make it programmable, so it was ONE firmware i had to keep updating instead of managing, say, ten times X for every project. That was the spawn of the "don't change anything" mentality, which was actually incremental builds with a lack of planning - don't look at it wrong or it won't work.

Sometimes i do find actual compiler bugs (which get reported, then fixed in the following release), so whenever there is an update i run my tests on some projects, measure the changes, see if the bugs have been effectively solved, and then i update. Every update of the compiler is a breath of fresh air because of better diagnostics and/or better code generation, and i get to remove the workarounds that would ultimately become shit code
« Last Edit: December 03, 2024, 01:31:51 pm by JPortici »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9439
  • Country: fi
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #141 on: December 03, 2024, 03:34:11 pm »
Disagree. I don't keep crappy code just because "it works"; this is how you build a house of cards. i do have comments in old projects a la "do not move this statement around" or "don't remove this variable", and that is 99.9% crappy design on me (again, i was inexperienced in both C and development in general back then), or things cobbled together, or incremental additions without any structure. it may be working, but it's shit code. Almost all my projects have been refactored over time, or rewritten from the ground up, and it's much, much, much easier to add new features without the castle falling down.

Of course we agree on this, but the reality is that some embedded device manufacturers and developers do not think they are dealing with a software project at all and will not consider any sound software development practices; some even just program the same binary decade after decade. It's no different to freezing a mold for plastic manufacturing or something. Freezing tools to be able to do minor fixes is just a small step further from that.

Besides, past incremental cost does not matter to future decisions. It is easy to fool oneself into not making necessary investments because they look more expensive than "just fixing this one little thing". Then again, there is also considerable risk in starting a big renovation. V2.0 rarely makes financial sense, second-system syndrome is very real too, and the fact that you succeed in major rewrites tells more about you than about the software (let alone hardware) industry as a whole. How are companies supposed to find people like you, and do it reliably?

If they have something which works and needs a little patch every now and then, even if it's band-aid over band-aid and ugly, that carries some risk (mostly related to the someone who knows how it works retiring, or poorly kept backups getting destroyed, etc.), but starting a major overhaul carries a potentially much bigger risk: becoming a massive time sink which is finally less reliable than the old system and in need of being replaced again in only just a few years. Publicly funded software (e.g. healthcare information systems) is a typical example, at least here. Fearing that, I'm not surprised that sensible boards of directors are not too fond of the idea of rewriting software systems, even if we engineers would prefer it and describe the old systems with very strong words.
« Last Edit: December 03, 2024, 03:37:00 pm by Siwastaja »
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 4414
  • Country: gb
  • Doing electronics since the 1960s...
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #142 on: December 03, 2024, 04:41:53 pm »
This topic is much bigger than freezing a C compiler. What about the PCB tools? Yes, it may be that you can find old GCC versions online, but you were dumb not to archive your tools originally. Today's PCB tools are often rented, which probably means no chance of archiving. Then you have schematic tools, though nowadays these are usually integrated with the PCB tools.

So there is a whole philosophy of whether to freeze a selling project or not.

Spinning off a new and improved version is a totally different discussion.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: Siwastaja

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3302
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #143 on: December 03, 2024, 05:12:41 pm »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.
 
The following users thanked this post: peter-h, Siwastaja

Offline coppice

  • Super Contributor
  • ***
  • Posts: 10141
  • Country: gb
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #144 on: December 03, 2024, 05:33:13 pm »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.
I think that depends on how you see the word crappy. There is plenty of crappy code dealing with broken hardware and other quirks which, while crappy, has no known better alternative. There's crappy code for something short term, where you are monitoring for any unfortunate side effects, and it gets the job done. Then there's the genuine garbage, that's there for the long term and really ought to be properly addressed.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3302
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #145 on: December 03, 2024, 05:36:13 pm »
What about the PCB tools?

I wrote my own. I first needed it to make single-layer TH PCBs; then, as technologies evolved, I added features here and there as I needed them. Now it can do multi-layer, length matching, and other useful things.
 
The following users thanked this post: Siwastaja

Online JPortici

  • Super Contributor
  • ***
  • Posts: 3578
  • Country: it
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #146 on: December 04, 2024, 09:47:09 am »
I don't keep crappy code just because "it works"

How do you classify the code into crappy and not crappy?

I think the fact that the code does what it is designed to do is of foremost importance.

Although many others would disagree and tell you that the good code must be politically correct, and what it does is secondary.

A measure of crappiness can be the absence of structure and consistency; the presence of hacks to coerce the compiler into doing what you think it should do, like abusing globals and volatile; mixing C and assembly when there is almost always a better/proper way to do things; difficulty in adding functionality because it will have side effects on other parts of the code that are difficult to change because of all the above.
In the last few years i rewrote several of my old firmwares from the ground up. About 80-85% of the time was spent defining the actual behaviour to replicate (i.e. the specification), 10% on A/B testing and 5% on actual coding; since then, adding new features has been much, much easier.
The Embedded Muse was full of such cases and examples: a rewrite of problematic software can have a "high" initial cost (which, again, is mostly defining in detail the actual specifications of the current firmware), but pays off almost immediately.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15911
  • Country: fr
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #147 on: December 04, 2024, 09:49:05 am »
What about the PCB tools?

I wrote my own. First needed it to make single-layer TH PCBs, then, as technologies evolved added features here and there when I needed. Now it can do multi-layer, length matching, other useful things.

I'd be curious to have a look, if you have some screenshots.
 

Online 5U4GB

  • Frequent Contributor
  • **
  • Posts: 638
  • Country: au
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #148 on: December 04, 2024, 09:55:47 am »
Another example of this is gcc's -fno-delete-null-pointer-checks. It'd be like having a car where, each time you start it, you have to remember to specify -fno-ignore-brake-pedal.

Another alarmistic exaggeration: for example, I have never used -fno-delete-null-pointer-checks, and did not even know about it before this thread. I have never seen any project use it. And I have never seen any issues caused by this feature.

I know of several projects that have used it to prevent gcc from deleting null pointer checks. This is presumably why it was added to gcc; I'm pretty sure they wouldn't just throw it in on a dare.
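The pattern the flag guards against looks something like this (hypothetical struct, minimal sketch):

Code: [Select]
struct buf { int len; };

int get_len(struct buf *b)
{
    int n = b->len;     /* dereference first: UB if b is NULL, so the    */
    if (b == NULL)      /* compiler may assume b != NULL and delete      */
        return -1;      /* this check - unless you compile with          */
    return n;           /* -fno-delete-null-pointer-checks               */
}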
« Last Edit: December 04, 2024, 10:00:50 am by 5U4GB »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3302
  • Country: ca
Re: No more code-size-limited version of IAR embedded workbench for ARM?
« Reply #149 on: December 04, 2024, 02:25:35 pm »
I'd be curious to have a look, if you have some screenshots.

Sure

 
The following users thanked this post: Siwastaja

