When I was working on a big project (a ZigBee stack, ~150K binary), we reported numerous bugs to IAR; it is no better. The first few were painful, since we had to go through the support people. At some point we got direct contact with the engineers and things improved.
Not just that, but ARM GCC is maintained by ARM themselves, so saying it is free isn't exactly right.
In what sense does it make it "not free"? That's like saying that since RedHat maintains a lot of GCC/employs many maintainers "saying GCC is free isn't exactly right".
"Free-ness" of software doesn't depend on who maintains the code but on the license (when talking free as in freedom) and on price (when talking free as in beer). In GCC case the license is still GPL and it is still a free download, both the "normal" and the ARM versions.
Not free as in: with every ARM device you buy you pay for the development of gcc for the ARM platform.
It is kind of like (commercial) TV and radio aren't free even though you don't have to pay anything to the TV station. With every product you buy you pay for TV and radio.
He's probably twisting it in the sense that "we all pay a little something for it when we buy something with an ARM core".
..but it's Open Source, so you can fix issues like this yourself...
Seems you get what you pay for.
BTW there are code-limited free versions of IAR and other professional tools, which ought to be enough for something as simple as a power supply...
I'll get my coat.
Hey Mike, why don't you go PIC your nose...
(...No, I don't have anything to contribute to this thread. I use avr-gcc and have found zero problems with it.)
Tim
I think the point of the video was not to solve the problem, but to show just how horrible tools are, since the most obvious solution or at least path to a workaround was mentioned here multiple times.
The AVR-GCC toolchain is infamous for its "relocation truncated to fit: R_AVR_13_PCREL" linker errors, which pop up for multiple reasons, so they get fixed and pop up again. This happens from time to time; not a huge deal, just rearrange some stuff in the code and it goes away.
Hmm, so a discrepancy between where the compiler expected the data to be, and where the linker wanted/had to put it? I can see that being a problem with the >128k AVRs, yes. Or more generally, any system with segmented memory, or even more generally, a platform with different sizes of pointers.
And yeah, everything is horrible. Anything only ever gets fixed just enough to be usable, for some arbitrary (market-driven?*) degree of "usable". Any sufficiently large project is too large to ever find all the bugs, or unexpected quirks, before release.
*Hey, you guys all like capitalism, right? Right?...
Tim
Hmm, so a discrepancy between where the compiler expected the data to be, and where the linker wanted/had to put it?
No, the linker is just not able to place the code. LD does not "shuffle" functions around; it places things in a linear order, and once placed, a function can no longer be taken out and placed somewhere else. Two or more functions may end up located so that the distance between them is bigger than a 13-bit relative offset allows.
This can happen on smaller memory devices too. A 13-bit signed offset reaches ±4 K words, so any device with more than 8 KB of flash may be affected.
And that describes just the legitimate case. This particular linker error has also shown up in a few cases because of compiler bugs and had nothing to do with the linker itself.
That error ought to be sanely reported, and fixed by selecting an appropriate memory model.
I'm not really sure how to report it any better short of writing a complete explanation in the error message. Given the message, it is not all that hard to find documentation explaining the error in detail.
You can say that the compiler could try harder to avoid the issue, but it is still possible to write code that can't be properly placed in the device no matter what.
Also, IAR shows a very similar message in this situation.
Not free as in: with every ARM device you buy you pay for the development of gcc for the ARM platform.
It is kind of like (commercial) TV and radio aren't free even though you don't have to pay anything to the TV station. With every product you buy you pay for TV and radio.
That's a rather oddball definition, but okay. Let's not pollute the thread with more philosophical disputes.
I have seen similar things happen with STM32 code as well - sometimes simply changing the optimization level for the compiler will make this go away because the compiler will inline some functions, remove unused code and suddenly the linker won't blow up anymore. My impression is that the ARM port of binutils could use a bit more love.
And re Mike and the suggestion to use IAR - given that you are mostly a PIC developer, you are probably intimately familiar with what a shitshow the various PIC compilers are, especially for the smaller devices (PIC18 and such). And they were (still are?) paid products. So much for "you get what you pay for" in this case. I will take GCC or Clang (which supports ARM as well, AFAIK) any day over those things.
https://www.mail-archive.com/bug-binutils@gnu.org/msg29920.html
It seems that Nick was right that this is hitting the stack limit.
The reason it's doing so is that you have quite a lot of templates in your C++ code.
The linker seems to segfault when it's trying to demangle this symbol
_ZNSt11_Tuple_implILj0EJN7General6Parser4NodeINS1_7KeywordILj4ELj2EEENS1_6StatesIJNS2_INS3_ILj5ELj3EEENS5_IJNS1_4SCPI3EndINS7_15CommandInternalIRKS4_JNS1_5ParamIfEENS7_5BlankILj0EEZNS7_7CommandISB
which is done in libiberty
https://github.com/gcc-mirror/gcc/blob/master/libiberty/cp-demangle.c#L4315
This hits two VLAs, each holding 16-byte structs. However, you have 1485065 entries in dpi.num_copy_templates, causing it to push the stack down by (dpi.num_saved_scopes + dpi.num_copy_templates) * 16 bytes, which is 190129824 bytes, or ~181 MB, and so way over your stack limit.
I tried with GCC 9, which seems to do a better job with the templates, and it works there. But I guess the real fix is to not use those VLAs in libiberty, which I believe is maintained as part of GCC, if I'm not mistaken.
For now, you can work around it by increasing your ulimit.
So, not really a bug in the linker, aside from it not reporting the stack overflow correctly?
I did see that you're running your working dir in OneDrive. I must discourage that; running working dirs inside cloud-synced directories has caused me strange issues before.
Would be great if David replied, assuming he has a forum account, if nothing else just to confirm he reads this thread.
Yes he does. His forum name is Seppy.
David,
At 2:14 in the follow-up, you say that std::vector has massive disadvantages - what are you basing that statement on? Any particular context? Perhaps it was just a slip of the tongue, but only the vector object itself is allocated on the stack; its element buffer is allocated from the heap, so not everything is kept on the stack.
Looks like the right way for GNU would be to undefine CP_DYNAMIC_ARRAYS, if they are still using it in the latest version, so that the demangler uses the heap-allocation path instead.