Author Topic: embeddedgurus.com: "An open letter to the developers of the MPLAB IDE"  (Read 73120 times)


Offline AndersAnd

  • Frequent Contributor
  • **
  • Posts: 572
  • Country: dk
Quote
Based on these graphs it doesn't really look like 16 bits are declining although they had a peak in 2010

It is hard to reconcile that sentence with itself. :)
I mixed up the 8-bit and 16-bit graphs; it was 8-bit with a peak in 2010.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Quote
Based on these graphs it doesn't really look like 16 bits are declining although they had a peak in 2010

It is hard to reconcile that sentence with itself. :)
I mixed up the 8-bit and 16-bit graphs; it was 8-bit with a peak in 2010.

You could have corrected it by saying that 16 bits had a peak in 2012.

8-bit, like the retro arcades, still has a lot of legs left :)
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
what I'd like to know (genuinely asking) is if MC creates the no-ops or bad code on purpose or by accident.
They're an artifact of the compiler design and the crap architecture of the 8-bit PICs. The compiler generates code before having allocated addresses for variables, so it must leave space for potential bank switches. The full compiler optimizes away the unneeded instructions in a later pass.
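To illustrate the mechanism (my own sketch of what the unoptimized output looks like, not an actual XC8 listing): mid-range PIC RAM is banked, so any variable access may need a bank select first, and until addresses are assigned the compiler cannot know which selects are redundant:

        BANKSEL var_a       ; placeholder: needed only if var_a is not
        MOVF    var_a, W    ; in the currently selected bank
        BANKSEL var_b       ; another placeholder for the same reason
        MOVWF   var_b       ; the Pro optimizer deletes the redundant
                            ; bank selects once addresses are final

BANKSEL is the standard MPASM directive; var_a and var_b are hypothetical variables.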

Offline linux-works

  • Super Contributor
  • ***
  • Posts: 1999
  • Country: us
    • netstuff
so, there was never a gcc-based toolchain for PIC?  seems strange to me that it was never done.

I would like to use PIC chips and have the same options I have with avr, but not being gcc-based (or fully free) really annoys me.  I'm not a corporation and can't justify licensing fees and any of that BS.  I know gcc is not the best technology around, but it's old, mostly understood (lol) and covers a lot of ground.  lots of people have used gcc-based tools for decades and we have come to trust them for any kind of use, light or heavy.

maybe I'm old school, but I know I can work with a project that uses gcc and makefiles.

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
"he full compiler optimizes away the unneeded instructions in a later pass."

A guess like that is easy to confirm: compile a piece of code in Free mode and count the number of NOP instructions.

I am always amazed that people are a lot more willing to regurgitate what they heard from total strangers on the internet than to do their own (and simple) thinking.
================================
https://dannyelectronics.wordpress.com/
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
so, there was never a gcc-based toolchain for PIC?  seems strange to me that it was never done.
The 8-bit PIC architecture is a very poor fit for GCC. Although it does support some odd-ball architectures, it's really designed for register-rich, orthogonal architectures.

Every once in a while someone brings up porting GCC to the 8-bit PICs, but so far it's either proven too difficult or no one's been serious about it.

Offline ecat

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: gb
Just downloaded XC8 1.31 as an upgrade to 1.12. On the only PIC16 source I have to hand, RAM usage increases by 2 bytes but flash usage is down by 332 words (about 10%). Not bad.

In related news, Microchip have some sort of deal on XC32++, no real info as to what is offered, apart from the C++ extensions and Dinkumware. Whatever, the offer had the word 'Free' in the title so it must be good :)
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11640
  • Country: my
  • reassessing directives...
if there is a thread of the month, this is it: the most derailed and most successful troll ever. (i skipped several pages but...) surprisingly nobody spotted that the original link is a rant by "the so-called guru who, judging from his arguments, is no guru" aimed at the wrong target? instead of discussing the guru, everybody went off discussing the same old avr vs pic issue, which wasted many hours of people's time. i don't know who this Nigel Jones is or what his problem with Microchip is. no split window? needs a makefile? the funniest thing is... a breakpoint where it isn't. dude! a word of advice: don't get high and then write something on your official page! it's just too embarrassing. why this type of open-letter troll hasn't been moved from the mcu section to general is beyond me.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): It's now indisputable that... an organism's "expertise" contextualizes its genome, and it's nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
In related news, Microchip have some sort of deal on XC32++, no real info as to what is offered, apart from the C++ extensions and Dinkumware. Whatever, the offer had the word 'Free' in the title so it must be good :)
XC32++ licenses have AFAIK always been free, but it's just a license key for XC32 that enables compilation of C++.

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
so, there was never a gcc-based toolchain for PIC?  seems strange to me that it was never done.
SDCC, which is an open-source GCC-ish compiler, has some support for 8-bit PICs. The problem with PIC (and other archaic architectures like the 8051) is that they were never designed with C in mind. Their Harvard architecture, short stacks and banked memory make it hard to apply the flat memory model C assumes. It takes a large amount of effort to make a C compiler for these kinds of processors, and even the commercial ones only implement a subset of the language.
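To make the banked-memory point concrete, here is a sketch based on SDCC's documented 8051 keywords (my example, not from this thread) showing how the segmented memory model leaks into the C source:

    /* SDCC for the 8051: variables are placed in a named memory space */
    __data  unsigned char fast_var;         /* internal RAM, direct access */
    __xdata unsigned char big_buffer[256];  /* external RAM, MOVX access   */
    __code  const char version[] = "1.0";   /* code ROM, MOVC access       */

    /* a generic pointer needs a run-time tag saying which space it
       points into, so it is 3 bytes and dereferences via a helper */
    unsigned char *any_ptr;

    /* a space-qualified pointer is 2 bytes and dereferences inline */
    __xdata unsigned char *xptr;

None of this exists in the flat memory model standard C assumes, which is exactly the problem.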

GCC was never intended to support these kinds of architectures. IMHO GCC is not a poor man's choice either: it will give most commercial compilers a run for their money, and many processor manufacturers help improve GCC for their CPUs. Even SDCC performed better for the 8051 than a commercial compiler I bought (15 years ago). The biggest difference between a commercial compiler and GCC is the availability of optimised libraries and a slick-looking IDE to get started quickly.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline linux-works

  • Super Contributor
  • ***
  • Posts: 1999
  • Country: us
    • netstuff
I have not looked inside gcc, but I thought it would follow a (at least) two-phase process: parsing C code into some middle abstraction, then generating code from that for the target cpu.  are you saying that those two conceptual phases are not clean in gcc's architecture?  (I'm not a compiler back-end expert, so I don't know how hard it would be; but I always thought that compiling a high-level language would have nothing at all to do with how the processor implements it.)  C flat model?  what do you mean by that?  C is just a language, and it does not force any concepts onto the generated binary code other than pushing stuff to a stack before a call, etc.  not sure why C is 'hard' for some cpus, but there must be some subtle details I'm not seeing in how gcc is actually implemented.

in general, I do find it hard to understand why a high-level language would be, at all, 'easier' on some cpus than others.  code gen is code gen and of course you need a code generator and optimizer for each different cpu arch.  so what?


Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
I have not looked inside gcc, but I thought it would follow a (at least) two-phase process: parsing C code into some middle abstraction, then generating code from that for the target cpu.  are you saying that those two conceptual phases are not clean in gcc's architecture?  (I'm not a compiler back-end expert, so I don't know how hard it would be; but I always thought that compiling a high-level language would have nothing at all to do with how the processor implements it.)  C flat model?  what do you mean by that?  C is just a language, and it does not force any concepts onto the generated binary code other than pushing stuff to a stack before a call, etc.  not sure why C is 'hard' for some cpus, but there must be some subtle details I'm not seeing in how gcc is actually implemented.

in general, I do find it hard to understand why a high-level language would be, at all, 'easier' on some cpus than others.  code gen is code gen and of course you need a code generator and optimizer for each different cpu arch.  so what?
Let's put it this way: for 8-bit architectures like the 8051 and PIC you can't use the two-phase approach if you want to create efficient code. The compiler needs to look at what kind of C construct is used and translate that into something which works on the PIC or 8051. One thing C compilers for 8-bit architectures do is trace which function is called from where and create a fixed memory location for all variables which are supposed to be on the stack; this is called an overlay, and it emulates a stack. For that the compiler has to figure out which execution paths exist (main process, IRQ handlers, re-entrant functions). It gets really hairy when a function is called from both the main process and an IRQ. And that is just one of the challenges. Another problem is banked memory. The compiler has to figure out where a pointer can point to, decide whether it needs to add information to the pointer about which kind of memory it points to, and produce code which can deal with such a pointer. Creating such a compiler is an art in itself.
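A minimal sketch of the overlay problem described above (my example; scale() and timer_isr() are hypothetical names):

    /* with no usable hardware stack, the compiler gives each
       function's locals fixed RAM addresses instead of stack slots */
    unsigned char scale(unsigned char x)        /* x: one fixed address */
    {
        unsigned char tmp = (unsigned char)(x << 1);   /* tmp too */
        return tmp + 1;
    }

    void timer_isr(void)
    {
        /* if this ISR fires while the main line is inside scale(),
           both calls share the same fixed addresses for x and tmp,
           and the main line's locals are silently corrupted; this
           is why the compiler must trace the whole call graph, and
           why e.g. SDCC has a __reentrant keyword to force a (slow)
           emulated stack for such functions */
        scale(20);
    }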

edit: typos
« Last Edit: May 01, 2014, 10:32:35 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline zapta

  • Super Contributor
  • ***
  • Posts: 6190
  • Country: us
...  The compiler has to figure out where a pointer can point to, decide whether it needs to add information to the pointer about which kind of memory it points to, and produce code which can deal with such a pointer. Creating such a compiler is an art in itself.

Every first-class compiler is a work of art, none of them is 'trivial', and each architecture has its idiosyncrasies.  There is a sample list of gcc optimizations in this article: http://www.linuxjournal.com/article/7269
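To give one concrete (made-up) example of what such optimizations buy you, compare gcc -O0 with gcc -O2 on this:

    int scaled(int x)
    {
        int k = 4 * 256;    /* constant folding: k becomes 1024    */
        if (k < 0)          /* provably false, so dead-code        */
            return -x;      /* elimination removes this branch     */
        return x * k;       /* typically strength-reduced to a     */
    }                       /* shift, x << 10                      */

At -O0 all of this is emitted literally; at -O2 the function collapses to a single shift.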

 

Offline neslekkim

  • Super Contributor
  • ***
  • Posts: 1305
  • Country: no
I actually bothered to test this theory. I have a medium size commercial project developed with the free version of C18. There are no NOPs, except where I explicitly put them. The compiler doesn't generate NOPs by itself. This is targeting a PIC18.

Does it need to be EXACT NOP statements?

C code:
tmpPeriod=255;

Compiles to this in Free mode:
00C3  30FF     MOVLW 0xFF          ; W = 0xFF
00C4  00F8     MOVWF 0x78          ; scratch (0x78) = W
00C5  0878     MOVF 0x78, W        ; W = scratch (value unchanged)
00C6  00F5     MOVWF tmpPeriod     ; tmpPeriod = W

And this in Pro mode:
00B5  30FF     MOVLW 0xFF          ; W = 0xFF
00B7  00F7     MOVWF tmpPeriod     ; tmpPeriod = W

Those two MOV statements:
00C4  00F8     MOVWF 0x78          ; scratch = W
00C5  0878     MOVF 0x78, W        ; W = scratch, i.e. a round trip

Are those useful, other than wasting space and cycles? Are they anything other than a NOP (which also only exists for the same purpose)?
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Quote
Does it need to be EXACT NOP statements?

That (the use of NOP) was the accusation here. So yes.

Quote
Are those useful?

They are not useful in that particular case.

Can they be useful? Yeah.

Quote
a NOP (which also only exists for the same purpose)?

A NOP can exist for a lot of valid reasons.
================================
https://dannyelectronics.wordpress.com/
 

Offline neslekkim

  • Super Contributor
  • ***
  • Posts: 1305
  • Country: no
I would say you are nit-picking a bit now :). The NOP instruction is useful on its own (it can be for word alignment on CPUs that benefit from it, or to make cycle-accurate delay loops, etc.).
Those MOV instructions are of course useful where you need them, but inserted as they are now, not so much. And whether it's a NOP or those MOV instructions, same-same: they just take space and cycles. Similar to the earlier GOTO tricks that were used, but those are now removed.
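For what it's worth, a quick sketch of those legitimate uses (my example, assuming a mid-range PIC at 4 MHz, i.e. one instruction cycle per microsecond):

        NOP             ; burn exactly 1 cycle (1 us at 4 MHz)
        NOP             ; burn one more
        GOTO    $+1     ; the classic trick: 2 cycles in one word

Here every instruction is deliberate, cycle-accurate padding; the question in this thread is about padding that serves no such purpose.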

Anyway, it gets better. I found the disassembly function in MPLAB, and it seems like a lot of dead code is removed, as you also found. I will try to compare some code similar to that used in the article I linked, to see if it behaves like this with the 1.31 compiler.
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Quote
they just take space and cycles.

As pointed out earlier, you probably want to step back and see the bigger picture. The compiler, in the initial steps of translating C to assembly, is simply applying canned pieces (templates) to each statement. It is quite reasonable to expect that those canned pieces are generic, and thus ****necessarily**** inefficient (so that they work under all conditions). The inefficiency is less of a concern because it can be addressed by the optimizer later.
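To make that concrete with the listing posted earlier in this thread: the Free-mode output for tmpPeriod = 255 reads exactly like a generic "evaluate, park in scratch, store" template applied without regard to whether the parking step is needed:

        MOVLW   0xFF        ; template step 1: evaluate RHS into W
        MOVWF   0x78        ; template step 2: park the result in scratch
        MOVF    0x78, W     ; template step 3: reload the result into W
        MOVWF   tmpPeriod   ; template step 4: store to the destination

The optimizer's job is then simply to notice that steps 2 and 3 cancel out.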

You are arguing that those templates are designed ****purposefully**** or even ****punitively**** inefficient. I think you have a very high hurdle to clear, particularly for the latter.
================================
https://dannyelectronics.wordpress.com/
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Just to add: the same inefficiencies can be observed in other compilers too.
================================
https://dannyelectronics.wordpress.com/
 

Offline retrolefty

  • Super Contributor
  • ***
  • Posts: 1648
  • Country: us
  • measurement changes behavior
Rather than pushing it off as some kind of evil malfeasance, could it not just be that the 'free version' lacks or turns off all compiler optimizations, and the for-fee pro version allows enabling the various optimization options?

Complaining that the free version is not as feature-rich as the pro version seems a little strange to me. Now if the pro version with all optimization options turned off still generates 'better code' than the free version, then maybe there is something to complain about, but is that indeed the case?



 
« Last Edit: May 02, 2014, 04:57:21 pm by retrolefty »
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
Rather then to push it off as some kind of evil malfeasance, could it not just be the fact that the 'free version' just lacks or turns off all compiler optimization functions, and the for fee pro version allows for enabling the various optimization options.
It has always been clearly stated that the free version has limited optimizations; it's only crazy people that claim they're intentionally sabotaging their own compiler.

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Quote
could it not just be that the 'free version' lacks or turns off all compiler optimizations,

I think it is clearly communicated, but may not have been as clearly understood by some, that the free version has no optimization (either from day 1 or after a period of time).

Quote
Complaining that the free version is not as feature-rich as the pro version seems a little strange to me.

I think people have migrated to suggesting (?) that NOPs or similar "waste-of-time" instructions have been intentionally added to the free version with the express / sole(?) purpose of degrading its performance.

I haven't seen much proof for such an argument, and I find it logically difficult to understand from a chip vendor's perspective.
================================
https://dannyelectronics.wordpress.com/
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Quote
it's only crazy people that claim they're intentionally sabotaging their own compiler.

Particularly when you are in the business of (also) selling chips. Slowing down your own compiler, and thus your chips, makes no sense to me.
================================
https://dannyelectronics.wordpress.com/
 

Offline neslekkim

  • Super Contributor
  • ***
  • Posts: 1305
  • Country: no
Quote
it's only crazy people that claim they're intentionally sabotaging their own compiler.

Particularly when you are in the business of (also) selling chips. Slowing down your own compiler, and thus your chips, makes no sense to me.

I was not implying that; after all, the compiler came from someone who was selling compilers, not silicon. But since they have now been bought by the silicon vendor, some forget the origin and complain about that. (Me too, probably, before understanding that fact.)

 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
It's a while ago so I may be misremembering, but I think that before Microchip bought Hitech, Hitech's free version was code-size and/or device limited, with normal optimisations, so the de-optimising was a later change after Microchip bought it.
Youtube channel: Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Quote
the compiler came from someone who was selling compilers, not silicon.

Google "Hi-Tech OCG", and/or Hi-Tech's Lite license for its compilers.

And if you are interested, get a pre-OCG compiler (9.60 Std or Pro, or even 9.65) and compare its code generation with post-OCG compilers.
================================
https://dannyelectronics.wordpress.com/
 

