Author Topic: Why are you still using 8 bit MCUs?  (Read 113066 times)


Offline JTR

  • Regular Contributor
  • *
  • Posts: 107
  • Country: au
Re: Why are you still using 8 bit MCUs?
« Reply #250 on: October 09, 2013, 10:31:29 am »
I have found most PIC24s are now cheaper than an 8 bit PIC18, but I love 5v ;) (Working mainly with automotive stuff so 5v sensors etc)

Matt

Well, there are some 5V PIC24s, not a huge selection but "some." Of course there are also the older dsPIC30s, and all of those are 5V.

The PIC24 is an "agreeable" ISA to work with whether you code in C or assembler. When used with C30 or XC16 in free mode it suffers a lot less from the loss of optimisations than all the other PIC families, in my experience. I sometimes wonder if this family is underutilised simply because of a perception that 32 bits is the way to go. Anyway, that's just me musing and wondering...
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #251 on: October 09, 2013, 10:35:09 am »
Quote
I sometimes wonder if this family is under utilized simply based on a perception that 32-bits is the way to go.

Definitely the case. PIC24, in my view, is the best mcu Microchip has to offer. Yet, it is grossly under-marketed and under-appreciated.
================================
https://dannyelectronics.wordpress.com/
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: Why are you still using 8 bit MCUs?
« Reply #252 on: October 09, 2013, 11:35:51 am »
Quote
you'll spend less time fiddling with algorithmic C code to make it work.

Is that really true?

"x = y+z;" is always the same (in C) on an 8-bit mcu or a 32-bit mcu.

I have plenty of math-intensive routines that run flawlessly, and unmodified, on 8-bit, 16-bit and 32-bit mcus.
Wait until you start using pointers on an 8 bit PIC or 8051. Of course it works, but it's so slow because the compiler needs to add a lot of extra code to emulate a single memory map. It's actually so bad that Keil's compiler for the 8051 doesn't even attempt to support non-constant function pointers.
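To make the overhead concrete, here is a host-runnable C sketch of what a "generic" (three-byte) pointer on an 8051-class compiler amounts to. The tag values, array sizes and helper name are invented for illustration; the run-time dispatch on every single dereference is the real cost being described.

```c
#include <stdint.h>

/* Illustrative memory-space tags -- the actual encoding used by a real
   8051 compiler differs; these values are made up for the sketch. */
enum space { SPACE_DATA, SPACE_XDATA, SPACE_CODE };

/* A "generic" pointer: the compiler stores a space tag alongside the
   16-bit address, so every dereference costs a run-time dispatch. */
struct generic_ptr {
    uint8_t  space;
    uint16_t addr;
};

/* Stand-ins for the three separate address spaces of an 8051. */
static uint8_t internal_ram[256];
static uint8_t external_ram[1024];
static const uint8_t code_rom[4] = {0xDE, 0xAD, 0xBE, 0xEF};

/* What a compiler helper routine must do for *every* generic-pointer
   read: branch on the tag and use the right access mechanism. */
uint8_t generic_read(struct generic_ptr p)
{
    switch (p.space) {
    case SPACE_DATA:  return internal_ram[p.addr];   /* MOV  A,@Ri     */
    case SPACE_XDATA: return external_ram[p.addr];   /* MOVX A,@DPTR   */
    case SPACE_CODE:  return code_rom[p.addr];       /* MOVC A,@A+DPTR */
    }
    return 0;
}
```

A memory-specific pointer (one declared to point only into, say, xdata) skips the dispatch entirely, which is why compilers for these parts push you towards non-standard pointer qualifiers.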
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #253 on: October 09, 2013, 11:40:49 am »
I have to say that I have used pointers (to constants / variables / functions) on both chips. No ill feeling so far.

What specific issues are you referring to?
================================
https://dannyelectronics.wordpress.com/
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13750
  • Country: gb
    • Mike's Electric Stuff
Re: Why are you still using 8 bit MCUs?
« Reply #254 on: October 09, 2013, 11:57:31 am »
Quote
I sometimes wonder if this family is under utilized simply based on a perception that 32-bits is the way to go.

Definitely the case. PIC24, in my view, is the best mcu Microchip has to offer. Yet, it is grossly under-marketed and under-appreciated.
yes - it is nice, especially the pin remapping which makes PCB layouts really easy  - QFNs on 2 layers - no problem!
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2051
  • Country: nl
Re: Why are you still using 8 bit MCUs?
« Reply #255 on: October 09, 2013, 12:03:29 pm »
yes - it is nice, especially the pin remapping which makes PCB layouts really easy  - QFNs on 2 layers - no problem!

Okay, now I'm curious. I had more or less kicked out microchip due to one bad experience too many. But good chips are good chips, sooooo. Any particular ones in the PIC24 family with an awesome price/performance ratio? Or if you happen to know this, any ones with center aligned PWM?

In the meantime, time for a parametric search...

 

Offline senso

  • Frequent Contributor
  • **
  • Posts: 951
  • Country: pt
    • My AVR tutorials
Re: Why are you still using 8 bit MCUs?
« Reply #256 on: October 09, 2013, 12:21:28 pm »
I used one PIC24, for 2 weeks. I could blink an LED, but as soon as I added code to init the UART the PIC just hanged, lol. Tried 5 of them, consistent behaviour; no one on the Microchip forum could understand the error. Tried tons of code. Bye bye  |O  :-DD
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: Why are you still using 8 bit MCUs?
« Reply #257 on: October 09, 2013, 12:30:39 pm »
I have to say that I have used pointers (to constants / variables / functions) on both chips. No ill feeling so far.

What specific issues are you referring to?
Speed and code size. Another problem is that 8 bit PICs and the 8051 basically have no stack, so the compiler needs to emulate this as well. This means that function parameters and local variables live at fixed locations in RAM. The compiler usually tries to separate IRQ processes from the primary process and reuse as much space as possible. Now try a function which is used both from the main process and from an interrupt... and then think about how this will work with pointers to functions. In a device with banked memory like the 8 bit PICs, the mapping of variables to memory usually needs some 'pragma' steering as well. If you want fast & small code you have to resort to a sort of C slang where C starts to look like programming with assembler macros.
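The reentrancy hazard of a compiled stack can be demonstrated on any desktop compiler by modelling parameters and locals as fixed RAM locations, the way an overlay-allocating compiler lays them out. All names here are invented for the sketch; the point is that a second, interrupting call clobbers the first caller's arguments.

```c
#include <stdint.h>

/* On a compiled-stack target the compiler places parameters and locals
   at fixed RAM addresses instead of on a stack. Modelled here with
   'static' globals -- the variable names are invented for the sketch. */
static uint16_t mul_arg_a, mul_arg_b, mul_result;

static void fixed_loc_multiply(void)
{
    mul_result = mul_arg_a * mul_arg_b;
}

/* A second caller (think: an interrupt firing mid-call) reuses the
   same fixed locations, clobbering the first caller's arguments. */
uint16_t outer_then_inner(void)
{
    mul_arg_a = 6;                 /* main flow wants 6 * 7 */
    mul_arg_b = 7;
    /* ...an "interrupt" arrives here and calls the same function... */
    mul_arg_a = 100;
    mul_arg_b = 100;
    fixed_loc_multiply();          /* interrupt's call: fine */
    /* back in the main flow -- its arguments are gone */
    fixed_loc_multiply();
    return mul_result;             /* 10000, not the expected 42 */
}
```

This is why such compilers either forbid calling a function from both main and interrupt context, or duplicate its code, or require a (slow) reentrant calling convention.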
« Last Edit: October 09, 2013, 12:44:23 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13750
  • Country: gb
    • Mike's Electric Stuff
Re: Why are you still using 8 bit MCUs?
« Reply #258 on: October 09, 2013, 12:50:22 pm »
yes - it is nice, especially the pin remapping which makes PCB layouts really easy  - QFNs on 2 layers - no problem!

Okay, now I'm curious. I had more or less kicked out microchip due to one bad experience too many. But good chips are good chips, sooooo. Any particular ones in the PIC24 family with an awesome price/performance ratio? Or if you happen to know this, any ones with center aligned PWM?

In the meantime, time for a parametric search...
My default is a PIC24FJ64GA002, as you can get it in a 28-pin QFN, which has one of the better package-size-to-capability ratios available (RAM in particular), and a few bigger ones when I've needed more UARTs, but I haven't looked very hard at the rest of the range. I've also used the GB ones for USB host. 
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13750
  • Country: gb
    • Mike's Electric Stuff
Re: Why are you still using 8 bit MCUs?
« Reply #259 on: October 09, 2013, 12:52:45 pm »
The main reason to go 32-bit (instead of 8- or 16-bit) is that you'll spend less time fiddling with algorithmic C code to make it work. 8-bit architectures often add restrictions to what you can do in C--compromises between what the language definition supplies and what the hardware can support. With a 32-bit ARM chip, the silicon is basically an engine that runs C.

The only people who say that are desktop programmers who don't really understand C or what the compiler does, or how CPUs work. In that respect 8 bit is usually easier than 32.
It really isn't about the core - 32-bit MCUs tend to come with much more complex peripherals, clocking schemes & PLLs, power management, memory mapping, DMA etc., which take more figuring out than the simpler peripherals of typical 8-bit devices.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #260 on: October 09, 2013, 12:54:01 pm »
Quote
especially the pin remapping

Absolutely agree with that. A life saver.

Quote
Or if you happen to know this, any ones with center aligned PWM?

The good thing about those chips is that their pwm / ic channels utilize dedicated timers. Kinda like the TI approach: identical feature-rich peripherals. Datasheet is your best friend at this point.

Quote
tried tons of code,

Not enough information to blame it on the chip alone.

================================
https://dannyelectronics.wordpress.com/
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #261 on: October 09, 2013, 12:55:23 pm »
Quote
Speed and code size.

How  much slower do you think it is?
================================
https://dannyelectronics.wordpress.com/
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26918
  • Country: nl
    • NCT Developments
Re: Why are you still using 8 bit MCUs?
« Reply #262 on: October 09, 2013, 01:22:16 pm »
The main reason to go 32-bit (instead of 8- or 16-bit) is that you'll spend less time fiddling with algorithmic C code to make it work. 8-bit architectures often add restrictions to what you can do in C--compromises between what the language definition supplies and what the hardware can support. With a 32-bit ARM chip, the silicon is basically an engine that runs C.

The only people who say that are desktop programmers who don't really understand C or what the compiler does, or how CPUs work. In that respect 8 bit is usually easier than 32.
It's the other way around. I dumped the 8051 in favor of the MSP430 because the MSP430 has one single memory space, a real stack, etc. I was tired of having to jump through hoops because of the limits the 8051 imposed. I wanted to get my projects going, and that was much quicker with the 16 bit MSP430 than with the 8051. I really don't understand how having 2 or 3 different memory spaces, each with different instructions to access them, can be regarded as simple.

@dannyf: for the project I worked on it was way too slow and way too big because of the compiler-inserted pointer emulation. The code was twice the size of the available memory. I rewrote the software (which ran perfectly on ARM; ARM loves pointers) to be a better fit for the PIC (which was in the existing hardware, so I couldn't get rid of it).
« Last Edit: October 09, 2013, 01:30:38 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline senso

  • Frequent Contributor
  • **
  • Posts: 951
  • Country: pt
    • My AVR tutorials
Re: Why are you still using 8 bit MCUs?
« Reply #263 on: October 09, 2013, 02:01:56 pm »
I used one PIC24, for 2 weeks. I could blink an LED, but as soon as I added code to init the UART the PIC just hanged, lol. Tried 5 of them, consistent behaviour; no one on the Microchip forum could understand the error. Tried tons of code. Bye bye  |O  :-DD

Kinda suggests the problem was you, if such a simple application failed.

Kinda suggests the samples they sent me had a hardware bug :o
Never had a problem with STM32F4s, ATmegas, ATtinys, PIC32s and MSP430s...

There are also some PICs that have hardware stacks, and no one is crying about that; each chip has its limitations, so choose whatever fits best.
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #264 on: October 09, 2013, 04:39:51 pm »
Quote
Kinda suggests the samples they sent me had a hardware bug

Tough to argue against thousands / millions (?) of such chips used successfully.

Quote
The code size was twice the size of the available memory.

I hope that doesn't mean the use of pointers doubled the memory.

I have used pointers on those chips and I can code faster with pointers than with normal assignments for some applications.
================================
https://dannyelectronics.wordpress.com/
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: Why are you still using 8 bit MCUs?
« Reply #265 on: October 09, 2013, 05:55:24 pm »
In all cases you code for the device and environment.
I disagree. You code for the product requirements. Device selection comes later (assuming you have a choice).

Quote
The difference I see is I do not have an aversion to using an 8 or 16 bit microprocessor. 

You're more flexible than I am, to your credit. My recent projects have involved enough code that 8-bit mcus simply wouldn't work. One project used an MSP430 and required dancing around address space limitations.

Quote
By your derogatory comment of "code monkey" I now wonder why I would even bother to try to explain my position to you as a peer.  I think giving you that credit was a mistake on my part.  As a person with decades of experience in electronics and programming I find your attitude deplorable.

Chill out, dude. I *am* a "code monkey".  :P
 

Offline WarSim

  • Frequent Contributor
  • **
  • Posts: 514
Why are you still using 8 bit MCUs?
« Reply #266 on: October 09, 2013, 07:38:34 pm »

In all cases you code for the device and environment.
I disagree. You code for the product requirements. Device selection comes later (assuming you have a choice).

Pre-device selection programming is usually limited to templates, resource maps and POC code.  Err, wait... your statement indicates a different methodology.

I think I now understand which of the three programming methodologies you use.  It is valid, but I believe that process is best suited to programming within a single architecture class.  It is a faster process but restricted by class, because any deviation from the class will cause the speed gained to be lost.  So yes, given your chosen programming methodology, using only 32-bit uCs would be your best choice, because not all tasks are amenable to an 8/16 bit chip, but all should work on a 32-bit chip with varying levels of efficiency.  Until 64-bit uCs are made :)

Sorry, I can't remember the names right now.  My best guesses are deconstruction (me), code collection (you) and storyboard (scripters and web programmers). 

I used the code collection method when programming for mini-computers and PCs.  When I started with AI programming in the 90s I changed methodologies.  It was all IP locked down; everything I did was essentially original.  Later in the 00s, when programming uCs, I found I preferred the deconstruction methodology for these tasks as well. 

This debate helped me remember more about the methodologies thanks. 
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Why are you still using 8 bit MCUs?
« Reply #267 on: October 09, 2013, 08:06:32 pm »
Until 64bit uC are made :) 
Thanks for the smiley, because I really, really would not know what to use a 64 bit address space for in an embedded device.    :-//  That said, the term embedded is misused everywhere, even for 6-foot 19" rack Sun systems driving some industrial application, which is far from embedded in my vocabulary.
 

Offline WarSim

  • Frequent Contributor
  • **
  • Posts: 514
Why are you still using 8 bit MCUs?
« Reply #268 on: October 09, 2013, 09:59:40 pm »

Until 64bit uC are made :) 
Thanks for the smiley because I really, really would not know what to use a 64 bit address space for in an embedded device.    :-//  That said the term embedded is misused everywhere even in 6' large 19" rack sun systems driving some industrial application, which is far from embedded in my vocabulary.
Yes, but it is part of a "System".  The quintessential overused word.  :) 

Don't laugh (too much), we still use Solaris 9 on some of our special-purpose computers. 
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Why are you still using 8 bit MCUs?
« Reply #269 on: October 10, 2013, 12:09:29 am »
Quote
Wait until you start using pointers
Well, pointers to functions or data in Harvard "code space" can be interesting on an architecture where pointers are normally only 16 bits, but there is more than 64k of flash.  Compilers will do most of the work, most of the time, but they start to violate expectations of straightforward operation.  (avr-gcc has "trampolines."  Whee!)
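A quick illustration of the boundary problem (ordinary host C; the example address is arbitrary): a 16-bit pointer value silently truncates any address above 64 KiB, which is why toolchains for such parts need extended pointer types or trampoline stubs for anything that can live above the boundary.

```c
#include <stdint.h>

/* An arbitrary flash byte address above the 64 KiB boundary, as would
   exist on e.g. a 128 KiB part. */
#define FLASH_ADDR 0x1ABCDul

/* What happens when such an address is squeezed into a 16-bit "near"
   pointer: the high bits are silently dropped, and the pointer now
   aliases a location in the low 64 KiB. */
uint16_t to_near(uint32_t byte_addr)
{
    return (uint16_t)byte_addr;   /* 0x1ABCD becomes 0xABCD */
}
```

The compiler can only paper over this with a wider (and slower, multi-register) "far" pointer type or with trampolines: small stubs placed in reachable memory that jump to the real, out-of-range target.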

Smaller PICs have trouble with data structures bigger than "some" of a bank size, because actual data memory isn't contiguous; each bank has some IO registers and some general purpose ram.

Many languages get a bit grumpy when you violate their core assumptions by having more than one address space (Harvard architecture) or having pointers to different types of objects be different sizes.  (C on a PDP10 was interesting, because it was word-addressable.  Pointers to strings were an entirely different "thing.")

This is more of an argument about matching compilers to architectures and problems, rather than an 8bit vs 32bit argument, technically speaking.  But non-technically, "use a 32bit cpu because its architecture is more directly suited to running code written in a modern language" *is* part of the overall argument.

It can be "educational" to go back and look at some of the shared C software from the early days of the IBM/PC, when C libraries were less standardized, and there was CP/M plus several versions of unix floating around as well.  The amount of conditional compilation was ... horrifying!
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: Why are you still using 8 bit MCUs?
« Reply #270 on: October 10, 2013, 01:47:59 am »
I used the code collection method when programming for mini-computers and PCs.  When I started with AI programming in the 90s I changed methodologies.  It was all IP locked down; everything I did was essentially original.  Later in the 00s, when programming uCs, I found I preferred the deconstruction methodology for these tasks as well. 
I'm not totally clear on the distinction between "code collection" and "deconstruction", but it sounds reasonable. There's quite a lot of open "source" out there now, and it often makes sense to use someone else's code (if it works) rather than writing it yourself. It used to be build vs. buy, but now it's more like build vs. fork. Is that what you mean by "code collection"?

BTW, it sounds like we're contemporaries. My first job was writing Z80 assembly code and FORTRAN in the early 80s.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: Why are you still using 8 bit MCUs?
« Reply #271 on: October 10, 2013, 06:34:00 am »
I've had an interesting couple of days trying to see how I might port a PIC18 design from a couple of years ago onto a faster processor.

The PIC in question was a particularly good fit for the design at the time, not because of the core (which was too slow, and the fragmented memory map was a PITA) but because of the other peripherals integrated onto the die. Aside from the voltage regulator to power it, the PIC was about the only active component on the board.

The peripheral set in question? Two analogue comparators, three PWM outputs, an RTC, and some true, byte addressable EEPROM. Plus the usual Flash, RAM, timers and so on which just about every microcontroller has.

I'd hoped that with there being so many ARM Cortex M0 and M0+ micros out there, finding one to replace my PIC would have been easy, but no. EEPROM in particular seems quite rare, which is a great shame because the design really does need to store and update a few bytes at a time at fairly regular intervals. I'm fairly sure it's down to process limitations at the wafer fab... whatever process can make the core for pennies isn't so well suited to fabricating EEPROM cells.

Maybe this particular project is just a bit of an oddball in terms of its requirements?

The other thing which came out of my research is the difference in code space required. The original project fits (just!) into a 32k PIC18. Rewrite it for an STM32 part using the ST peripheral libraries - which, with hindsight, was probably more trouble than it's worth - and it won't fit into a 64k device without optimisation. Flash capacity would appear not to be directly comparable between the families, despite the fact that the Cortex M0 uses 16 bit (Thumb) instructions to improve code density. It's not a particularly I/O intensive project, most of the code is application level and stored bitmap graphics, so the impact of the more complex peripherals is minimal.

If anyone has any suggestions for a device with the necessary peripherals but about, say, twice the performance of a PIC18 @ 64 MHz, I'd be genuinely appreciative. If it's a similar price, so much the better!

Offline WarSim

  • Frequent Contributor
  • **
  • Posts: 514
Why are you still using 8 bit MCUs?
« Reply #272 on: October 10, 2013, 07:21:44 am »

I used the code collection method when programming for mini-computers and PCs.  When I started with AI programming in the 90s I changed methodologies.  It was all IP locked down; everything I did was essentially original.  Later in the 00s, when programming uCs, I found I preferred the deconstruction methodology for these tasks as well. 
I'm not totally clear on the distinction between "code collection" and "deconstruction", but it sounds reasonable. There's quite a lot of open "source" out there now, and it often makes sense to use someone else's code (if it works) rather than writing it yourself. It used to be build vs. buy, but now it's more like build vs. fork. Is that what you mean by "code collection"?

BTW, it sounds like we're contemporaries. My first job was writing Z80 assembly code and FORTRAN in the early 80s.
No, it doesn't imply borrowed code.  All the code can be original.  The distinction is when in the programming process the functional blocks are implemented.
As I said, I may have the names wrong. 

All programming problems are broken down in phases, starting at concept, ending at working code, and extended by revision. 
The least used methodology, storyboard, is when the concept deconstruction diverts to code generation after the info and data flow charting is done.  Fourth was the first attempt to bridge the gap.  Delphi was a half step between this and the next methodology. 

Code collection continues the process through process control and end point identification.  This methodology can achieve shorter development times through the use of existing libraries.  The negative effect is that every architecture change can involve huge increases in conditional compile blocks.  In a perfect scenario the only code specifically created is the glue code to connect the parts together.  Another restriction is that this method only gains its full benefit if the process components have already been generated by the last methodology. 

Deconstruction continues the process through scope charts, convention controls and instance plans.  At this stage all that is left is to code each task block.  Heavy use of code snippets enables increased programming speed, or you can just type at over 180 wpm.  I can only do 120 wpm, so I use code snippets; a lot of my peers are really fast typists.  Advantages are no IP costs and no dependence on pre-existing code.  Disadvantage: longer pre-code planning. 

I hope this explains what I was referring to, even if I got the methodology names wrong. 

I was taught that keying in should only be 20% of the effort, and that if you have to recompile a third time you've failed.  Yes, this is old school; it no longer takes a week to compile a complete project.  Obviously code for a uC could take more than 30 min to compile.  My last major computer-based project took a very fast PC 8 hrs to compile.  So this point was more significant then than now. 

Almost all my coding these days follows the deconstruction methodology, not because it is better but because it is necessary.  At work I get assigned the tasks others don't want to tackle.  I get the project just before it goes back out the door as unable to implement.  The bean counters get the excuse to bid high, the job comes back, and I get the work. 

It seems I find gluing libraries together boring, so I don't do it as a hobby.  Five of my last seven DIY projects were done to prove that something can be done.  It really annoys me when people say something can't be done when it can.  I do it to prove to myself that they are wrong, not to prove to them that they are wrong.  The other projects were because I could not find what I wanted for sale anywhere. 
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Why are you still using 8 bit MCUs?
« Reply #273 on: October 10, 2013, 10:54:07 am »
Quote
twice the performance of a PIC18 @ 64 MHz,

I don't know if you can run this particular PIC at 64 MHz, but what "performance" are you talking about? In certain cases an ARM-based chip can be many times faster than the PIC; in others not so much, or just comparable. So without defining what "performance" you are looking for, it is hard to say whether an ARM chip will do.

I do say that in IO-intensive applications an ARM chip can be considerably faster (in some cases 3x faster), given that many of them allow masked reads/writes, or bit-banding. For math-intensive applications, the ones with FPUs are considerably faster; the rest aren't that much quicker than the 8-bit types (other than at division).
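Bit-banding is a documented Cortex-M3/M4 feature: each bit in a bit-band region has a word-sized alias whose address is alias_base + byte_offset*32 + bit*4, so a single word write sets or clears one bit atomically, with no read-modify-write. A small host-runnable sketch of the address computation (the function name is ours; the region and alias base addresses are the standard ARM-defined ones):

```c
#include <stdint.h>

/* Cortex-M3/M4 bit-band alias computation: each bit in the bit-band
   region maps to one word in the alias region. Writing a word to the
   alias address touches exactly that bit. */
uint32_t bitband_alias(uint32_t region_base, uint32_t alias_base,
                       uint32_t byte_addr, uint32_t bit)
{
    uint32_t byte_offset = byte_addr - region_base;
    return alias_base + (byte_offset * 32u) + (bit * 4u);
}
```

For example, bit 1 of the peripheral byte at 0x40000004 (peripheral region base 0x40000000, alias base 0x42000000) maps to alias word 0x42000084.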

I do think your observation on code space is spot on and consistent with mine. The exception can be those chips with onboard ROM.

As to EEPROM, Freescale, NXP and TI offer many such chips. Unfortunately, those happen to be the high-end vendors too.
================================
https://dannyelectronics.wordpress.com/
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Why are you still using 8 bit MCUs?
« Reply #274 on: October 10, 2013, 10:57:34 am »
If anyone has any suggestions for a device with the necessary peripherals but about, say, twice the performance of a PIC18 @ 64 MHz, I'd be genuinely appreciative. If it's a similar price, so much the better!

PIC24FV32KA304 maybe.
 

