I'm trying to select a chip to learn and use in my hobby projects for simple things like driving displays, reading some pots, etc. For generic projects the cheapo hardware is Arduino Uno clones (ATmega328), ESP8266, ESP32, STM32 Blue Pill, etc. These devices cost nothing (compared to your time).
As you may already expect, they are typically easier to program and design with. Just have a look at how many bypass capacitors you need around, e.g., an STM32F051 versus a similar 8-bit micro.
When I stepped forward from AVR towards ARM (STM32), I noticed I spent much more time scrolling through the datasheet to find a particular piece of information than I was used to on the AVR; though that was likely caused by my refusal to use fancy libraries like HAL, which is a real PITA.

IMHO that is an STM32-specific problem. ARM microcontrollers aren't made equal, and some have better documentation / easier-to-understand peripherals than others.
If you want a cheap chip with a decent design, go for the AVR8. There is plenty of support. If you then want something more advanced, go to the XMEGA.
I really like 8-bit controllers from Silicon Labs, but looking at their ARM offering I can't help but wonder if there's even a point in using 8-bit chips anymore. Cortex-M0+ chips are cheap as chips these days.
So my question is: what are 8-bit uCs still used for (in new designs)? As a hobbyist, is there still a point in using them, or should I invest my time in learning the ARM platform?
... their inherent limitations, especially regarding the memory models, can be a real PITA.
They may. But under some circumstances they may be turned into advantages. For example, some of the PIC16s have the PCLATH register, which must be set every time you do a GOTO. This is certainly an example of the PITA you're talking about. However, I had a project once where I managed to fit 3 different apps into a small PIC16 by giving each of them its own page. Each application had its own ISR and its own table of virtual functions. Switching between them was as easy as setting the PCLATH register. It is really fast and it preserves good ISR latencies. Replicating the design with a "proper" 32-bit chip while preserving the same latencies would require a really big and fast chip.
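To make the trick concrete, here is a minimal XC8-style sketch of the idea; the page numbers and the entry-at-offset-0 convention are my own invention for illustration, not the original project's layout:

    #include <xc.h>

    /* Each "app" is linked into its own 2K program page (hypothetical layout,
       with each app's entry point at offset 0 of its page). */
    #define APP_A_PAGE 1
    #define APP_B_PAGE 2
    #define APP_C_PAGE 3

    void run_app(unsigned char page)
    {
        /* On midrange PIC16, PCLATH<4:3> supply the upper address bits of
           GOTO/CALL, so loading PCLATH selects which 2K page we jump into. */
        PCLATH = (unsigned char)(page << 3);
        asm("goto 0");   /* lands at offset 0 of the selected page */
    }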
... I really like 8-bit controllers from Silicon Labs ...

I've been using the Silabs series for years, and still am. They have an awesome amount of I/O, A/Ds, D/As, IRQs, etc. There's bugger all I can't do with them.
So my question is - what are 8-bit uC still used for (in new designs). As a hobbyst, is there still a point in using them or should I invest my time in learning ARM platform?
Say you have a main board and a front panel/user-interface board. Your main board has a big micro or an FPGA or whatever, and you could run a large ribbon cable between the two boards so the main micro could control everything on the front panel, or you could use a small 8-bitter as a co-processor, managing all of the user interface stuff and communicating with the main board over SPI. SPI at 20 MHz, say, gives you a ton of bandwidth for reading buttons and encoders and updating the blinkenlighten.
Some of these 8-bit guys are cheaper than those I2C-based LED drivers. It's crazy.
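As a rough sketch of that front-panel co-processor pattern on an AVR (the one-byte command protocol, 0x01 = read buttons on PIND and 0x02 = toggle an LED, is invented for the example):

    #include <avr/io.h>
    #include <stdint.h>

    int main(void)
    {
        DDRB |= (1 << PB4);              /* MISO is the slave's only SPI output */
        SPCR = (1 << SPE);               /* enable SPI; slave mode is the default */
        DDRC |= (1 << PC0);              /* indicator LED */

        for (;;) {
            while (!(SPSR & (1 << SPIF)))
                ;                        /* wait for the main board to clock a byte in */
            uint8_t cmd = SPDR;
            if (cmd == 0x01)
                SPDR = PIND;             /* preload button states for the next transfer */
            else if (cmd == 0x02)
                PORTC ^= (1 << PC0);     /* update the blinkenlighten */
        }
    }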
I've spent a couple of days looking at the low end of the uC market, as I'm trying to select a chip to learn and use in my hobby projects for simple things like driving displays, reading some pots, etc. General housekeeping stuff.
I really like 8-bit controllers from Silicon Labs, but looking at their ARM offering I can't help but wonder if there's even a point in using 8-bit chips anymore.
... As a hobbyist, is there still a point in using them, or should I invest my time in learning the ARM platform?
I am designing a brand-new system with 4x 8-bit AVRs in it. The tasks are appropriate for the capability of the AVRs, I can distribute the tasks to physically separate MCUs, and they are low power, cheap, familiar, reliable. There are 9 PCBs in the system, so a single 32-bit MCU with big IO capability would be a signal-routing disaster.
One deals with a few buttons, an LCD display, and a few LEDs; another is essentially an IO expander, since I need to monitor quite a few low-speed digital inputs; another is primarily for A/D conversion to monitor fairly slow analog signals; the fourth one is master control with the most robust firmware. They are all connected with I2C, so it only takes 2 wires from PCB to PCB instead of 40+ discrete signals per PCB.
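For flavour, a bare-bones sketch of the IO-expander node using the classic AVR TWI peripheral; the 0x20 bus address and the report-PIND behaviour are made up for the example, and real code would handle more TWI status codes:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define NODE_ADDR 0x20                 /* example 7-bit address for this node */

    ISR(TWI_vect)
    {
        if ((TWSR & 0xF8) == 0xA8)         /* addressed with SLA+R: master wants data */
            TWDR = PIND;                   /* hand over the sampled input states */
        TWCR = (1 << TWINT) | (1 << TWEA) | (1 << TWEN) | (1 << TWIE);  /* re-arm */
    }

    int main(void)
    {
        TWAR = NODE_ADDR << 1;             /* listen on our address */
        TWCR = (1 << TWEA) | (1 << TWEN) | (1 << TWIE);
        sei();
        for (;;)
            ;                              /* CPU is free; TWI runs on interrupts */
    }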
I like them, even for modern and brand new designs of industrial gear.
9 PCBs? What are you building, the next-generation Space Shuttle?
Many posters have mentioned the PIC and AVR 8-bitters in the preceding posts.
How about all the other traditional MCUs, such as HCS08 and HCS12?
Are they no longer used?

Sure, still used.
The PCB count
I really like 8-bit controllers from Silicon Labs, but looking at their ARM offering I can't help but wonder if there's even a point in using 8-bit chips anymore. Cortex-M0+ chips are cheap as chips these days.
Yep, for example I like their MCUs with USB. I can get an MCU with USB, a decent ADC which is more flexible to use (in analog) than in most MCUs, an internal 3.3 V regulator (that can provide up to 100 mA) and precise internal oscillators, starting from $0.75 @ qty 1, $0.65 @ qty 100, and $0.55 @ qty 1000. I don't even need any crystal. The MCU and a few decoupling caps is all I need to get a working USB device. Try that with ARM.
But generally not as cheap as 8 bit micros in production quantity.
They also tend to have a reputation of still being sold in 10-15 years time.
Also the 8-bitters come in tiny low-pin-count packages and can often work from 5V.
if you want to prepare your students for the future
you'd better teach them how to design :horse:
I still use 8-bit AVRs. They're simple, dependable and familiar. I messed around with ARM some, but even just setting up the environment to develop for them is far more complex.
Q: Can anyone recommend some 32-bitters that work with simple or open-source toolchains (or similar)? Something you can install without pain on any computer, something that doesn't need a license or GUI or gigabytes?
Thank you for the advice, I'll give it a go.
Hearing that I mainly just need a per-device magic header makes me happy. Entry points and everything else don't change per core design, or are they also something hacked into the header?
Low power devices (battery power, untethered) are probably the biggest technical reason a hobbyist would be interested in 8-bit micros. (All those additional transistors aren't "free" in terms of power consumption.)
For commercial applications, a chip in quantity being $0.30 instead of $0.90 translates to about a $3 difference in the cost of the finished good on a retailer's shelf, so that's a huge driver. Then, if you're a hobbyist with industrial/commercial aspirations, you might be interested in following that trend as well.
IoT and wearables pose a unique requirement: nanoamp sleeping while retaining RAM content. That's the place for 8-bitters and FRAM MSP430s.
I went to a few distributors' websites and priced out the cheapest 1K quantity of 8-bit AVRs and the cheapest 1K quantity of M0+ chips I could find to get the $0.30 and $0.90 figures. Do you have reference data to contradict those figures?

If you have a chip with only a simple MCU and a small amount of SRAM on it, then the area, and therefore the cost, of the actual chip in modern smallish processes is completely dominated by the pads used to connect to the pins on the package. Whether your internal datapaths are 8 or 32 bits wide is way down in the noise.
However 8 bit processors often need more instructions and more clock cycles to get your computation done than a 32 bit processor. The 32 bit processor can finish and go to a low power mode sooner, even at the same clock speed.
Nonsense. Embedded applications grow larger and larger. Predictability and latency issues are solved with buffers and DMA nowadays. More speed and more memory mean you can write more complex software quicker, with fewer bugs. And there are many areas where having speed just saves the day. Think about signal processing. Over the years I've created quite a few ARM-microcontroller-based circuits which do some hefty signal processing. Back in the old days you'd need a DSP for that, and assembly programming.
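As an aside on the "finish sooner, go to low power sooner" point quoted above, the sleeping half really is a one-liner on Cortex-M; a minimal sketch, where do_work() is a hypothetical stand-in and the header name assumes an STM32F0 with CMSIS:

    #include "stm32f0xx.h"       /* CMSIS device header; provides __WFI() */

    extern void do_work(void);   /* hypothetical workload */

    void main_loop(void)
    {
        for (;;) {
            do_work();           /* fewer cycles here on a wider core... */
            __WFI();             /* ...means more time parked here, drawing very little */
        }
    }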
Often you don't need any arithmetic at all, especially on critical paths. So, it's absolutely of no consequence whether your chip is 8-bit or 32-bit.
Of course, all things being equal, 32-bit is better than 8-bit, but in the real world things are not equal. The architectures which lost their battle in big computers already (ARM, MIPS) have been dragged into the embedded world, where they're even less suitable. Embedded world needs direct access commands, predictability, low latency. It doesn't need long pipelines and caches.
TI's MSP430 series is pretty good at that as well. Actually a more modern process is likely to perform better compared to an architecture (the 8051, for example) which wasn't designed for low power. Fewer bits doesn't always mean less power. Another thing to look for when looking at low-power devices is the start-up time for the oscillators. A redesign of a PIC-based product using an MSP430 microcontroller ended up consuming nearly 3 times less power.

So they, at least, don't think wearables should be restricted to 8 or 16 bits.
Not necessarily, but that doesn't mean 8-bitters have no place.
As a system manager chip (power sequencing, voltage rail monitoring, etc.), 8-bitters are still better.
They can stay alive forever without consuming any meaningful current.
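On an AVR-class part that parked state is just the standard power-down sleep; a minimal avr-libc sketch (wake-source setup is omitted, and the actual current depends on the part and what's left enabled):

    #include <avr/sleep.h>
    #include <avr/interrupt.h>

    void park(void)
    {
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);  /* deepest mode: pin change/WDT wakes */
        cli();
        sleep_enable();
        sei();                                /* enable wake interrupts right before */
        sleep_cpu();                          /* sub-µA class with peripherals off */
        sleep_disable();
    }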
TI's MSP430 series is pretty good at that as well. Actually a more modern process is likely to perform better compared to an architecture (the 8051, for example) which wasn't designed for low power.

Process != architecture.

https://www.silabs.com/products/mcu/8-bit/efm8-sleepy-bee

Lowest MCU sleep current with supply brownout (50 nA)
Lowest MCU active current (150 μA/MHz at 24.5 MHz)
Lowest MCU wake-on-touch average current (< 1 μA)
Lowest sleep current using internal RTC and supply brownout (< 300 nA)
Ultra-fast wake-up for digital and analog peripherals (< 2 μs)
Integrated LDO to maintain ultra-low active current at all voltages
Up to 14 capacitive sense channels

And it's 8051.
Over the years I've created quite a few ARM microcontroller based circuits which do some hefty signal processing. Back in the old days you'd need a DSP for that and assembly programming.
And the Apple watch (a typical wearable) has a 64-bit dual-core ARM processor.

I don't think the Apple Watch is a typical wearable. It's probably the top end of wearables, with most applications requiring a lot less power.
I keep seeing the MSP430 in relation to low power. Is it really that much better than most other solutions?
... Back in the old days you'd need a DSP for that and assembly programming.

DSP?

Wherever it is enough, and ARM can do quite a lot. One of the projects, for example, involved an acoustic echo canceller which did part of the calculations in soft floating point; I used a small 70 MHz ARM microcontroller for that. Buffering inside the serial codec interface took care of any latency issues, so no worries there.
$5 dsPIC33CH can do FIR at a rate of 100MTap/s (totally predictable, no reliance on cache or bus congestion)
$15 smallest Spartan-7 FPGA can do FIR at a rate of 4 GTap/s (totally predictable)
Where does your ARM DSP fall in this spectrum?
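For reference, one FIR "tap" is a single multiply-accumulate, so the Tap/s figures above are just how many iterations of this inner loop the hardware retires per second; plain fixed-point C:

    #include <stdint.h>

    /* One output sample of an n-tap FIR: n multiply-accumulates. */
    int32_t fir_sample(const int16_t *x, const int16_t *h, int n)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++)
            acc += (int32_t)x[i] * h[i];   /* a DSP's MAC retires this line per cycle */
        return acc;
    }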
And it's not just big-company commercial applications. Countless one-man bands are doing Kickstarters these days, and it's trivial to sell 1000 units in a Kickstarter. That 70 cents extra on the BOM cost really matters at even 1000 qty. And ordering your parts pre-programmed can be a big deal too. Also, it's not uncommon to have a little cheap pre-programmed 8-bit 5-pin micro in a circuit just to do one small dedicated job, rather than have the main processor care about doing that.
Even if it's those 70 cents, that's 700 bucks. Depending on the circumstances I could see that easily being offset by a shorter development cycle or something similar. Or not, of course.
.... E.g. for a client project I'm using a small PIC for the reset circuit, replacing a special dedicated >$1 reset IC, and additionally using it for controlling the main power of the circuit in sleep mode, which all fits in one of these small, ultra low power PICs, and programming costs only a few cents:
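A toy version of that reset-supervisor role might look like this (XC8-style; the pin assignments, active level and 100 ms hold-off are invented, and the register names assume an enhanced-midrange part):

    #include <xc.h>
    #define _XTAL_FREQ 4000000UL        /* assumed internal oscillator speed */

    #define RESET_N  LATAbits.LATA0     /* drives the main circuit's reset line */
    #define PWR_GOOD PORTAbits.RA1      /* supply-good input */

    void main(void)
    {
        TRISAbits.TRISA0 = 0;           /* reset line is our output */
        RESET_N = 0;                    /* hold the target in reset at power-up */
        __delay_ms(100);                /* let the rails settle */
        for (;;)
            RESET_N = PWR_GOOD;         /* release reset only while power is good */
    }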
Of course, all things being equal, 32-bit is better than 8-bit, but in the real world things are not equal. The architectures which lost their battle in big computers already (ARM, MIPS) have been dragged into the embedded world, where they're even less suitable. Embedded world needs direct access commands, predictability, low latency. It doesn't need long pipelines and caches.
Long pipelines and caches have nothing to do with the ISA.
MAC W4*W5, A, [W8]+=6, W4, [W10]+=2, W5   ; dsPIC single-cycle MAC: accumulate W4*W5 into A while prefetching the next coefficient and sample
SiFive's "E20" 32 bit RISC-V core has no caches, just instruction and data SRAMs and runs at 1 clock cycle for all instructions except taken branches, which take 2 cycles. That's pretty predictable. The Cortex M0 is similar.
BTW, instead of a simple microcontroller, these days you can get a Linux-capable CPU in a TQFP package for $1:
https://hackaday.com/2018/09/17/a-1-linux-capable-hand-solderable-processor/

I understand the desire for faster hardware, but is a full-blown Linux system really a good upgrade for the average AVR use case? Adding many abstraction layers isn't necessarily helping, which is why 8-bitters still seem viable.
If you pay $3.25 for the V3S, you get 64 MB integrated DRAM and it runs at 1.2 GHz, has video input/output, H.264 encoders/decoders, ethernet, USB etc.:
http://linux-sunxi.org/V3s
I guess this will cause some trouble for the other CPU manufacturers. For example the STM32F439BIT6 costs EUR 10 in higher quantities, but has only 256 kB RAM and runs at 180 MHz. Why would anyone buy such a CPU anymore?
Many years ago I built diskless stations, which booted Linux over ethernet (with these old BNC ethernet cards with boot sockets, and burning my own EPROM chips for it) and then running an X11 server (in X11 terminology, the "server" is just the program which receives the drawing instructions to display it). There was one server PC which ran all the programs, for a dozen stations. Each station needed only 8 MB RAM. With 64 MB you could fly to the moon.
I'm sure you can run it bare-metal as well.
Is there a convenient, accessible and well-documented IDE, though?
Also, it's not uncommon to have a little cheap pre-programmed 8-bit 5-pin micro in a circuit just to do one small dedicated job, rather than have the main processor care about doing that.

Indeed. First, a jellybean old-school TTL/CMOS solution takes up way more PCB space than a single small micro; second, the micro is more flexible.
I guess this will cause some trouble for the other CPU manufacturers. For example the STM32F439BIT6 costs EUR 10 in higher quantities, but has only 256 kB RAM and runs at 180 MHz. Why would anyone buy such a CPU anymore?
At 1k quantities quoted prices are almost random numbers. Even 100k doesn't generally get you close to serious volume pricing, and you need to negotiate serious pricing; it's never quoted in price lists. Some devices can be obtained in small quantities for a modest premium over a negotiated volume price. Others are several times the volume price.
Coming back to your "why would anyone" question: people still design in those 8-bitters and run them at 1 MHz, with a few kilobytes of memory, in huge volumes, because they are enough for so many tasks. So why would 256K and 180 MHz be "too little" for anyone?
Many years ago I built diskless stations, which booted Linux over ethernet (with these old BNC ethernet cards with boot sockets, and burning my own EPROM chips for it) and then running an X11 server (in X11 terminology, the "server" is just the program which receives the drawing instructions to display it). There was one server PC which ran all the programs, for a dozen stations. Each station needed only 8 MB RAM. With 64 MB you could fly to the moon.
I also need the processing power - for which, the 64-bit data bus, 64-bit memories and fairly usable set of 64-bit instructions as well as SIMD set are good to have
There are many applications where it needs to be faster and where you need more memory, for example for video IO (this chip has an ethernet interface and a camera input, so it could probably be used as a web cam), or polyphonic realtime audio synthesis with effects (reverb needs lots of memory). And the Allwinner V3s is not just a core, it has some useful peripherals as well, see the datasheet (http://linux-sunxi.org/images/2/23/Allwinner_V3s_Datasheet_V1.0.pdf), like DMA, PWM, SPI, I2C, UART, audio codec etc. So if you don't miss a peripheral for your application, it could be a cheap replacement for the higher-priced STM32 series chips, but with more RAM and much faster. It can run slower as well, probably not using much more power than an STM32 running at 180 MHz.
If you want to multiply 32-bit numbers, the pipeline will be rather long
I also need the processing power - for which, the 64-bit data bus, 64-bit memories and fairly usable set of 64-bit instructions as well as SIMD set are good to have
why 64-bit instructions?
Many years ago I built diskless stations, which booted Linux over ethernet (with these old BNC ethernet cards with boot sockets, and burning my own EPROM chips for it) and then running an X11 server (in X11 terminology, the "server" is just the program which receives the drawing instructions to display it). There was one server PC which ran all the programs, for a dozen stations. Each station needed only 8 MB RAM. With 64 MB you could fly to the moon.
Let me understand: these ethernet cards come with a sort of PC BIOS extension, and it's cool that every IBM PC's BIOS scans these areas at boot-up and, if it finds an extension (with a valid checksum), jumps into it.
I know the maximum size of these Ethernet cards' ROM is 512 Kbyte, right? What did you put inside that ROM? A sort of net-bootloader, to load the Linux kernel and the RAM rootfs from the net? And inside the RAM rootfs image, did you have an X11 server with xterm, fonts, a window manager and miscellanea?
Here I am doing these things with PowerPC boards, but ... I need 32 Mbyte of RAM just for the kernel (5 Mbyte stripped) and rootfs. X11 is now as big as a dead elephant, and it doesn't matter if you try to strip it down by forcing "nano-X"; these tricks don't pay off in terms of the Mbytes you need.
... Lowest MCU sleep current with supply brownout (50 nA) ... And it's 8051.
At least compare to the equivalents, like their Cortex-M0+ chips. Zero Gecko, for example:
https://www.silabs.com/products/mcu/32-bit/efm32-zero-gecko

You missed the point, and they are not so much different anyway. Compared with modern MCUs, on average, both have very low consumption. I never said that their 8051 offering is the lowest-power-consumption MCU ever.
I don't remember the details, but I think the EPROM was smaller. There was a website where you could create the EPROM file. It loaded the Linux kernel by TFTP, then mounted a system partition with all programs read-only over NFS, and a user partition over NFS.
70 cents on 1000 units is $700. On average a little over 1 day's worth of engineering time. One of the things I've learned over the years is to start looking at component costs at much higher volumes than 1000 units. However, it is good to put a lot of thought into production. Again, looking at component costs only can severely hurt your business if a product takes too long to program & test.
... That 70 cents extra on the BOM cost really matters at even 1000 qty.
70 cents here and there. I don't think one will limit not caring about expenses to the MCU only; in the end you'll end up with a 3-5 times more expensive BOM.
You are making the same mistake as many others: you don't care about engineering time!
Indeed, so-called NRE costs can be huge. The problem is that the target for some managers is BOM reduction and not overall cost reduction. A lot of managers also take on a project just to keep their personnel busy, so their budget and FTEs are not cut next budget round. Last, don't forget ignorance: how much effort will it eventually be to change from, let's say, a PIC16 to an STM32F0? Most have absolutely no clue.
I know saving a few cents from the BOM gets you an 'atta boy' quickly because it is easy to visualise. However, in many projects simple cost savings end up becoming huge time sinks. If you need to spend a few weeks trying to optimise code because of a silicon bug, too little memory or lesser performance in a cheaper microcontroller, you'll end up losing your boss's money. And it is not just development time: the sales also start later. Not to mention the time could have been spent on the next product.
... how much effort will it eventually be to change from, let's say, a PIC16 to an STM32F0? Most have absolutely no clue.
... you don't care about engineering time!
That is another problem of going for 8-bitters: they run out of steam at some point and then you have to go up the learning curve again.

For sure - I have all sorts of functionality coded, documented, and proven solid on 8-bit AVRs. My last and current projects may have benefited from more powerful chips - but it would have delayed the release by months. Opportunity costs and development costs explode, and I am back to a Ramen noodle diet for 6 months. Introducing a new architecture will certainly have a time penalty. Hell, I am using a single micro to generate a 3-phase clock to sync power converters. Super easy, and I have control via I2C after assembly. Cheap, easy, and success is nearly guaranteed. I have the chips in stock and loaded into the pick-and-place because they are used in a dozen other projects for totally different tasks. I use the same chip to monitor a soft power switch and drive 2 RGB LEDs, communicating over 2-wire I2C. It is easy.
For the most part I generally prioritize speed and predictability during design. I need to do some low-priority projects with some fancier uCs to develop the skills and libraries without the time/financial pressure.
As a result nearly every project I do has a different microcontroller in it (which fits the project). I'm not stuck to a limited choice of microcontrollers.

Yes you are, since you limit yourself to NXP :-*
I'm sure you can run it bare-metal as well.
I'd assume you can run a barebones binary from uboot
... So if you don't miss a peripheral for your application, it could be a cheap replacement for the higher-priced STM32 series chips, but with more RAM and much faster.
This is not the sort of thing that microcontrollers are typically used for. Do you really want to wait for Linux to boot and think about crashes, memory leaks, hacks, etc in your microwave oven, dishwasher, clothes washer and dryer, tv remote, alarm clock, etc? Linux SOCs and microcontrollers are two entirely different fields with some small bit of overlap in the middle. The strength of the microcontroller is in the peripherals, and the simplicity. There is effectively no boot time, there is no operating system, everything happens in real time and can be tweaked down to individual clock cycles. You can get microcontrollers in tiny packages with only a few pins, you can get ones that consume miniscule amounts of power. It is absolutely silly to suggest a Linux SOC for microcontroller applications, even if the Linux route was cheaper the end result would be inferior for the sort of applications where microcontrollers are typically used.
Engineering time also depends on who's doing the job. You'll work much faster and better if you work with something you're already familiar with. From the boss's point of view: if I employed nctnico, I would make sure to give him ARM-based jobs. On the other hand, Siwastaja would have zero chance of getting an ARM job - all the ARM jobs would go to nctnico!
True. Some of the bootloaders (u-boot isn't the only one) have turned into mini-OSes with a lot of functionality. So if you need a lot of processing power / memory in a microcontroller-ish application, I would see this method as an option.
I'd assume you can run a barebones binary from uboot
Right, this would be the easiest way. Boot time would be much less than a second as well, and you could integrate your program into u-boot itself, which then autostarts after u-boot has done all the hard stuff like setting up the PLL etc. I don't know the version on the Allwinner, but usually u-boot even supports USB and network.
As a hobbyist, is there still a point in using them or should I invest my time in learning the ARM platform?
Absolutely, don't waste your time with 8-bitters, ARM is the way to go.

With zero arguments you're not exactly making a strong case, especially considering there's a thread's worth of arguments why 8-bit chips are relevant.
http://infocenter.arm.com/help/index.jsp
https://www.youtube.com/watch?v=7LqPJGnBPMM
https://www.youtube.com/watch?v=d_O2tu5CMbQ
https://www.youtube.com/watch?v=snmipEHsDu0
Also in Arduino form:
https://www.youtube.com/watch?v=EaZuKRSvwdo
With zero arguments you're not exactly making a strong case, especially considering there's a thread's worth of arguments why 8 bit chips are relevant.
There's a reason the entire world has moved on to ARM µCs already, big time, to the tune of billions of µCs per year. And you want to steer him into a dead-end road?

Really, the entire world? Then why do new parts still appear on the market?
Eight bitters are obsolete, a thing of the past, "not recommended for new designs".
Semiconductor MCU revenue market forecast (millions of dollars)
The OP: "[..] I'm trying to select a (µC) chip to learn [..]".
Had he said "I'm trying to learn electronics", you'd recommend to begin with thermionic valves? No.
There's a reason the entire world has moved on to ARM µCs already, big time, to the tune of billions of µCs per year. And you want to steer him into a dead end road?
Eight bitters are obsolete, a thing of the past, "not recommended for new designs".
8-bit is a great place to learn the basics. They provide a solid baseline skill set that translates to all sorts of more complex architectures.
They also provide a CRITICAL understanding of system efficiency. When a developer or programmer is using completely overkill hardware, it promotes sloppy and inefficient programming/development techniques.

IMHO this is wrong. There is absolutely no reason you can't learn to program efficiently on an ARM. Just give a student an assignment for which the C compiler can't do its optimising magic. If necessary the knowledge can be applied to 8-bit as well, but in today's situation not learning ARM means someone is behind. And anyone who is claiming ARM is more complicated than 8-bit clearly has never really looked at a simple ARM controller. These do exist, and are easier to understand compared to an 8051.
but in today's situation not learning ARM means someone is behind. And anyone who is claiming ARM is more complicated than 8 bit clearly has never really looked at a simple ARM controller. These do exist and are easier to understand compared to an 8051.

You lost me here. Either you learn the ARM core and read the thousands of pages, study registers and opcodes/assembly instructions - and it sure is more complicated than 8-bit cores - or you mean the peripherals, which have zero to do with ARM; but I know you know that, so that is not what you meant. So what exactly did you mean to say?

Just take a low-end ARM controller from NXP like the LPC1111 as an example. These don't have many fancy features and don't need a single line of assembly to get going. Also, you don't need to study thousands of pages about the ARM core - where did you get that idea from? System integrators who design microcontrollers and SoCs may want to know all the ins & outs, but I for sure don't need to know all that. If you want to program using assembly, then all the opcodes for the ARM fit on two (or three) pages. Even better: it has fewer addressing modes and memory areas compared to the 8051, so there's less to worry about.
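To make "getting going" concrete, a bare-metal blinky sketch for an LPC111x using NXP's CMSIS device header (pin choice and delay count are arbitrary):

    #include "LPC11xx.h"                        /* NXP's CMSIS device header */

    int main(void)
    {
        LPC_SYSCON->SYSAHBCLKCTRL |= (1 << 6);  /* clock the GPIO block */
        LPC_GPIO0->DIR |= (1 << 7);             /* P0.7 as output */
        for (;;) {
            LPC_GPIO0->DATA ^= (1 << 7);        /* toggle the LED */
            for (volatile int i = 0; i < 200000; i++)
                ;                               /* crude busy-wait delay */
        }
    }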
The OP: "[..] I'm trying to select a (µC) chip to learn [..]".

The world hasn't moved on to ARM, and eight-bitters aren't obsolete or a thing of the past. Plenty of reasons have been given for why that is, and your claims of the opposite still don't come with arguments.
Had he said "I'm trying to learn electronics", you'd recommend to begin with thermionic valves? No.
There's a reason the entire world has moved on to ARM µCs already, big time, to the tune of billions of µCs per year. And you want to steer him into a dead end road?
Eight bitters are obsolete, a thing of the past, "not recommended for new designs".
Both Apple and Microsoft are working to migrate their products from Intel to ARM-based processors developed in-house.
There are simple ARM µCs too, and ARM assembly is easy peasy, a much better design and much easier to understand than the arbitrary ISA messes of the obsolete 8 bitters.

ARM assembly isn't easy peasy. It being complicated is why industry veterans like Jack Ganssle don't bother with it. I assume we value his opinion above ours. ARM chips are a lot more complicated than 8-bit chips. There's no way around it.
Jack Ganssle is just an old fart. Remember: those who can, do; those who can't, teach. From my own experience, ARM assembly is as easy or as difficult as assembly for the Z80, ADSP2180, x86, 8051, MIPS, 68000 and a few others I have probably already forgotten about.
Few people bother with any assembly language in MCUs. They use C. ARM chips are generally a lot more complicated than 8-bit chips, but it's nothing to do with the instruction set. It's the complexity of getting them configured to the point where they can run any useful code which makes them more complicated to get started with.
Again: that depends entirely on what kind of microcontroller you are using. There are 8-bit microcontrollers which are hard to configure as well. Don't confuse the CPU core with peripheral complexity.
I'll trust the old fart over the random forum guy. Besides, not teaching doesn't automatically mean being in the "can" group. ;)
I neither mentioned nor intended to refer to peripheral complexity.
I hate it when other people say something is hard because they failed or are not inclined to make the effort to learn. It doesn't say anything about something being hard or easy to do for someone else. Idolising a person is the worst thing to do. Always go from your own strength and let nobody tell you that you are unable to do this or that. History is littered with people who succeeded where many others failed.
Actually, you did, because every microcontroller has the CPU start executing code from the default start address after power-up. ARM microcontrollers are no exception to that rule.
Are you considering the clock, interrupt control, power system, and so on to be peripherals? Those things take considerable effort to set up on most ARM MCUs before any actual application code can run and do anything useful with the real peripherals. In most 8-bit MCUs you might need a couple of operations to get the clock up to speed.
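For contrast, "a couple of operations" is literal on an AVR: going to full oscillator speed at runtime is a two-write timed sequence (this assumes a part with the CLKPR prescaler and the CKDIV8 fuse set):

    #include <avr/io.h>

    void clock_full_speed(void)
    {
        CLKPR = (1 << CLKPCE);  /* unlock the prescaler; next write must follow within 4 cycles */
        CLKPR = 0;              /* divide-by-1: full oscillator speed */
    }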
I agree with the last bit, but not with the assumptions before it.
[Simple ARM chips exist]

Such as? I mean, it's not really awful, but I don't think I've run across an ARM yet that isn't: "oh, you want to run at the rated maximum clock speed? That means you'll have to start by configuring our complicated clock system! (Don't worry; you can use the 1-kbyte library function instead.)"

"ARM assembly isn't easy peasy. It being complicated is why industry veterans like Jack Ganssle don't bother with it."

ARM assembly language is ... unpleasant, designed to be output by a compiler rather than written by a human. That's especially true of the Cortex-M0 used on the low-end ARM MCUs. "Supports the ARM instruction set. No, not that instruction. Not that register. And not that mode of THAT instruction, and ... the range of operands for THIS instruction is a bit limited on CM0." Sigh. Even on a well-behaved ARM, it's still three instructions, a literal pool constant, and two registers trashed to write a value to a peripheral register.
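That "three instructions plus a literal pool constant" cost is visible even from C; this one-liner compiles to roughly ldr (address), movs (value), str on a Cortex-M (the address is an example, not any specific part's map):

    /* Hypothetical memory-mapped output register at an example address. */
    #define PORT_OUT (*(volatile unsigned int *)0x48000014u)

    void set_pin(void)
    {
        PORT_OUT = (1u << 5);   /* ldr r0,=0x48000014 ; movs r1,#32 ; str r1,[r0] */
    }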
8bit is a great place to learn the basics. They provide a solid baseline skill set that translates to all sorts of more complex architectures.
There are simple ARM µCs too, and ARM assembly is easy peasy, a much better design and much easier to understand than the arbitrary ISA messes of the obsolete 8 bitters.
Many of the 8 bitters do have horrid ISAs. 8080/Z80, 8051, 6800, 6502, PIC. A few however are actually pretty pleasant: 6809 and AVR, for example.
I do love ARM, and it's simpler than x86, but it's gained a lot of cruft over the years and is much more complex than MIPS or RISC-V. Both RISC-V (with 16- and 32-bit instructions) and the latest nanoMIPS encoding (with 16-, 32- and 48-bit instructions) come in very code-size competitive with Thumb2, while still being much simpler.
"time constrained program"

Were the ARM manuals sufficient? When it comes to timing, it seems to all fall apart when you get to the not-full-speed flash memory, which usually has some sort of "accelerator" or cache in front of it, which is usually a vendor feature that is not very well documented :-(
Are there 8-bit microcontrollers based on the 8080/Z80, 8051, 6800, 6502? I thought they were simply CPUs.

Z80 - yes. There's a whole line of Zilog microcontrollers (on-chip RAM and code memory, plus peripherals), plus some of the Renesas chips are pretty much Z80-like (with bells, whistles, and kludges).
Jack Ganssle is just an old fart.

I find this kind of personal attack unnecessary, especially with that type of wording, even if I may agree about the advantages of modern µCs. It's just an unpleasant kind of interaction, and it quickly derails.
The 8051 was always a microcontroller. There are now oodles of variants with up to a full address space (or more) of program flash, although RAM is usually a bit on the weak side. Any number of special purpose chips (MP3 player chip, USB Flash Controller Chip, etc) turn out to be 8051 cores with some special purpose hardware attached.
I don't know if there are any 6800 microcontrollers, but the 6805 line (which is quite similar architecturally) spurred several microcontroller families, including S08 and S12 families that are still labeled as current, rather than "legacy."
I don't know of any 6502-like microcontrollers.
IIRC the ST7 and also the STM8 have evolved from the 6502.
It has the same six registers (A, X, Y, SP, PC, CC) as the ST7, but the index registers X and Y have been expanded to 16 bits, and the program counter has been expanded to 24 bits.
Even before cache and flash issues the interrupt overhead was already too high to bother.
Chip | Maker | Interrupt latency / return timing notes |
1802 | RCA | ? Most instructions take 16 clocks (6.4μs), some, 24 (9.6μs). 2.5MHz @ 5V. |
8080 | Intel | ? (still waiting for information) |
8088 | Intel | 10 bus cycles or 40 clocks(?) (B)+(C) (still waiting for further information) |
8086 | Intel | WDC says 182 clocks max total latency. * * * * * (still waiting for information) |
Z8 | Zilog | IRET (E) takes 16 execution cycles. I don't know how many clock cycles per execution cycle. 8MHz? |
Z80 | Zilog | 11-19 clocks (B)+(C) depending on mode, or 2.75-4.75μs @ 4MHz. RETI (E) is 14 clocks, or 3.5μs @ 4MHz. |
Z8000 | Zilog | IRET (E) takes 13 cycles in non-segmented mode, and 16 in segmented mode. I don't know if that's instruction cycles or clock cycles. |
8048 | Intel | (?) return (E) is 2.7μs @ 11MHz |
8051 | Dallas | 1.5μs @ 33MHz (52 clocks) latency |
8051 | Intel | 1.8μs (C) min @ 20MHz. 5.4μs (A)+(C) max total latency @ 20MHz. (3-9μs @ 12MHz.) Interrupt sequence (C) and return (E) take 4.6μs @ 20MHz ST |
80C51XA | Philips | 2.25μs for interrupt+return (C)+(E) @ 20MHz. ST Instructions 2-24 cy, or 0.1-1.2μs. Avg 5-6 cy, or around 0.27μs. |
KS88 | Samsung | 3μs for interrupt+return (C)+(E) @ 8MHz ST Instructions 6-28 cy, or 0.75-2.5μs. Avg 11 cy, or 1.38μs. |
78K0 | NEC | 4.3μs for interrupt+return (C)+(E) @ 10MHz ST Instructions 4-50 cy, or 0.4-5.0μs. Avg 15 cy, or 1.5μs. |
COP8 | National | 70 clocks (7 instruction cycles). RETI (E) is 50 clocks (5 instruction cycles). (7μs & 5μs @ 10MHz) |
μPD78C05 | NEC | RETI (E) takes 13 or 15 clocks (2.08 or 2.4μs at 6.25MHz) |
μPD70008/A | NEC | sequence (C) takes 13 or 19 clocks. Return (E) takes 14 clocks. Instructions take 4-23 clocks each. 6MHz in '87 book. |
V20 | NEC | RETI (E) takes 39 clocks or 3.9μs @ 10MHz in '87 book. Instruction set is a superset of that of 8086/8088. |
V25 | NEC | ? (still waiting for information) |
68000 | Motorola | 46 clocks or 2.875μs minimum @ 16MHz (B)+(C)?. Has a very complex interrupt system. |
6800 | Motorola | (C)=13 clocks, including pushing the index register and both accumulators. RTI (E) takes 10 clocks. 2MHz. |
6809 | Motorola | (C)=19 clocks. Stacks all registers. RTI (E) 15 clocks. 2MHz (8MHz/4). FIRQ-RTI take 10 & 6 clocks, & work more like 6502 IRQ-RTI. |
68HC05 | Motorola | 16 clocks typ (8μs @ 2MHz) |
68HC08 | Motorola | Instructions 1-9 cy, or 0.125-1.125μs. Avg 4-5 cy, or around 0.55μs. |
68HC11 | Motorola | (C)=14 clocks. RTI (E)=12 clocks. Total for interrupt+return=8.75μs @ 4MHz (16MHz/4). ST Instructions 2-41 cy, or 0.5-10.25μs. Avg 6-7 cy, or around 1.6μs. |
68HC12 | Motorola | 2.63μs for interrupt+return (C)+(E) @ 8MHz. ST Instructions 1-13 cy, or 0.125-1.625μs. Avg 3-4 cy, or 0.45μs. |
68HC16 | Motorola | 2.25μs for interrupt+return (C)+(E) @ 16MHz. ST Instructions 2-38 cy, or 0.125-2.375μs. Avg 6-7 cy, or around 0.4μs. |
PIC16 | Microchip | (C)=8 clocks (2 instruction cycles), and RETFIE (E) is also 8 clocks; but this doesn't even include saving and restoring the status register. That's an extra, rather mickey-mouse operation. 20MHz Most instructions 4 cy, or 0.2μs. |
TMS370 | TI | 15 cycles (3μs) min (C), 78 (15.6μs) max (A)+(C), and a cycle is 4 clocks (200ns min)! 20MHz. RTI (E) is 12 cy (48 clocks or 2.4μs). |
TMS7000 | TI | (C)=19 cycles min (17 if from idle status) 5MHz, 400ns min cycle time (IOW, interrupt sequence is 7.6μs min, 6.8 from idle.) RETI (E) is 9 cycles, or 3.6μs @ 5MHz. |
ST6 | STM | 78 clocks min, or 9.75μs @ 8MHz to fetch interrupt vector. More to reach first ISR instruction. RETI is 26 clocks, or 3.25μs. |
ST7 | STM | 3μs for interrupt+return @ 8MHz. ST Instructions 2-12 cy, or 0.25-1.5μs. Avg 4-5 cy, or around 0.55μs. |
ST9 | STM | External IRQ best case: 1.08μs @ 24MHz. NMI best case: 0.92μs. internal interrupts best case: 1.04μs. 2.25μs @ 24MHz for interrupt and return ST Instructions 6-38 instruction cy, or 0.5-3.67μs. Avg 17 cy, or around 1.4μs. |
ST9+ | STM | 1.84μs @ 25MHz for interrupt and return, ST Instructions 2-26 instruction cy, or 0.16-1.04μs. Avg 11 cy, or around 0.9μs |
H8/300 | Hitachi | 8/16-bit: 2.1μs @ 10MHz for interrupt and return ST Instructions 2-24 cy, or 0.2-3.4μs. Avg 5-6 cy, or around 0.55μs. |
M16C M30218 | Mitsubishi Renesas | 18 cy min (C), or 1.125μs @ 16MHz w/ 16-bit data bus. 50 cy max (A)+(C). REIT is 6 cy, or 0.375μs. Dual register sets like the Z80. Max instruction length 30 cy. |
CIP-51 | Silicon Labs Cygnal | μC p/n C8051F2xx) total latency 5-18 cy or 0.2-0.72μs @ 25MHz. RETI takes 5 cy, or 0.2μs. This is the only one I have data on here that gives the 6502 below any competition. |
65C02 | WDC | Normal latency (C) 7 clocks (0.35μs) min, 14 clocks (0.7μs) max (A)+(C). RTI 6 cy (0.3μs). 20MHz. Instructions 2-7 cy, or 0.1-0.35μs. Avg 4 cy, or 0.2μs. Special case: IRQ from WAIt instrucion with interrupt-disable bit I set: no more than 1 cy (0.05μs!) |
Sometimes you need to make a bold statement to get a point across to wake people up. Look at Jack's bio. He clearly is a systems designer and not a programmer.Jack Ganssle is just an old fart.I find this kind of personal attack unnecessary, especially with that type of wording, even if I may agree about the advantage of modern µCs. It's just an unpleasant kind of interaction, and it quickly derails.
I will say my personal experience is that with the ARM MCUs I've used, finding a lot of the details of the core, which matter for a time-constrained program, was not easy, and they were not in the datasheets. The datasheets were mostly about peripherals, which is normal for an ARM MCU as far as I can tell. I was forced (or maybe I just didn't know better) to look up the data in the ARM manuals. That isn't pleasant for me and feels like searching through a large garbage bin for a lost check. The 8-bit MCUs I've used had that data in the datasheet. If you don't care that's fine but that was my experience. I still use 8 bit and 32 bit MCUs, ARM and not, but it's definitely not as easy to find some data that you might need.This depends on the manufacturer. NXP has an extensive section on the processor core (including instruction set) in the user manual.
Sometimes you need to make a bold statement to get a point across to wake people up. Look at Jack's bio. He clearly is a systems designer and not a programmer.It would probably be naive to think his experience and preference only applies to him personally being behind a keyboard or writing code. This would also have been the moment to concede it may not have been the best way of expressing yourself, rather than doubling down on it.
I don't know if there are any 6800 microcontrollers, but the 6805 line (which is quite similar architecturally) spurred several microcontroller families, including S08 and S12 families that are still labeled as current, rather than "legacy."The 6801 was an MCU built around the 6800 core. Later on, the 6805 MCUs used a stripped down version of the 6800 core, and the 6811 MCUs used an enhanced version of the 6800 core. There were other variants, some from Hitachi and ST.
I don't know of any 6502-like microcontrollers.I believe WDC and others put the 6502 core in some MCUs.
ARM architecture is easy to understand, what comes to the interrupt vector table, how the interrupts work, and how to use them. Also register pushing on interrupt entry. I find all this simpler than on AVR, although AVR is quite simple as well. Having prioritized interrupts that can pre-empt others, IMO, makes everything easier. You can write longer ISRs with lower priorities, and make timing critical ISRs pre-empt them; you have more tools in your box to work different ways case-by-case. Using two lines of code to set the interrupt priority and enable it is not complex.
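To make the "two lines of code" point concrete, here is a minimal sketch using the standard CMSIS calls; the IRQ names and priority values are assumptions for an STM32F1-class part, not taken from the posts above:

#include "stm32f1xx.h"                       // vendor CMSIS header (assumed part)

void irq_setup(void)
{
    NVIC_SetPriority(USART1_IRQn, 3);        // numerically larger = lower urgency: long, slow ISR
    NVIC_SetPriority(TIM2_IRQn, 0);          // highest urgency: timing-critical ISR can pre-empt
    NVIC_EnableIRQ(USART1_IRQn);
    NVIC_EnableIRQ(TIM2_IRQn);
}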
In avionics we have constraints, and ISR pre-emption is extremely bad for us, hence strictly prohibited.
I don't want to give you a "no-go", simply a hint: I would be extra careful about abusing this feature.
so no OS allowed?
no-go for nested interrupts, no-go for pre-empted interrupts. Our directives are simple: interrupts are allowed, but! when an interrupt happens it MUST not be interrupted.
Even before cache and flash issues the interrupt overhead was already too high to bother.
[url]https://community.arm.com/processors/b/blog/posts/beginner-guide-on-interrupt-latency-and-interrupt-latency-of-the-arm-cortex-m-processors[/url]
For comparison, here's a famous 8 bitter:
[url]http://6502.org/tutorials/interrupts.html#a[/url]
[url]http://6502.org/tutorials/interrupts.html#1.3[/url]
A copy-paste of a table from there (clearly biased):
CPU | Vendor | Interrupt latency / timing notes
1802 | RCA | ? Most instructions take 16 clocks (6.4μs), some, 24 (9.6μs). 2.5MHz @ 5V.
8080 | Intel | ? (still waiting for information)
8088 | Intel | 10 bus cycles or 40 clocks(?) (B)+(C) (still waiting for further information)
8086 | Intel | WDC says 182 clocks max total latency. (still waiting for information)
Z8 | Zilog | IRET (E) takes 16 execution cycles. I don't know how many clock cycles per execution cycle. 8MHz?
Z80 | Zilog | 11-19 clocks (B)+(C) depending on mode, or 2.75-4.75μs @ 4MHz. RETI (E) is 14 clocks, or 3.5μs @ 4MHz.
Z8000 | Zilog | IRET (E) takes 13 cycles in non-segmented mode, and 16 in segmented mode. I don't know if that's instruction cycles or clock cycles.
8048 | Intel | (?) return (E) is 2.7μs @ 11MHz
8051 | Dallas | 1.5μs @ 33MHz (52 clocks) latency
8051 | Intel | 1.8μs (C) min @ 20MHz. 5.4μs (A)+(C) max total latency @ 20MHz. (3-9μs @ 12MHz.) Interrupt sequence (C) and return (E) take 4.6μs @ 20MHz ST
80C51XA | Philips | 2.25μs for interrupt+return (C)+(E) @ 20MHz. ST Instructions 2-24 cy, or 0.1-1.2μs. Avg 5-6 cy, or around 0.27μs.
KS88 | Samsung | 3μs for interrupt+return (C)+(E) @ 8MHz ST Instructions 6-28 cy, or 0.75-2.5μs. Avg 11 cy, or 1.38μs.
78K0 | NEC | 4.3μs for interrupt+return (C)+(E) @ 10MHz ST Instructions 4-50 cy, or 0.4-5.0μs. Avg 15 cy, or 1.5μs.
COP8 | National | 70 clocks (7 instruction cycles). RETI (E) is 50 clocks (5 instruction cycles). (7μs & 5μs @ 10MHz)
μPD78C05 | NEC | RETI (E) takes 13 or 15 clocks (2.08 or 2.4μs at 6.25MHz)
μPD70008/A | NEC | sequence (C) takes 13 or 19 clocks. Return (E) takes 14 clocks. Instructions take 4-23 clocks each. 6MHz in '87 book.
V20 | NEC | RETI (E) takes 39 clocks or 3.9μs @ 10MHz in '87 book. Instruction set is a superset of that of 8086/8088.
V25 | NEC | ? (still waiting for information)
68000 | Motorola | 46 clocks or 2.875μs minimum @ 16MHz (B)+(C)?. Has a very complex interrupt system.
6800 | Motorola | (C)=13 clocks, including pushing the index register and both accumulators. RTI (E) takes 10 clocks. 2MHz.
6809 | Motorola | (C)=19 clocks. Stacks all registers. RTI (E) 15 clocks. 2MHz (8MHz/4). FIRQ-RTI take 10 & 6 clocks, & work more like 6502 IRQ-RTI.
68HC05 | Motorola | 16 clocks typ (8μs @ 2MHz)
68HC08 | Motorola | Instructions 1-9 cy, or 0.125-1.125μs. Avg 4-5 cy, or around 0.55μs.
68HC11 | Motorola | (C)=14 clocks. RTI (E)=12 clocks. Total for interrupt+return=8.75μs @ 4MHz (16MHz/4). ST Instructions 2-41 cy, or 0.5-10.25μs. Avg 6-7 cy, or around 1.6μs.
68HC12 | Motorola | 2.63μs for interrupt+return (C)+(E) @ 8MHz. ST Instructions 1-13 cy, or 0.125-1.625μs. Avg 3-4 cy, or 0.45μs.
68HC16 | Motorola | 2.25μs for interrupt+return (C)+(E) @ 16MHz. ST Instructions 2-38 cy, or 0.125-2.375μs. Avg 6-7 cy, or around 0.4μs.
PIC16 | Microchip | (C)=8 clocks (2 instruction cycles), and RETFIE (E) is also 8 clocks; but this doesn't even include saving and restoring the status register. That's an extra, rather mickey-mouse operation. 20MHz Most instructions 4 cy, or 0.2μs.
TMS370 | TI | 15 cycles (3μs) min (C), 78 (15.6μs) max (A)+(C), and a cycle is 4 clocks (200ns min)! 20MHz. RTI (E) is 12 cy (48 clocks or 2.4μs).
TMS7000 | TI | (C)=19 cycles min (17 if from idle status) 5MHz, 400ns min cycle time (IOW, interrupt sequence is 7.6μs min, 6.8 from idle.) RETI (E) is 9 cycles, or 3.6μs @ 5MHz.
ST6 | STM | 78 clocks min, or 9.75μs @ 8MHz to fetch interrupt vector. More to reach first ISR instruction. RETI is 26 clocks, or 3.25μs.
ST7 | STM | 3μs for interrupt+return @ 8MHz. ST Instructions 2-12 cy, or 0.25-1.5μs. Avg 4-5 cy, or around 0.55μs.
ST9 | STM | External IRQ best case: 1.08μs @ 24MHz. NMI best case: 0.92μs. Internal interrupts best case: 1.04μs. 2.25μs @ 24MHz for interrupt and return ST Instructions 6-38 instruction cy, or 0.5-3.67μs. Avg 17 cy, or around 1.4μs.
ST9+ | STM | 1.84μs @ 25MHz for interrupt and return, ST Instructions 2-26 instruction cy, or 0.16-1.04μs. Avg 11 cy, or around 0.9μs
H8/300 | Hitachi | 8/16-bit: 2.1μs @ 10MHz for interrupt and return ST Instructions 2-24 cy, or 0.2-3.4μs. Avg 5-6 cy, or around 0.55μs.
M16C M30218 | Mitsubishi Renesas | 18 cy min (C), or 1.125μs @ 16MHz w/ 16-bit data bus. 50 cy max (A)+(C). REIT is 6 cy, or 0.375μs. Dual register sets like the Z80. Max instruction length 30 cy.
CIP-51 | Silicon Labs Cygnal | (μC p/n C8051F2xx) total latency 5-18 cy or 0.2-0.72μs @ 25MHz. RETI takes 5 cy, or 0.2μs. This is the only one I have data on here that gives the 6502 below any competition.
65C02 | WDC | Normal latency (C) 7 clocks (0.35μs) min, 14 clocks (0.7μs) max (A)+(C). RTI 6 cy (0.3μs). 20MHz. Instructions 2-7 cy, or 0.1-0.35μs. Avg 4 cy, or 0.2μs. Special case: IRQ from WAI instruction with interrupt-disable bit I set: no more than 1 cy (0.05μs!)
([url]http://6502.org/tutorials/interrupts/cartoon_3.gif[/url])
You are still idolising. If someone tells me something is extremely hard to do I already close one ear. I'm not going to get useful information from that person other than that he/she can't and doesn't know people who can. I keep one ear open for clues on how not to tackle a problem, but that is sketchy because maybe the approach was good but the execution was wrong.I'm prioritising the experience of a well known and respected industry icon over the self-assessment of a random guy on the Internet. You can tell yourself that's idolising, but that only seems to prove my point.
The AVR we were considering had 5 cycles latency in, 4 cycles out and ran at 20MHz. I believe the total overhead for the ARM based MCU was ~30 and it was clocked at 48MHz. It didn't work out. I believe the ARM was an M4 core. Neither ended up being selected but it's just an example.One thing to consider here is that (unlike most other microcontrollers) an ARM Cortex has already saved a whole bunch of registers on the stack when you enter the interrupt routine. On most other microcontrollers the software has to push the registers onto the stack by itself before being able to do something useful. The latter adds to the total latency of the interrupt handling.
Add also that you may have to save even more on the stack if you use floating point. It gets very painful, very fast, and it's not unusual to have to start disassembling and massaging the ISR code or modifiers accordingly. One problem with this approach is that your well-meaning fiddling makes the code less maintainable when someone comes along without knowledge of your assumptions.Personally I try hard to avoid being dependent on software when it comes down to nanosecond timing on a regular microcontroller because it is hard to achieve and hard to maintain. I guess these situations are likely originating from a hardware designer thinking 'they can fix this in software for sure'.
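One small illustration of the hardware stacking point: on a Cortex-M an ISR is just an ordinary C function, because r0-r3, r12, lr, pc and xPSR are stacked by the core before the first instruction of the handler runs. A minimal sketch, assuming an STM32 timer interrupt (the handler name and registers are assumptions, not from the posts above):

#include "stm32f1xx.h"                // assumed device header

void TIM2_IRQHandler(void)            // name comes from the startup file's vector table
{
    if (TIM2->SR & TIM_SR_UIF) {
        TIM2->SR = ~TIM_SR_UIF;       // acknowledge the update interrupt (rc_w0 clear)
        /* timing-critical work here; no manual register save needed */
    }
}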
Yes, we actually didn't need to save or restore anything because we kept the registers from the compiler. Bit of an edge case. The ARM manuals did specify the registers automatically saved and restored. If you use floating point hardware it saved and restored all of those registers, which more than doubles the overhead if I remember right. It's good because you don't have to worry, bad because you can't do anything about it. In our case the AVR worked except we had no time to manage UI and user IO stuff. After testing we decided it wasn't ideal to require a reset to change modes and things. The ARM would have worked better there but the interrupt time wouldn't work. It's likely we could have found the perfect ARM MCU but we fell back on our go-to MCU instead.
AFAIR an M4 is 12 or 29 cycles without/with the FPU context save, and with lazy stacking the hardware itself figures out if an interrupt uses the FPU
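For reference, lazy stacking is controlled by two bits in FPCCR, and both are set out of reset on most parts; a sketch using the CMSIS names, assuming a Cortex-M4F device header:

#include "stm32f4xx.h"   // assumed device header; pulls in core_cm4.h

void fpu_lazy_stacking(void)
{
    /* ASPEN: reserve FP stack frame space automatically on exception entry.
       LSPEN: defer the actual s0-s15/FPSCR save until the ISR touches the FPU. */
    FPU->FPCCR |= FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk;
}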
Our reference was http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka16366.html
I didn't do the calculations but it's the data we have. You could be right that it's 12 in, 12 out, so 24. Slightly better than the AVR, but not by enough for our purposes.
(unlike most other microcontrollers) an ARM Cortex has already saved a whole bunch of registers on the stack when you enter the interrupt routine.
push r1          ; save r1 (avr-gcc's "known zero" register)
push r0          ; save r0 (scratch)
in   r0, 0x3f    ; read SREG, the status register
push r0          ; save SREG
eor  r1, r1      ; re-zero r1 for compiled code
(assembly language programmers complain about the save/setup of the "known zero" R1.) We're up to about 15 cycles, which is more than an ARM CM0-4 takes ALL THE TIME.
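For reference, that prologue is roughly what avr-gcc emits around even a trivial C interrupt handler; a minimal sketch, assuming a part with a Timer0 overflow vector:

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t ticks;          // shared with main(), hence volatile

ISR(TIMER0_OVF_vect)             // the compiler adds the push/in/push/eor prologue shown above
{
    ticks++;
}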
1) Caches are a moot point. They tend to be disabled by default. Just don't enable them.You think so, eh? I'm talking less about formal "cache memory that you need to enable" and more about things like the "2*64bit prefetch buffer" (stm32f1xx) or the "Enhanced flash memory accelerator" (LPC176x). Your lovely single-cycle 120MHz RISC CPU isn't going to run so well if every instruction takes 5 additional cycles to fetch from the flash program memory (SAMD51; 50ns access times for flash seem to be "typical").
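The flash wait-state point shows up directly in code: on an STM32F1 you have to program the wait states (and normally leave the prefetch buffer on) before raising the clock. A minimal sketch, assuming the usual ST CMSIS register names:

#include "stm32f1xx.h"   // assumed vendor header

void flash_for_72mhz(void)
{
    /* 2 wait states are required above 48MHz on this family; the prefetch
       buffer hides most, but not all, of the extra fetch latency. */
    FLASH->ACR = FLASH_ACR_PRFTBE | FLASH_ACR_LATENCY_2;
}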
Another problem is the mistake of lumping all ARMs together.
10) Previous point tl;dr: With AVR, every beginner has a clear and simple route to follow. On ARM MCUs, everybody's teaching a different way, it's hard to know what to do, and it often looks difficult and complex - many code examples are long just to blink an LED.
11) Once more: the biggest issue I see in learning ARM MCUs is the lack of simple, easy to understand, lightweight examples, and tutorials to do sane, sustainable development.
2) Getting an STM32 to blink an LED requires one (1) register write more than an AVR
ldr r1, =IOPORT_BASE ;; load address of ioport registers. (48bits!)
mov r0, #(1<<pinno) ;; load bit that needs set (assumes that pinno<=7, on CM0.)
;; 32bits on v7m, for all single-bit values.
;; needs to be another 48bit "ld r0,=b" on v6m, or maybe
;; a mov followed by a shift.
str r0, [r1, IOPORT_SET] ;; write to the SETABIT register. (1 word)
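The same "one extra register write" in C, as a minimal sketch assuming an STM32F1-class part with the LED on PC13 (the Blue Pill arrangement); the pin and mode values are assumptions:

#include "stm32f1xx.h"   // assumed vendor header

void led_on(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_IOPCEN;                        // the extra write: port clock enable
    GPIOC->CRH = (GPIOC->CRH & ~(0xFu << 20)) | (0x2u << 20);  // PC13 = push-pull output, 2MHz
    GPIOC->BSRR = (1u << 13);                                  // atomic set, as in the asm above
}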
ARM assembly is easy peasyPay no attention to the 4 possible instruction encodings for loading a constant into a register, that range from 16 to 48 bits of flash space, and half of which are not available on a CM0... It's a RISC CPU and RISC always has completely regular instructions!
Another problem is the mistake of lumping all ARMs together.
Especially from the POV of a beginner where the biggest hurdle is learning a toolchain and getting programming working.
An STM32 is as different to an NXP ARM as it is to a PIC. It's all about the peripherals and the tools.
Well usually you'd use an LDR Rx,[PC+y] instruction to load a constant into a register on ARM.QuoteARM assembly is easy peasyPay no attention to the 4 possible instruction encodings for loading a constant into a register, that range from 16 to 48 bits of flash space, and half of which are not available on a CM0... It's a RISC CPU and RISC always has completely regular instructions!
There is currently a strange pattern of behaviour when offering MCUs to customers:
- For engineers: MCUs mostly sell for their peripheral and memory content, and the core doesn't matter a lot. Even the speed of the core doesn't matter a lot, because most MCUs aren't run at their full speed.
- For managers: If it doesn't have an ARM core they aren't interested. If it does have an ARM core, they will happily sit through a sales pitch for a device that is a horrible mismatch for their needs.
http://www.coolermaster.com/peripheral/keyboards/suppressor/
There is currently a strange pattern of behaviour when offering MCUs to customers:
- For engineers: MCUs mostly sell for their peripheral and memory content, and the core doesn't matter a lot. Even the speed of the core doesn't matter a lot, because most MCUs aren't run at their full speed.
- For managers: If it doesn't have an ARM core they aren't interested. If it does have an ARM core, they will happily sit through a sales pitch for a device that is a horrible mismatch for their needs.
For marketers: Our device is powered by a cutting-edge 2-core ARM processor.
usually you'd use an LDR Rx,[PC+y] instruction to load a constant into a register on ARM.
if ((PORTB & SWITCHMASK) == MY_SWITCH_COMBO ...
It's sort of like "ARM does everything faster and better except for dealing with peripheral registers", overlooking the fact that a lot of embedded code does very little BUT "deal with peripheral registers". :-(
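Fleshing out the elided fragment above into something complete (the masks and values are hypothetical; note that on a classic AVR the input register is PINB, PORTB being the output latch):

#include <avr/io.h>
#include <stdint.h>

#define SWITCHMASK      0x0Fu    /* hypothetical: four switches on PB0..PB3 */
#define MY_SWITCH_COMBO 0x05u    /* hypothetical pattern: PB0 and PB2 closed */

uint8_t combo_active(void)
{
    return (PINB & SWITCHMASK) == MY_SWITCH_COMBO;  /* one 8-bit read covers them all */
}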
http://www.coolermaster.com/peripheral/keyboards/suppressor/
Boasting a 72MHz 32 bit MCU and 128kB of flash in a keyboard, as if it does matter FFS :palm:.
http://www.coolermaster.com/peripheral/keyboards/suppressor/Not to mention that having excess resources on the microcontroller means more room for nefarious people to play with. Manipulating the keyboard firmware can be very interesting.
What's wrong with DIP? Why are there no ARMs in DIP? Damn it.There are a few M0 parts with a DIP package option. Not many, but look around and you'll find some aimed at the white goods market. Look hard enough and there might be the odd M3 or M4 part, aimed at the motor control market, offered in a DIP package.
Cheap gumstick modules with all the supporting passives appear to be the DIY replacement.
Well, you may need a beefy MCU. However it does not matter whether it's an 8051 or ARM, or what clock is used, as long as the product works as intended. It's not like it gives more FPS on a PC. About the G613, I have it; they could not fix it for more than half a year after release. And when they released it, mine died during a firmware update |O. BTW it's not like it lost a key press: any first press was registered as a press and immediate release, so key combinations failed.http://www.coolermaster.com/peripheral/keyboards/suppressor/
Boasting a 72MHz 32 bit MCU and 128kB of flash in a keyboard, as if it does matter FFS :palm:.
Complex RGB LED blinking patterns need lots of fast PWM channels, CPU resources and memory :-DD Also it seems that 21st century programming and regression testing culture requires a firmware update for everything, including LED lamps. True story: Logitech managed to release the G613 keyboard and MK850 combo with a very serious keyboard bug (lost Ctrl+keypress on wake-up). Only a firmware update saved them from a recall.
There are PIC32's in DIP with internal oscillator that just need power and a couple of caps to work.Cheap gumstick modules with all the supporting passives appear to be the DIY replacement.
Exactly. A bare DIP chip is not ready to run out of the box; the same applies to an LQFP/SOIC ARM soldered on an adapter. Apart from the dirt-cheap & popular stm32 Blue Pill (https://wiki.stm32duino.com/index.php?title=Blue_Pill), there are many other breadboardable ARM boards:
https://os.mbed.com/platforms/?form-factor=6
IIRC, according to Hackaday, that's not the case anymore.NXP still makes one with 28 pins!
https://hackaday.com/2018/04/15/rip-dip-arm/
No it doesn't - PWM doesn't scale well, so you use binary code modulation instead; all you need is the ability to turn on for a precise time - trivial if using external drivers with a compare peripheral, but also doable entirely in software with a little care
Complex RGB LED blinking patterns need lots of fast PWM channels,
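A minimal sketch of the binary code modulation idea from the reply above: each bit of an 8-bit brightness value is displayed for 1, 2, 4, ... 128 ticks, so a full frame needs only eight port writes instead of 255 PWM updates. The output and delay helpers are hypothetical:

#include <stdint.h>

extern void write_led_port(uint8_t value);   /* hypothetical 8-channel output */
extern void delay_ticks(uint16_t ticks);     /* hypothetical precise delay */

void bcm_frame(const uint8_t brightness[8])
{
    for (uint8_t bit = 0; bit < 8; bit++) {
        uint8_t out = 0;
        for (uint8_t ch = 0; ch < 8; ch++)
            if (brightness[ch] & (1u << bit))
                out |= (1u << ch);           /* channel on during this time slot */
        write_led_port(out);
        delay_ticks(1u << bit);              /* slot lasts 2^bit ticks */
    }
}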
NXP still makes one with 28 pins!
https://www.nxp.com/part/LPC1114FN28
Actually this is quite a capable microcontroller. I'm using the QFN version in one of the projects I've done recently. And yes, these have an internal oscillator as well so with some bypass caps you are ready to roll.
Newark says they have it available in the near future. Given the component shortages it doesn't surprise me. Try to buy 100nF 0603 capacitors... :scared:NXP still makes one with 28 pins!
https://www.nxp.com/part/LPC1114FN28
Thanks nctnico, but what the $&%... can't find/buy it anywhere, must be made of unobtanium! :-)
What is the thing with DIP? Are they still being used by any mass produced products?If you want to use a single-sided SRBP PCB, as many white goods makers still do, DIPs are the preferred package.
The only DIP chips I can still think of are DIP4 optocouplers and DIP8 integrated flyback SMPS chips.
For the sake of love, I've not used a DIP for years, literally years, besides student teaching projects.
PS. Also why TSSOP? They are harder to hand solder than DFN/QFN. The gull wing sucks and stores solder, so bridging happens all the time with short pad extension. With QFN, I can do 0.2mm extension per side and they still don't bridge.
What is the thing with DIP? Are they still being used by any mass produced products?The only DIP chips I can still think of are DIP4 optocouplers and DIP8 integrated flyback SMPS chips.Besides the SMPS PSU chips you mentioned, which alone can count in millions, DIPs are in gazillions in home appliances such as heater showers, water heaters, refrigerators, air conditioning and toasters - everything but highly dense electronics/computing-related stuff.
DIPs can be placed automatically for decades.If you want to use a single-sided SRBP PCB, as many white goods makers still do, DIPs are the preferred package.
Just use an SOP as a DIP. Fan out the pins and it's virtually a DIP, without the high package material cost and manual part insertion cost.
Automated through-hole placement is slow and easy to screw up. I don't think 100% of parts are automated even at places with machine placement.DIPs can be placed automatically for decades.
Don't think that's true for China, or maybe I'm wrong.
In my mind, automatic THT placement is expensive and prone to failure, way more expensive than SMT.
Modern single-layer PCBs usually consist of largish SMT parts glued on the bottom, including ICs, and through-hole parts and jumpers on the top: electrolytic capacitors, power semiconductors, optocouplers. All of it is wave soldered, no reflow. If it's a simple circuit, it may be through-hole only as well.
Nobody does reflow soldering on single-layer phenolic PCBs to begin with. There is an additional assembly step, but the PCB goes through soldering only once.Modern single-layer PCBs usually consist of largish SMT parts glued on the bottom, including ICs, and through-hole parts and jumpers on the top: electrolytic capacitors, power semiconductors, optocouplers. All of it is wave soldered, no reflow. If it's a simple circuit, it may be through-hole only as well.
If it has a bunch of big connectors you might have to do wave soldering anyway, so going all through-hole might save a few steps.
I visited a TV manufacturing plant from Nokia in Germany in the early 90's and even back then they placed all through-hole components automatically. The machines would even detect a failure, pull the part back out (bin it) and re-insert a new one. I'd say those through-hole placement machines were more impressive than those newfangled SMT p&p machines 8)Automated through-hole placement is slow and easy to screw up. I don't think 100% of parts are automated even at places with machine placement.DIPs can be placed automatically for decades.
I visited a TV manufacturing plant from Nokia in Germany in the early 90's and even back then they placed all through-hole components automatically. The machines would even detect a failure, pull the part back out (bin it) and re-insert a new one. I'd say those through-hole placement machines were more impressive than those newfangled SMT p&p machines 8)That's probably one of the reasons why Nokia TVs have not been made since 1996.
The SMT parts were glued to the solder side and the whole PCB would then go through the wave soldering machine.
There are many automated THT lines in China.DIPs can be placed automatically for decades.
Absolutely smallest and cheapest won't have core-coupled RAMs, of course.If you meant the "Tightly Coupled Memory" (TCM) feature provided by ARM, that's an OPTION in CM7, not available at all in CM0 through CM4 (according to the Wikipedia table). OTOH, the SAMD51 claims to have TCM, and the WP table also claims that M0, M3 and M4 don't have cache either, while several datasheets claim simple caches. (I assume this is the difference between implementing the cache in the vendor-provided memory system vs using the ARM-provided IP, in which case I don't know what Atmel/Microchip means by "TCM"...)
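In practice vendor core-coupled RAM is used through the linker; a sketch of the usual GCC idiom, assuming an ST-style .ccmram output section exists in the linker script (the section name is an assumption):

#include <stdint.h>

/* Hot data placed in core-coupled RAM; note the contents are NOT
   zero-initialised by the default startup code unless the linker
   script and startup handle that section explicitly. */
__attribute__((section(".ccmram")))
static uint16_t sample_buf[512];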
What is the thing with DIP? .... For the sake of love, I've not used a DIP for years, literally years, besides student teaching projects.I bet by the time you used your first SMD chip, other people had been using them for years, literally years.
PS. Also why TSSOP? They are harder to hand solder than DFN/QFN. The gull wing sucks and stores solder, so bridging happens all the time with short pad extension. With QFN, I can do 0.2mm extension per side and they still don't bridge.At least one practical reason is that QFN are more susceptible to cracked solder joints due to board flex and/or thermal expansion/cycling. The difference has been tested. And regarding deep thermal cycling, apparently gull wing chips can be more than an order of magnitude better in some of these tests.
Well, this post certainly wanders all over the place ;-)
I bet by the time you used your first SMD chip, other people had been using them for years, literally years.I understood that QFN is actually one of the most reliable packages from a mechanical point of view. I think I read about this in a paper somewhere, but I don't think I'd be able to find that again. I stand to be corrected.
The reason, as far as the manufacturers are concerned, is because of branding/marketing/education. If some n00b chooses TI over Maxim IC because it comes in DIP, he might end up buying hundreds of thousands of (SMD) TI parts in the future, just because that's what he is familiar with.
The reliability of QFN mounting is quite variable. If people take the thermal stress issues seriously during soldering, as people normally do with BGA packages, results can be very good. If you fail to get things evenly heated during the soldering process you can get very poor results. So, don't expect most hand-soldered QFNs to achieve wonderful reliability. The thermal expansion of the QFN package material has been chosen to be close to that of FR4. If you tried to use any other PCB material you might find reliability problems with QFNs.QuotePS. Also why TSSOP? They are harder to hand solder than DFN/QFN. The gull wing sucks and stores solder, so bridging happens all the time with short pad extension. With QFN, I can do 0.2mm extension per side and they still don't bridge.At least one practical reason is that QFN are more susceptible to cracked solder joints due to board flex and/or thermal expansion/cycling. The difference has been tested. And regarding deep thermal cycling, apparently gull wing chips can be more than an order of magnitude better in some of these tests.
It's nowhere near fully automated TH component placement. All it places are axial parts with 2 leads, like resistors.I visited a TV manufacturing plant from Nokia in Germany in the early 90's and even back then they placed all through-hole components automatically. The machines would even detect a failure, pull the part back out (bin it) and re-insert a new one. I'd say those through-hole placement machines were more impressive than those newfangled SMT p&p machines 8)
The SMT parts were glued to the solder side and the whole PCB would then go through the wave soldering machine.
Yes, they are truly impressive; the Panasonic AV132 puts components in like a good old chip-shooter places chips, at 30000 per hour or thereabouts. They just appear on the board faster than the eye can see!
Though I still see the center pad of QFNs as a right pain to get right every time; you generally have to get weird with your paste masks to get better than 99.9% yields, which generally leads me to fitting a dot when it's needed, vs a full pad.
It needs a bit more explanation, it seems. An automated thru-hole line consists of three kinds of machines. Typically there is first the axial component inserting machine (e.g. AV132); after that comes the radial inserting machine placing radial components, like small electrolytics, film caps and such. Then at last there is one, or usually more, odd-form machine placing the rest of the components, like connectors, transformers, big electrolytics and such. That kind of line is what you would be using if you were making thru-hole only boards.It's nowhere near fully automated TH component placement. All it places are axial parts with 2 leads, like resistors.
Clearly, it's ST's addition, and not an "ARM option". The name is "CCM" for core-coupled memory instead of "TCM".Thanks for the clarification and example part.
Really, entire world? Then why do new parts still appear on the market?
(https://www.electronicdesign.com/sites/electronicdesign.com/files/uploads/2014/12/Table-1-big.jpg)
Semiconductor MCU revenue market forecast - millions of dollars
These kinds of stats say nothing about the number of designs, which is a far more interesting number. 8 bit makes sense in very high volume if cost is an issue, so just looking at dollar figures gives a very incomplete picture. You can't really see where the market is going from it.Really, entire world? Then why do new parts still appear on the market?
Nice stats, which show that the 4 & 16 bitters are the ones that stagnate, but the 8b & 32b are growing at actually very similar CAGRs.
I guess my point was that claiming 32bit CPUs are "easier to use" than 8-bit chips is a bit ... misleading
you're going to end up using a relatively complex-to-use vendor-specific feature to get the job done.
These kinds of stats say nothing about the number of designs, which is a far more interesting number. 8 bit makes sense in very high volume if cost is an issue, so just looking at dollar figures gives a very incomplete picture. You can't really see where the market is going from it.You can at least assume that 8 bitters are sold/used in significantly higher numbers, because the amounts shown are dollars and 8 bitters are cheaper, especially the super-cheap end which goes into mass-produced devices.
The other way around: claiming that 8bit CPUs are easier to use is equally misleading. The peripherals in a microcontroller are a different story and you can find examples of complicated peripherals regardless of the number of bits in the CPU.
I guess my point was that claiming 32bit CPUs are "easier to use" than 8-bit chips is a bit ... misleading,
What is the thing with DIP?Mount DIP on one side and SMT between the legs on the other side! Chop the legs of a DIP to turn it into a SMT. Aviation and Space?
The other way around: claiming that 8bit CPUs are easier to use is equally misleading. The peripherals in a microcontroller are a different story and you can find examples of complicated peripherals regardless of the number of bits in the CPU.For a simple job, most 8-bits are easier than most larger MCUs; however, once you get stuck into real work, the additional processing power, ROM/RAM space and peripheral functionality can make some jobs much easier on bigger parts, as there's less need to optimise/work around the limitations of 8-bit devices.
PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.
I don't understand the lack of love for that series.
PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.It has some nice features, especially the very flexible peripheral pin mapping, and is good at low power, but 3.3v only (apart from one or two), and can be more expensive than some of the PIC32 range.
I don't understand the lack of love for that series.
For a simple job, most 8-bits are easier than most larger MCUs; however, once you get stuck into real work, the additional processing power, ROM/RAM space and peripheral functionality can make some jobs much easier on bigger parts, as there's less need to optimise/work around the limitations of 8-bit devices.
PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.
I don't understand the lack of love for that series.
I somewhat arbitrarily chose Atmel a number of years ago. It was partly based on the size and depth of information available on the internet at the time. After getting setup and dialed in for most of the basic functions I needed - I had no real interest in learning another product family - even if it was 'better'
Now, I am at a place where I am ready for more power, better peripherals, better everything. I have emotionally prepared myself for a learning curve to get that.
This is currently a strong view amongst engineering managers, and it makes no sense. Nobody has to learn much about a new core, whether it's 16 or 32 bits. They program in C, and only need to know a few simple things about the actual core. It's the peripherals that provide all the power, and take all the effort to learn. There is almost no commonality of peripherals from one silicon vendor to the next.PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.
I don't understand the lack of love for that series.
I somewhat arbitrarily chose Atmel a number of years ago. It was partly based on the size and depth of information available on the internet at the time. After getting setup and dialed in for most of the basic functions I needed - I had no real interest in learning another product family - even if it was 'better'
Now, I am at a place where I am ready for more power, better peripherals, better everything. I have emotionally prepared myself for a learning curve to get that.
And if you are going to upgrade and have a learning curve, why go halfway and pick 16bit from a single source when you can go all the way to 32bit and have numerous similar choices from many sources?
There is almost no commonality of peripherals from one silicon vendor to the next.CMSIS?
It's a good joke, but it doesn't seem like people fall for it very often. Once you get past the "we use standard ARM cores" step, and the engineering manager is happy to listen to you extol the virtues of all the unique stuff in your peripherals, they know the score. They know this will require their engineers to take some serious time to become fully conversant with your peripheral set. They know CMSIS will only cut it for very basic uses, losing access to all the clever stuff in your peripherals. They know whatever clever stuff they implement with your MCU will be completely non-portable. They still insist on that ARM core, though.There is almost no commonality of peripherals from one silicon vendor to the next.CMSIS?
I'll get my coat.... :-DD :-DD
There is almost no difference between the PIC24 and PIC32 range from the C programming user's point of view so it's not a big curve - many of the peripherals are the same or at least back-compatible supersets on the PIC32, so code written for the PIC24 will often need minimal changes to run on the 32PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.
Using a 32bit ARM micro means you'll very likely be using a peripheral library, and they're usually shit to work with.You can live without the library and you can be better off without it, but it will take some additional brain power to understand the registers. When vendors are given gigabytes of address space to place control registers, they tend to get lazy and verbose.
Just when you start to learn the library they release a new version and it breaks everything.
It seldom takes much brain power. It takes time. Often lots of it. Often patching together information from various sources, because no one source is complete. Often using the source code for the crappy library, because only the programmer of that library ever figured out some details that are not in the documentation.Using a 32bit ARM micro means you'll very likely be using a peripheral library, and they're usually shit to work with.You can live without the library and you can be better off without it, but it will take some additional brain power to understand the registers. When vendors are given gigabytes of address space to place control registers, they tend to get lazy and verbose.
Just when you start to learn the library they release a new version and it breaks everything.
PIC24 series is every bit as easy to use as 8 bit but you get vastly better chip.
I don't understand the lack of love for that series.
(Early) PIC24 had a lot of quirks; even in newer models the peripherals are dumb, and low MIPS anyway. PIC18 K42/K83 series have far better peripherals.
If you want 16bit, go dsPIC.
What I mean is reading through the chip's manual and getting a general grasp of what goes where and does what, and how the modules are associated with each other. This will take some brainpower to build up the dependency map. When coming to individual peripherals, spend some time poking around to figure out what does what, and with those understandings you can start writing your own drivers.
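The shape of that kind of hand-rolled driver is simple enough; a sketch with a made-up GPIO block at a hypothetical base address, just to show the pattern of mapping the manual's register offsets into C (every name and offset here is an assumption):

#include <stdint.h>

#define GPIO_BASE   0x40010800u                /* hypothetical base address */
#define GPIO_MODER  (*(volatile uint32_t *)(GPIO_BASE + 0x00))
#define GPIO_ODR    (*(volatile uint32_t *)(GPIO_BASE + 0x14))

static void gpio_make_output(unsigned pin)
{
    /* two mode bits per pin: 01 = general-purpose output */
    GPIO_MODER = (GPIO_MODER & ~(3u << (pin * 2))) | (1u << (pin * 2));
}

static void gpio_write(unsigned pin, int on)
{
    if (on) GPIO_ODR |=  (1u << pin);
    else    GPIO_ODR &= ~(1u << pin);
}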
dsPIC33 are all 3.3v parts and not very low power.
dsPIC33EV is a 5V part, I use it in a lot of projects. It needs 2 to 5 jumpers (depending on how many pins you are willing to sacrifice) to make it compatible with PIC18s, if you care about that.
But that doesn't matter, most new projects are going to be dsPIC33CH. sweet sweet dual core
About the PIC24 (KA)... I may still have PTSD, they were my very first 16bitter in high school and to this day I still have problems with them. And they are really "dumb" compared to the newer 8bitters, as you saw yourself.
Sure you have a core that's easier to deal with but the peripherals are just meh (to me)
Single cell Li-po and 3.3V parts... That is a pain to design. So far my best bet is actually abusing a battery bank chip, the TP5410, into pushing out 4.7V regardless of whether external power is connected or not (by default it switches the output off when external power is applied), then regulating it down to 3.3V using an AMS1117, SPX3819 or TPS52200. I am not completely sure about those buck-boost chips and their current capability.dsPIC33 are all 3.3v parts and not very low power.
dsPIC33EV is a 5V part, I use it in a lot of projects. It needs 2 to 5 jumpers (depending on how many pins you are willing to sacrifice) to make it compatible with PIC18s, if you care about that.
But that doesn't matter, most new projects are going to be dsPIC33CH. sweet sweet dual core
About the PIC24 (KA)... I may still have PTSD, they were my very first 16bitter in high school and to this day I still have problems with them. And they are really "dumb" compared to the newer 8bitters, as you saw yourself.
Sure you have a core that's easier to deal with but the peripherals are just meh (to me)
Yes, you're right, there are 5V parts too, but they need voltage regulators too; my point was about the additional BOM requirements for that: in single cell Li devices, you can forego that with these PIC24FxxKA/KL/KM devices. I regularly use the KM devices, which are pretty well set up with peripherals, both analogue and digital. If you need to do a bit of number crunching, they are a reasonable solution; PIC16s are almost all weak in that area. A bonus was the differential ADC, and the dual DAC with external outputs: there are plenty of use cases for that dual output DAC, and not many microcontrollers offer them, especially with external outputs. I had discounted the PIC16 as I didn't think it would have the number crunching horsepower, but with the PID accelerator my view changed. The math accelerator is a real PITA to learn how to use though!
Single cell Li-po and 3.3V parts... That is a pain to design.
I don't understand the lack of love for that series [PIC24]I think that in general, 16bit microcontrollers didn't get a lot of traction because the first "resource" that people were running out of was memory - either program memory or RAM. And the 16bit chips tended to have the same sorts of memory sizes and awkward bank-switching schemes as the 8bit chips. The sort of 256k/64k flash/RAM configurations you can commonly get for cheap with a 32bit CPU is practically unheard of in the 8bit world (and somewhat unpleasant to use if you can find it.)
The battery bank chipset I used comes with both a linear charger (similar to TP4056) and a fixed 5V step-up regulator in one SOIC8-1EP package; I just need to give it a charge current control resistor, a power inductor and a Schottky diode to make it all work. It is rated for 1A maximum output current.Single cell Li-po and 3.3V parts... That is a pain to design.
Yeah. I usually try and select parts that work down to 3V or below, and regulate the battery to this voltage with an LDO. (If you don't need your main 3V as a reference, you may even not regulate it at all, but in this case you have to use parts that can handle up to 4.2V, and 3.3V parts usually can't operate above 3.6V, so that severely limits your options.) Most of my battery-operated designs are powered at 3V or even lower (if I can), such as 2.7V or 2.5V. So no need for a step-up regulator, which has two drawbacks: reduced efficiency obviously (especially at low currents) and a current draw on the battery that increases as the battery empties - which is often bad for the battery's life and may even damage it if not protected properly.
So my question is - what are 8-bit uC still used for (in new designs).
As a hobbyst, is there still a point in using them
This was actually one of my applications: use the PIC as a switch mode charge regulator and low current power supply, as there wasn't a single device off the shelf that did what I wanted: gas gauge, charger, power supply. Having a device with a reasonably wide voltage range (2.0 to 5.5V for PIC24FV, 2.3 to 5.5V for PIC16F1xxx) made it well suited to 5V (USB) Li charging, and to running off the Li battery without external regulators. It did mean external MOSFETs though. The PIC16F161x has a pair of 100mA drive outputs, useful in lieu of gate drivers. I use a Cuk topology to allow an output voltage higher or lower than the Li source.
TP5410 is a combo charger and power supply; the only thing missing would be the gas gauge. That chip is actually a bit wasteful, as it is a linear charger combined with a fixed 5V output non-synchronous boost converter, but for low cost applications there are few other options than TP5410 + resistor + inductor + two diodes.
Of course. It's not all about price. 8 bits is still overkill for many, many jobs and can really simplify life.
4 bitters aren't so easy to get outside Japan these days.
And 4 bits better yet...
16bit ... 68HC16 ... never seen, never used :-/
Motorola developed two cores - the CPU16 (the core in the 68HC16) and the CPU32 (based on the 68000 core) - at the same time, intending to mix and match a set of modular peripherals and these two cores to suit a wide range of needs. It turned out most people were either OK with Motorola's existing HC05/HC08 or HC11 cores, or needed the performance of the CPU32.
Anything that runs off 5V.
moto HC05. Yuck. Any youngling that still yearns for the 8 bit era should be forced, as I was, to program these register starved memory deficient craptastic things. That will cure you.
The HC05 was one of the most successful MCU families. It was a key part of what made Motorola the biggest MCU maker in the world.
Anything that runs off 5V.
Turning that around and looking at starting requirements from the other end of the telescope, what if you want true 5V outputs and not just 5V tolerant, and must have a 32 bit core?
I know of:
Cypress PSOC4 arm cortex m0
Cypress PSOC5 arm cortex m3
Atmel/Microchip SAMC20 and SAMC21 automotive series cortex m0+; these are nice because additionally they are one of the few MCUs that have CAN FD
Are there any others? including obscure non-arm 32 bitters?
Oh I have first hand experience with the popularity part as well. Some time after Moto bought or merged with GM-Hughes the auto sector snarfed up all available supply and they went on allocation. Couldn't get one for love or money, almost sank/bankrupted the small manufacturer I was working for.
That would have been around 1994 or 1995, right? The issue wasn't acquisitions. They badly screwed up resource allocation across the board. Sales grew faster than they could scale their factories. Efforts to use outside fabs hit many stumbling blocks. Combine this with the fact that the needs of Motorola communications always came first, and everything else a poor second. They had a lot of angry customers who really liked a Motorola chip but were suddenly unable to get quantities of it as they reached mass production. The backlash from this is a big part of why Motorola fell from their number 2 position in the semiconductor world.
That's the birth of ECUs, when cars were moving from carburetor to EFI.
Only sort of contemporaneous; there were already ECUs designed in '82 or so, typically either an 8051 or an Intel 8096. I am reasonably certain the 8096 was designed initially for a Robert Bosch ECU. The first 8051 ECUs were not fast enough for a control loop every revolution, and instead had averaged control. I don't think anyone used a 6805 for engine control, but I could be wrong. It was the start of a massive influx of MCUs in other secondary control circuits, from radios to whatever. The first antilock brakes were already common, but the first generation was not done with digital control; it was some type of analog mechanical mechanism. Then Motorola came out with the 68xxx around 1986 (can't remember the part number), the initial MCU that offered the crazy complicated 4 channel timer control unit, before the first ColdFires. The Motorola app engineer taunted us in a seminar: "guess what application and customer those timer channels are for"
I think Motorola and GM did an ECU sometime in the mid 70's based on 6502
I doubt that Motorola used a 6502. Using any NMOS in the mid 70s was a big problem for a car engine. NMOS would only operate a few degrees below zero. It didn't take much cooling to make MPUs of the time, like the 6800 and 8085, crash. TI and others put a lot of effort into IIL versions of their processors, because it was a technology that worked over a wide temperature range. Eventually, the temperature range of MOS processes improved, but only a few suppliers, like Motorola and Hitachi, really got to grips with the qualification needs of the automotive industry. The cost structure of IIL was never really sorted out, and it quickly faded away.
That page says they deployed the ECU in 1981. That was about the time when MOS parts specified down to -40C or -55C were getting to volume production.
6802
http://www.chipsetc.com/computer-chips-inside-the-car.html
BTW: who is still making 5V designs nowadays anyway? Seems more like an old habit than a necessity.
If you mix analog and digital, or for directly driving MOSFET gates, it can be quite useful. Or when directly powered from a lithium battery.
For noisy environments, like automotive, white goods and motor control, 3.6V and lower voltage designs are much less robust than a similar 5V design would be. Most new designs do use lower voltages, because the designers have little choice. They don't like it, though, and they end up with longer design cycles trying to get their designs rock solid.
The whole world because of USB?
But you wouldn't use 5V USB power directly anyway, because the cables and connectors can have a significant voltage drop. Using an LDO to get to 3.3V is a much safer bet.
Most modern "5V" MCUs can work in wide voltage range. Therefore it's a non issue.But you wouldn't use 5V USB power directly anyway because the cables and connectors can have a significant voltage drop. Using an LDO to get to 3.3V is a much safe bet.BTW: who is still making 5V designs nowadays anyway? Seems more like an old habit than a necessity.The whole world because of USB?
I understood that QFN is actually one of the most reliable packages from a mechanical point of view. I think I read about this in a paper somewhere, but I don't think I'd be able to find that again. I stand to be corrected.
Wraper might have more info on this. I googled it after he claimed that you want to "float" a QFN off the board while hand-soldering on a specific thickness of solder because of thermal cycling. The first study/paper I found showed that QFN could have an MTF of only 3000 cycles of going from something stupid like -30C to 100C. QFP might have an MTF of 30,000 in the same test. Also, it was dependent on how big the die is compared to the package size, since the epoxy in the chip has a coefficient of expansion fairly similar to FR-4. So a tiny die would be better able to withstand flex from temp change in a given package size. And if you follow that logic, for any QFN available in two different sizes, the larger might be more robust in this regard.
QFN is reliable, so long as you have soldermask spacing, i.e. not pad flat to copper with no gap for the solder; this gives it some distance to keep forces inside the elastic zone of deformation. Similar to BGA: if you didn't have the balls at the right height, they'd just rip pads off either the chip or the board when flexed.
@Fungus: Never done it myself, but I've run plenty of them off CR2032s (which can easily dip down to 2.0V under any load).
@Fungus: First of all, it's not that "wildly varying", and secondly there is a huge margin on the bottom side.
Did you actually try to use a 1.8V Atmel device at 1.8V? I know for a fact it won't work reliably, because that is the absolute minimum voltage. A few tens of millivolts less and you'll see devices starting to fail in subtle ways. Also, do you want to run a microcontroller circuit from a wildly varying supply voltage? Sounds like a recipe for disaster to me.
Wiggle a poor/bad connector a little bit and you'll see a lot of spikes on the supply. What will happen with the logic levels between two chips on that board?
What about spikes on data pins? Power is at least decoupled.
I guess that after 14 pages the conclusion is that there is a point to 8 bit microcontrollers.
Even 4 bit processors have at least 10 bits of program counter. The key thing is the software can only count to 15. :)
For sufficiently low-performance applications (of which there is an untold number), what reasonable person could say otherwise?
I do note however that I never saw what I would call a true 8 bit processor -- they are all 8/16 hybrid designs with 8 bit ALUs and some 8 bit and some 16 bit registers.
In any *other* bitness for CPUs -- 16, 32, 64, 128 -- the reference number is the size of the directly-accessible address space and therefore of any registers that hold addresses, including the Program Counter.
The traditional 1970s "8 bit" processors *all* had at least some 16 bit registers internally and almost all brought out a full 16 bit address bus on their packages e.g. 2650, 8080, z80, 6800, 6502, 8051. The SC/MP only brought out a 12 bit address bus but it had a 16 bit PC and three 16 bit pointer registers and A12-A15 were multiplexed onto the data bus. The RCA 1802 had 16 registers of 16 bits each and a 16 bit address space, and it multiplexed the 16 bit addresses onto two phases of an 8 bit address bus.
I do note however that I never saw what I would call a true 8 bit processor -- they are all 8/16 hybrid designs with 8 bit ALUs and some 8 bit and some 16 bit registers.
Are we going for the "No true Scotsman" now? ;D
I do note however that I never saw what I would call a true 8 bit processor -- they are all 8/16 hybrid designs with 8 bit ALUs and some 8 bit and some 16 bit registers.
PIC10F320
11 bit program address space, and the PC itself is 9 bits within a page. That's architecture. True, that particular chip only has 256 words of program space implemented, but you could always do that with an 8080 or 6502 too, if you wanted (and it was common to do so with the 1802).
Going back further, the PDP8 of course had a 12 bit address space and crude banking to extend it, very similar to PIC.
You mean ISA address bus? I wouldn't single out 8-bitters. The mismatch is rather common:
16-bit 8086 had 20-bit address bus, PIC24 has 23-bit address bus.
Or the other way around:
32-bit MIPS has 29-bit address bus, 64-bit ARM has 48-bit address bus, x64 has 48-bit address bus.
Re 5V designs, not mentioned yet: Toshiba has some pretty substantial CM3 ARM chips that will run on 5V.
Usually architectures are ranked by the apparent size of their ALU. ("apparent", because it seems like the Z80 had a 4bit ALU, and just always did at least two operations...)
How would you call a configurable soft processor that has 16-bit registers and program counter but selectable 8-bit or 16-bit ALU? (SBT-16 core)
I'd rate the 8bit PICs (PIC16f...) as true 8bit architectures. 8bit ALU, 8bit register(s), less than 8bit address space for data (until you add the banking.) The PC and instruction word are wider, but "Harvard architecture", so those are pretty separate from everything else. (And the baseline PICs with essentially 9bit PCs (plus another bank) and only 8 bits of address in a "call" instruction!)
The NXP ones are the "Kinetis E" series.
So they're not uncommon. I think the surprising bit is that none of the "hobbyist" vendors (Arduino, Adafruit, Sparkfun, PJRC, etc) seem to have jumped on any of the 5V 32bit CPUs...
4 bit MCUs were an excellent choice for many early MCU applications, because so many applications wanted them to behave as decimal machines (e.g. in calculators). They only really needed to count up to nine. :) If you've ever had access to much source code for applications on the TMS1000 (the first successful MCU), it's surprising how often applications like shower heaters and toaster controllers, where there was no decimal UI, made the chip mimic a BCD machine.
As long as you've got either AND or OR plus NOT (possibly combined, e.g. NAND), or the ability to test and branch based on a bit, even a 1-bit ALU lets software count to, or add/sub/mul/div, numbers as large as there are bits of memory to store them (see the sketch below).
Slowly.
But you still need a decent address space, especially for the program.
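To illustrate the point, here is a C model of how a 1-bit ALU gets an 8-bit add done, one full-adder evaluation per bit with the carry carried between steps; a sketch of the idea, not any particular chip's logic:

#include <stdint.h>

/* Bit-serial addition: what a 1-bit ALU does for an 8-bit add. */
uint8_t add_bit_serial(uint8_t a, uint8_t b)
{
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t ab = (a >> i) & 1;
        uint8_t bb = (b >> i) & 1;
        sum |= (uint8_t)((ab ^ bb ^ carry) << i);   /* sum bit */
        carry = (ab & bb) | (carry & (ab ^ bb));    /* carry out */
    }
    return sum;
}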
You mean ISA address bus? I wouldn't single out 8-bitters. The mismatch is rather common:
16-bit 8086 had 20-bit address bus, PIC24 has 23-bit address bus.
Or the other way around:
32-bit MIPS has 29-bit address bus, 64-bit ARM has 48-bit address bus, x64 has 48-bit address bus.
No, not address bus. That's irrelevant and changes frequently between software compatible processors. It's the size of the registers that counts, in particular the PC and registers used as data pointers.
How would you call a configurable soft processor that has 16-bit registers and program counter but selectable 8-bit or 16-bit ALU? (SBT-16 core)
If it has 16-bit registers and a 16-bit ADD instruction then it's 16 bit.
z80 has some 16 bit registers and 16 bit add instructions. Specifically, you can add any of BC/DE/SP to any of HL/IX/IY, or add HL/IX/IY to themselves. And yet the z80 is commonly referred to as an 8 bit processor.
the so-called 8 bit processors were all 8/16 bit hybrids. (except the very smallest PICs. I'll give them that)
What you're describing is what we call "CISC" - the Z80 was a "Complex Instruction Set" design, not a "hybrid".
The clue is in the fact that "ADD A,L" takes 4 clock cycles but "ADD HL,DE" takes 11.
Internally it's all 8 bits but it can perform a sequence of operations for a single instruction.
There was nothing 16-bit about (eg.) a MOS 6502.
That's one particular implementation. You could build a new implementation of the z80 instruction set today that did both 8 bit and 16 bit operations in 1 clock cycle.
Addresses are 16 bits. The PC is 16 bits. You can store 16 bit pointers in two consecutive memory locations in zero page and with a single LDA (zp),Y instruction fetch them into an internal unnamed 16 bit register, add the Y register to that register (*with* carry between pages), and use that register as the address of a byte to load (or store, or add etc).
nb. Nobody used "LDA (zp),Y" because it was too slow, we preferred self-modifying code instead (modify an "LDA absolute,Y" instruction).
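For anyone who never wrote 6502 code, the addressing mode being discussed boils down to this; a C model of the memory accesses LDA (zp),Y performs, with mem[] standing in for the 6502's 64 KB address space:

#include <stdint.h>

static uint8_t mem[65536];   /* stand-in for the 6502's address space */

/* C model of the 6502 "LDA (zp),Y" mode: a 16-bit little-endian pointer
   lives in two zero-page bytes, and Y is added to it with carry across
   pages before the final fetch. */
uint8_t lda_zp_indirect_y(uint8_t zp, uint8_t y)
{
    uint16_t base = (uint16_t)(mem[zp] | (mem[(uint8_t)(zp + 1)] << 8));
    return mem[(uint16_t)(base + y)];
}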
SBT-16 has 16-bit registers which are byte addressable for narrower instructions, and its ALU instruction can do 16-bit single, 8-bit single or 8-bit x2 SIMD. Is it 8-bit or is it 16-bit?
Woz used the (zp),Y addressing mode five times in SWEET16, widely acknowledged as some of the best 300 bytes of 6502 code (not to mention one of the best bytecode interpreters) you'll find anywhere.
Argument from authority? :popcorn:
a) It fits the job really well in that instance, and
b) It was in ROM - no self-modifying code possible!
c) Note that it's a "solution to the problem of handling 16 bit data, notably pointers, with an 8 bit microprocessor" :P)
PS: I met Woz a couple of years back. :-+
To my eyes SBT-16 is a genuine 16-bit processor, and SBT-64 is a genuine 64-bit processor. What I feel here is that people are getting program counter length, data register length and ALU length confused, so I threw in those SIMD-heavy processors to get some clarification.
Figure it out by yourself! There's enough information in this thread for you to do so. :popcorn:
This whole thread started with the observation that by modern definitions an N-bit ISA is one which has N-bit pointers, and the only exception to this is that the so-called 8 bit processors actually had 16 bit pointers.
There's another argument that it should be the size of the data bus, but there's quite a few chips that had a half-size data bus to make the system cheaper to build. eg. Intel 8088, Motorola 68008. Those chips need to do two separate reads for every 16-bit memory access that the "full-width" versions do. Both those chips are classed as 16-bit so that's not it, either.
And Intel 80386SX too, a full 32-bit processor choked to a 16-bit data bus and 24-bit address bus.
(Somehow I have the feeling that 80386SX can make a better 32-bit retrocomputing processor than MC68SEC000 since that chip has internal MMU, easier to find compiler and existing OS. 80386SL even more since that is a 80386SX based SoC. 80386SL + SeaBIOS + TianoCore + Linux 2.6 = win?)
To me it's about the ALU and data processing, not the width of the address bus.
Don't forget the Z80. I don't think anyone will argue that it is something other than an 8-bit microprocessor, but its ALU is only 4 bits wide.
The 68k for instance was indeed a mix of 16-bit and 32-bit internally
Viewed as a black box: The Z80 ALU has 8 wires leading into it and 8 wires leading out.
Yeah, and it had a couple 16-bit registers.
Now was it a 4/8/16-bit CPU? ;D
Oh, Arise-v2 (my softcore) comes with 32 registers of 32bit each, plus four coprocessors, and one of these cops (the Cordic unit) is 24bit, but the DSP engine is 60bit, and the TLB is a 64bit address space(1), and on the external bus there are only 24 bits(2), hence ...
... is it 24bit? 32bit? 60bit? 64bit? does it have an identity crisis and need psychotherapy to sort it out? :-DD
The published block diagrams of the Z80 are simplified. Where are the muxes needed to swap DE and HL quickly? 8 bit access to the low and upper halves of IX and IY?
I can actually tie an 80387SX to this too if I need an FPU, and the 16MB RAM address space does allow for some curiosities. (Still, for an i386 retrocomputer I would much prefer an Am486DX4-100 on a 50MHz bus tied to an XC6SLX16, which allows me the full 4GB address space and the use of 1GB of DDR3 SDRAM.)
Maybe you want this: https://en.wikipedia.org/wiki/Intel_80386EX
:-DD
To make matters worse, what is now often perceived as "32-bit" or "64-bit" in the popular language has more to do with the addressing width than the data bus width.
In most current computers data registers and address registers have the same length. As for bus width... even today most amd64 processors don't implement the full 64-bit address space, while back in P5 days x86 was already on a 64-bit external data bus. While AArch64 does have a full 64-bit AXI interface, my search for AArch64 chips has led me to a few too many Cortex-A53s choked to a 32-bit AXI and capped at 3GB RAM.
If you are measuring only the longest register visible to a programmer, there were 16384 bit processors in the 70s.
If we only consider registers, that's yet another story. ;D
Intel processors have had 128-bit and even 256-bit data registers for a long time now. Even though those registers can't be used with quite all the same operations as the 64-bit ones, you can still do a great deal. In just one clock cycle. So are Intel processors 256-bit? :-DD
Well, we are talking integer registers. The FPU is a whole other beast and is usually excluded.
Is that much RAM legal in a 'retro'computer?
Using the internal DRAM controller of the XC6SLX16-2CSG256C I can implement a DDR3 memory controller with a 32-bit interface. Now connect that to two 256M x16 DDR3 chips and there goes the 1GB RAM.
This whole thread started with the observation that by modern definitions an N-bit ISA is one which has N-bit pointers, and the only exception to this is that the so-called 8 bit processors actually had 16 bit pointers.
That wouldn't be my definition at all. To me it's about the ALU and data processing, not the width of the address bus.
Having less wires going into the ALU directly affects processing power and speed of execution, having more or less address lines makes no difference at all.
the only exception to this is that the so-called 8 bit processors actually had 16 bit pointers.
Not true. The 8086/80286 had 16-bit registers, a 16-bit ALU and 20-bit pointers (24-bit pointers on the 80286).
Plus: Many, many 32 and 64 bit processors don't have the same number of address lines as bits in the address registers. eg. Are the Motorola 68000 and Intel 80386SX 24-bit processors? I don't think you'll find many people arguing that case.
The 8086/286 have 16 bit pointer registers. The total address space is 20 bits because of a segmentation scheme. Such schemes were used at least as far back as the PDP11, which let you put a lot of RAM (megabytes) in a computer but gave each program (convenient) access to 64 KB of it at a time. It is still a 16 bit machine.
The relevant thing is the address space a program can conveniently use.
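The scheme is easy to state in code; a sketch of how the 8086 forms a physical address from a 16-bit segment and a 16-bit offset:

#include <stdint.h>

/* 8086 real-mode address formation: segment * 16 + offset gives a 20-bit
   physical address. Many segment:offset pairs alias the same byte, which
   is why a program still only "sees" 16 bits at a time. */
uint32_t phys_addr_8086(uint16_t seg, uint16_t off)
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFF;  /* wraps at 1 MB (no A20) */
}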
The relevant thing is the address space a program can conveniently use.
Most of the world would disagree with you, including the people who manufacture the chips.
I work for a small company that designs and manufactures CPUs, mostly at the moment ones in the 32 or 64 bit "microcontroller" class. Mostly I write compilers for them, but I also get a little involved in design of new instructions and hardware implementing them.
Before that I worked for a gigantic company that also designs and manufactures CPUs, also mostly writing compilers, but also being sometimes involved in the CPU design.
You can 'conveniently use' CPUs with paged memory via bios calls, etc.
I like the use of quote marks there. I don't think I've ever seen as much loathing of a computer design as you get when you push a paged solution, or as much relief as you get when you tell people you're going to stretch the address registers to solve their memory constraints.
Using more memory than the natural hardware size of your memory addresses (16 bits in the case of the 6502, z80 etc) requires absolute contortions on the part of the programmer.
The relevant thing is the address space a program can conveniently use.
Most of the world would disagree with you, including the people who manufacture the chips.
I wouldn't.
The SIMD registers are not really registers, but a combination of smaller largely independent registers. Even though they have 512-bit SIMD registers now, the longest addition or multiplication you can perform is still only 64-bit. And that's what determines the "bitness" of the CPU.
To get back to the topic, all 8-bit processors are not equal by any means. Some will even get you more effective performance and lower power draw than some 16-bit processors, whereas other 8-bit processors are usable only for the simplest tasks and actually draw more power than even a lot of more recent 32-bit ones.
This is a question of technology. 32-bit processors typically use smaller transistors, which are faster and consume less power. 8-bit processors are usually made of bigger transistors, which consume more power and will be slower. If you built both on the same technology, there's no doubt that you could achieve lower power consumption and faster speeds with 8-bit processors compared to 16-bit processors, or with 16-bit processors compared to 32-bit processors.
Of course there is no 256-bit or 512-bit ALU, so ALU operations on them are limited to operations on chunks of the registers, but that doesn't make them any less of registers.
Maybe this can be tested on an FPGA. Throw various different processor cores at the same FPGA platform and run the same test program. The same FPGA platform means there would be no variance of process node.
Sure. The point was not what is technologically possible, but rather what's actually available on the market.
As for speed, yes you can obviously achieve higher clock speeds on a given process node with a simpler architecture and smaller registers. As for the resulting performance, it's a trade-off. Higher clock speeds on a given process node for an equivalent "processing power" may actually draw more power than a more complex/wider architecture running at lower clock speeds. The sweet spot is not necessarily trivial to find IMO.
A 32x32 multiplier takes 16 times more silicon than an 8x8 multiplier. What would you rather have: one 1-GHz 32-bit CPU, or 16 1-GHz 8-bit CPUs running independently and controlling the peripherals? IMHO, for the majority of tasks the second would be preferable.
Unfortunately you don't get to make that trade-off as instruction decode and control takes a large proportion of the area of a CPU, and doesn't vary much depending on whether the registers and ALU are 8, 16, 32 or 64 bits. For example, the decode and control on a 64 bit RISC-V Rocket core is just about identical to that on a 32 bit RISC-V Rocket core.
So, it's more likely that you'd actually get to choose between one 1-GHz 32-bit CPU, or two or maybe three 1-GHz 8-bit CPUs.
Three points about the multiplier argument:
3) a 32x32 multiply can be done with three 16x16 multiplies. Each 16x16 multiply can be done with three 8x8 multiplies. So actually you need nine 8x8 multipliers, not sixteen.
Also, few CPUs do single-cycle multiply. It's more likely to use a 16x16 multiplier three times and take three clock cycles. So that's only three times the area of an 8x8 multiplier.
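The three-multiply trick in point 3 is Karatsuba's decomposition; a C sketch of a 32x32 -> 64 multiply built from three 16x16 products. Strictly, the middle product has 17-bit operands, handled here with wider arithmetic (hardware spends a little extra on that, too):

#include <stdint.h>

/* Karatsuba: 32x32 -> 64 using three 16x16 multiplies. */
uint64_t mul32x32(uint32_t x, uint32_t y)
{
    uint32_t x1 = x >> 16, x0 = x & 0xFFFF;
    uint32_t y1 = y >> 16, y0 = y & 0xFFFF;
    uint32_t z2 = x1 * y1;                       /* high 16x16 -> 32 */
    uint32_t z0 = x0 * y0;                       /* low  16x16 -> 32 */
    /* (x1+x0) and (y1+y0) can be 17 bits; their product is x1*y0 + x0*y1
       after subtracting the other two products. */
    uint64_t z1 = (uint64_t)(x1 + x0) * (y1 + y0) - z2 - z0;
    return ((uint64_t)z2 << 32) + (z1 << 16) + z0;
}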
Unfortunately you don't get to make that trade-off as instruction decode and control takes a large proportion of the area of a CPU, and doesn't vary much depending on whether the registers and ALU are 8, 16, 32 or 64 bits. For example, the decode and control on a 64 bit RISC-V Rocket core is just about identical to that on a 32 bit RISC-V Rocket core.
So, it's more likely that you'd actually get to choose between one 1-GHz 32-bit CPU, or two or maybe three 1-GHz 8-bit CPUs.
Most of the control that you're referring to is not required in light-weight cores:
- There's no pipeline, so no need for pipeline control.
- There's no cache, so there's no need for cache and big memory controllers.
- We won't worry about code density (because the performance doesn't depend on decoding speed), so we can have really long instructions, such as 32-bit. This way the instruction will come out already decoded to a great extent.
- We'll also get rid of the bus with all the bus arbitration and collisions, and let our small cores communicate through dedicated FIFOs (see the sketch after this post).
In the end, it'll be much less silicon for each core. Maybe not 16, but close. More importantly, you may be able to run the whole thing faster and will end up with 2 GHz 8-bit cores, or even 3 GHz 8-bit cores. This would certainly beat the current behemoths.
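For what it's worth, one of those dedicated core-to-core FIFO links could look something like this; a single-producer/single-consumer ring buffer sketched in C, with sizes and names purely illustrative:

#include <stdbool.h>
#include <stdint.h>

#define FIFO_SIZE 16u                     /* must be a power of two */

/* Each index is written by one side only, so no lock is needed as long
   as byte loads/stores are atomic. */
typedef struct {
    volatile uint8_t buf[FIFO_SIZE];
    volatile uint8_t head;                /* written by the producer core */
    volatile uint8_t tail;                /* written by the consumer core */
} fifo_t;

bool fifo_push(fifo_t *f, uint8_t v)      /* producer side */
{
    uint8_t next = (f->head + 1) & (FIFO_SIZE - 1);
    if (next == f->tail) return false;    /* full */
    f->buf[f->head] = v;
    f->head = next;
    return true;
}

bool fifo_pop(fifo_t *f, uint8_t *v)      /* consumer side */
{
    if (f->tail == f->head) return false; /* empty */
    *v = f->buf[f->tail];
    f->tail = (f->tail + 1) & (FIFO_SIZE - 1);
    return true;
}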
Either you pipeline it, in which case each stage requires its own multiplier, or you don't, in which case the next multiply instructions will be sitting there waiting for their turn.
ARM manuals say the M3/M4 has a single-cycle multiplier
There is this matrix adder design for an integer multiplier; a lot of adders are involved, but it is a fully combinatorial unit.
how is it implemented?
ARM manuals say the M3/M4 has a single-cycle multiplier. But it's coming out considerably slower than the chip with the 4 cycle multiplier on benchmarks where multiplication is important (as seen by software emulation being much slower)
As in most real-case uses of multiply, Coremark (for example) is generally loading two values from memory, then multiplying them, then storing the result, and then looping. There are plenty of other instructions around the multiply, so a four cycle latency is no big deal.
While compiler optimisers are capable of helping out with things like loop unrolling, re-implementing multiple chained algorithms to reduce loads and stores is not something a compiler optimiser is capable of.
That turns out not to be the case. On simple processors the control is the majority of the chip.
For example here's a labelled photo of a Z80
There is no reason for a 2x or 4x wider data path to run 50% more slowly. The only place it will make any difference at all is carry propagation in adders, but that's logarithmic if done correctly. Basically, a 32 bit adder has two gates more delay than an 8 bit adder. And the adder isn't usually the clock-limiting factor anyway.
Multiplies are rare. Two multiplies in a row are *very* rare.
But usually there is a limit to the number of numbers you can sum; how can they perform 40 add operations in parallel while respecting the timing for the data being stable?
How do they do it? Very carefully.
It's more likely they use a Wallace tree than a Booth tree. Booth is naturally 2's complement, and you need extra fudging to do unsigned multiplies. Wallace is naturally unsigned, and you need extra fudging to do 2's complement multiplies. When you need to do both types of multiply (which an ARM does) Wallace + fudging logic tends to work out better. For 2's complement only work (e.g. most DSP) Booth is the winner.
AFAIR ARM uses some kind of Booth multiplier, basically a bunch of adders and some trickery.
Might not be exactly the algo they use, but close enough: http://www.ellab.physics.upatras.gr/~bakalis/Eudoxus/MBM.html
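For the curious, here is a software model of the radix-2 Booth add/shift sequence behind such multipliers; it mimics the algorithm's steps, not the actual gate-level circuit ARM uses, and it assumes arithmetic right shift of negative ints (true on common compilers):

#include <stdint.h>

/* Radix-2 Booth multiply for 8-bit signed operands: examine the two
   lowest bits, add/subtract the multiplicand, then shift right. */
int16_t booth_mul8(int8_t m, int8_t r)
{
    const int n = 8;
    int32_t A = (int32_t)m << (n + 1);     /* multiplicand, aligned above P */
    int32_t S = (int32_t)-m << (n + 1);    /* its negation */
    int32_t P = (int32_t)(uint8_t)r << 1;  /* multiplier plus appended 0 bit */
    for (int i = 0; i < n; i++) {
        switch (P & 3) {
            case 1: P += A; break;         /* 01: add multiplicand */
            case 2: P += S; break;         /* 10: subtract multiplicand */
        }                                  /* 00 and 11: do nothing */
        P >>= 1;                           /* arithmetic shift right */
    }
    return (int16_t)(P >> 1);              /* drop the appended bit */
}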
The timing problem is O(logN). 64 bits isn't much more difficult than 16 bits.
Actually it is easier to implement 1's complement calculations than 2's complement. Someone I knew in the distant past wrote a PhD thesis on that.
Instead of traditional adders, they expand one of the operands into sums of powers of two. Multiplying by a power of two is just shifting, which can be done without propagation delay using cross wiring, and the matrix multiplier uses a one-counter (a special type of encoder) to reduce the number of adders actually needed.
That turns out not to be the case. On simple processors the control is the majority of the chip.
This can be reduced if you set it as a design goal.
For example here's a labelled photo of a Z80
If you want to use existing processors, look at this:
https://en.wikipedia.org/wiki/Transistor_count
Z80 had 8500 transistors. ARM Cortex A9 has 26 million transistors. You could have over 3000 Z80 cores instead of one ARM core, all running at the same clock speed or faster. This is a little bit more than 16, isn't it?
The A9 has a lot of transistors because it's a complex out-of-order CPU, has a lot of cache, has FPU, has MMU.
Virtually nothing to do with being 32 bit vs 8 bit.
with the software made able to handle "negative zero" :D
The A9 has a lot of transistors because it's a complex out-of-order CPU, has a lot of cache, has FPU, has MMU.
Virtually nothing to do with being 32 bit vs 8 bit.
You don't seem to like my examples. Would you pick a 32-bit processor which has something to do with being 32-bit?
with the software made able to handle "negative zero" :D
Is zero negative or positive?
  0001 0110      22
+ 1111 1111      −0
===========    ====
1 0001 0101      21    (an end-around carry is produced; it is set to '1')
+ 0000 0001       1    (add the carry back in)
===========    ====
  0001 0110      22    (the correct result: 22 + (−0) = 22)
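In C the same end-around-carry trick looks like this (an illustrative 8-bit sketch; one carry pass is enough at this width):

#include <stdint.h>

/* 8-bit one's-complement addition with end-around carry. */
uint8_t oc_add8(uint8_t x, uint8_t y)
{
    uint16_t s = (uint16_t)x + y;    /* 9-bit raw sum */
    s = (s & 0xFFu) + (s >> 8);      /* add the end-around carry back in */
    return (uint8_t)s;
}

/* oc_add8(0x16, 0xFF) returns 0x16, i.e. 22 + (−0) = 22, as above. */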
with the software made able to handle "negative zero" :D
Is zero negative or positive?
You have 4 hours. >:D
Are there any 3 cent 32 bit microcontrollers? >:D
Maybe not quite three US cents...
Sure. An actually small 32 bit ARM such as the Cortex M0 or a simple 32 bit RISC-V would be appropriate to compare to the likes of Z80.
It's hard to know much about ARMs but in fully open RISC-V land you have for example https://github.com/SpinalHDL/VexRiscv which can be configured as RV32I at 346 MHz and 0.52 Dhrystone MIPS/MHz on an Artix 7 using 481 LUTs and 539 FFs.
LUTs don't convert conveniently to equivalent gates, but somewhere between 6 and 24 is about right, and probably 12 is a good average. So that's somewhere between 3000 and 12000 gates for the LUTs with 6000 probably being a good guess. D flip-flops are worth 4 gates each I guess, so that's 2000. Total maybe 8000.
Note that the 32 bit ARM1 is listed on the Wikipedia page you referenced as having 25000 transistors, about 3x the z80.
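For what it's worth, that back-of-envelope arithmetic spelled out (the 12-gates-per-LUT and 4-gates-per-FF factors are rough rules of thumb, my guesses rather than vendor figures):

#include <stdio.h>

int main(void)
{
    const int luts = 481, ffs = 539;      /* VexRiscv RV32I figures above */
    const int lut_lo  = luts * 6;         /*  ~2900 gate equivalents */
    const int lut_hi  = luts * 24;        /* ~11500 */
    const int lut_mid = luts * 12;        /*  ~5800 */
    const int ff_g    = ffs * 4;          /*  ~2200 */
    printf("LUTs %d..%d (mid %d) + FFs %d = ~%d gates\n",
           lut_lo, lut_hi, lut_mid, ff_g, lut_mid + ff_g);
    return 0;
}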
Sure. An actually small 32 bit ARM such as the Cortex M0 or a simple 32 bit RISC-V would be appropriate to compare to the likes of Z80.
There is a certain thing 8-bit processors suffer from: memory space. Certain programs eat RAM like candy (Google Chrome with more than a handful of tabs open), and an 8-bit core will quickly start to suffer even if it has 64-bit memory pointers.
Physically a LUT consists of 64 config bits which are selected by 6 address lines. Thus it's 63 muxes, which is a lot more than 12 gates. You may be able to get the same effect with discrete gates, or you may not. It's like data compression: some data compresses well, some data doesn't compress at all. All you can say for sure is that if all 6 inputs are used, you need at least 6 gates. Therefore comparing LUTs to gates is not a good idea.
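Functionally a 6-input LUT is just a 64:1 mux over its configuration bits, something like:

#include <stdint.h>

/* 64 config bits selected by a 6-bit address: one 64:1 mux,
   i.e. a tree of 63 two-input muxes in silicon. */
static inline unsigned lut6(uint64_t config, unsigned addr)
{
    return (unsigned)((config >> (addr & 63u)) & 1u);
}

/* e.g. a 6-input AND is the config with only bit 63 set:
   lut6(1ULL << 63, 63) == 1, any other address gives 0. */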
If you compare to FPGA based cores, such as Picoblaze, your basic RV32I is equivalent of 5 Picoblazes (however I don't think Picoblaze can run at 350 MHz).
However, this is a very basic, very feeble processor. If you start adding features (look at the table you posted: https://github.com/SpinalHDL/VexRiscv ), by the time you add enough of them to make it into a typical 32-bit processor you get to 2000 LUTs and the speed decreases to 183 MHz. Now it's equivalent to 20 Picoblazes, and your RISC is running slower than a Picoblaze.
This is a very interesting table, by the way - adding features consumes lots of logic, but performance growth is not great - your fastest RISC is not even 50% faster than the feeble 346 MHz model.
And that's RISC-V, the best 32-bit CPU humanity could come up with. If you look at others (such as ARM or Microblaze), the pattern will be the same, but the performance will be even lower.
Note that the 32 bit ARM1 is listed on the Wikipedia page you referenced as having 25000 transistors, about 3x the z80.
Do keep in mind that one NMOS transistor can map to multiple CMOS transistors. The 25000 CMOS transistors in the ARM1 versus the 3500 NMOS transistors in the 6502 work out to the ARM1 having about three times the gate count.
If we use the smallest 32-bit processor (ARM1), shouldn't we pick the smallest 8-bit processor for comparison? The table shows 3500 transistors for 6502. Your ARM1 is roughly 7 times bigger.
Are there any 3 cent 32 bit microcontrollers? >:D
Are there any 3 cent 32 bit microcontrollers? >:D
RISCV?
There won't be any ARM chips, the ARM royalties will be more than 3 cents.
Are there any 3 cent 32 bit microcontrollers? >:D
How about 1 cent, and flexible?
$0.01 Flexible Plastic ARM Processor by PragmatIC
Except it's not 1 cent, it's not a real product, only a prototype, and even the ARM licence costs more than 1 cent. A few cents sometime in the future, maybe, just as the guy on the right said.
Except it's not 1 cent, it's not a real product, only a prototype [...]
They made an ARM1. That is licence free. To make a commercial product they have licence-free options, like RISC-V. People usually can't escape all royalties, because they will have to use some silicon IP, like a flash block. Since these people are not using any silicon technology, they will be creating 100% of their process-related IP.
They made an ARM1. [...]
It is a Cortex M0, not ARM1. In the video it was even said that it will become commercially viable when they can make a single chip a few mm in size, which means it's not commercially viable currently. And who would make a real product based on ARM1 to begin with?
If it were ARM1 or ARM2 it could have been 1 cent: those cores are long past their patent expiry dates, and if you recreate them using a free implementation (e.g. the open source Amber core (https://opencores.org/project/amber), which implements ARM2), there is nothing you need to pay for other than the fab. If you can squeeze the economy of scale up, you get to that price point.
EDIT: And anyway, it's not something you would use in a low, and likely medium, quantity product. It's a niche thing, flexible electronics. You won't solder this thing onto a PCB.
If it were ARM1 or ARM2 it could have been 1 cent [...]
And no one would buy them, because they would suck; therefore it's not possible to make them in the quantities needed to reach that low a cost.
If it were ARM1 or ARM2 it could have been 1 cent [...]
Although if you use RV32IMC you would get the same benefit.
If it were ARM1 or ARM2 it could have been 1 cent [...]
ARM7TDMI was 1994. That should be out of protection now. Then why are the LPC2103 and AT91SAM7S128 still that damn expensive?
And no one would buy them because they would suck [...]
I don't think I've ever seen as much loathing of a computer design as you get when you push a paged solution, or as much relief as you get when you tell people you're going to stretch the address registers to solve their memory constraints.
Exactly this. Also lower volumes.
Interesting. $8 - $10.
Is there any good reason to use one of those rather than a Cortex M, other than "we already have a product and don't want to redesign it"?
Then why are the LPC2103 and AT91SAM7S128 still that damn expensive? [...]
Probably because these are old MCUs and are now being milked to support legacy designs. They used to be a whole lot cheaper a decade ago.
Probably because these are old MCUs and are now being milked to support legacy designs. [...]
They are probably cheap now, if you are a serious user still running a production line.