Ada!? My wife asked why I burst out laughing. Unfortunately she doesn't get humor.
That article is from 2010, and C++ has evolved since then. Would we see C++11 on MCUs? Probably. Ada, not so much, but maybe Erlang, Haskell, F#, Scala, or one of the many other functional languages coming out of the woodwork.
So, 5-10 years from now, what will the embedded world look like? Will there, for example, be 2-dollar MCUs containing 10 very fast CPUs with only rudimentary interface logic and SW-based peripherals? Wouldn't that be nice?
All the HAL/CMSIS crap is gone and you can decide for yourself what kinds of peripherals you need and how many.
Quote from: miguelvp
Current mobile chips already feature 8 cores or more. Will embedded MCUs follow that trend? That's the question at hand.
Myself, I don't think that is going to happen in the next decade; 5-10 years is really not that big a span of time for the MCU industry. But we are already seeing the migration to lower-voltage cores, and once they hit 200 mV, the only way to make more efficient, lower-power processors will be to add more cores.
Not sure what the physical limits are, but 5 GHz at 200 mV core voltage rings a bell, so in my opinion it's going to take more than 10 years for this paradigm shift to affect MCUs.
The problem then will be that the hardware will surpass what programmers can make use of with C or similar sequentially-oriented languages. Maybe you can use 10 cores and keep them busy doing 10 separate tasks, but what happens with 20, 50, 100, 1000 cores, all with shared memory and resources? That's when functional and concurrent languages will come into play.
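For what it's worth, C++11 (mentioned earlier in the thread) already lets you write code that scales across however many cores the hardware reports, instead of hard-coding N separate tasks. Here's a minimal sketch; the work (summing a buffer) and the chunking scheme are just for illustration:

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large buffer by splitting it into one chunk per hardware core.
// The same code runs unchanged whether the chip reports 2 cores or 100.
std::uint64_t parallel_sum(const std::vector<std::uint32_t>& data) {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;  // hardware_concurrency() may return 0
    std::vector<std::uint64_t> partial(cores, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / cores + 1;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            std::size_t begin = c * chunk;
            std::size_t end   = std::min(begin + chunk, data.size());
            for (std::size_t i = begin; i < end; ++i)
                partial[c] += data[i];  // each thread writes only its own slot
        });
    }
    for (auto& t : workers) t.join();
    return std::accumulate(partial.begin(), partial.end(), std::uint64_t{0});
}
```

Of course, this is still the programmer manually carving up the work; the argument above is exactly that this style stops scaling gracefully at hundreds of cores, which is where the functional/concurrent languages come in.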
Why are the "current mobile chips already featuring 8 cores or more" not called embedded MCUs? I will be surprised if they're not.
The OP is talking about the hypothetical future of multicore MCUs, which happens to be the direction hardware is currently going: we are hitting the limit on how long it takes information to travel across the silicon, so the solution is either going 3D, which only helps a little, or adding more cores.
Again, I don't know if that will happen at the embedded level in the next 10 years; that sounds like too aggressive a timeline. But it will happen at the desktop and mobile level, and it's going to be hard to make use of all that computing power with legacy sequentially-oriented languages.
Sooner or later, however, it will be the same paradigm shift for MCUs.
Fortunately, a lot of people deal with HDLs and are no strangers to concurrent versus sequential thinking. After all, we are born as concurrent beings and have to unlearn our instincts in order to learn sequential languages.
But whatever the future brings, we will adapt.
Since it is getting cheaper and cheaper to put a core into your ASIC, we are basically outsourcing the MCU-intensive tasks to the peripherals. The radio chip has an on-chip MCU. The USB controller has an on-chip MCU. The I2C extender is an MCU. The LCD controller has a built-in controller and connects to the MCU over SPI instead of a parallel interface. They already make digital power-management chips with onboard logic.
The way I see it, it will be small microcontrollers distributing the tasks into smaller chunks which are easy to write and easy to handle. Why would you need a big FPGA to do all this stuff? The interfaces are standard, and MCUs come with a dozen SPI, I2C, and USART ports.
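To make the "small chunks over standard interfaces" idea concrete, here's a hedged sketch of the kind of framing a host MCU might use to hand work to a smart peripheral over SPI or UART. The opcode layout and XOR checksum are invented for illustration; they are not any real part's protocol:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical frame layout: [opcode][length][payload...][checksum]
// The checksum is a simple XOR of all preceding bytes, so the MCU on
// the peripheral side can reject corrupted transfers cheaply.
std::vector<std::uint8_t> make_frame(std::uint8_t opcode,
                                     const std::vector<std::uint8_t>& payload) {
    std::vector<std::uint8_t> frame;
    frame.push_back(opcode);
    frame.push_back(static_cast<std::uint8_t>(payload.size()));
    frame.insert(frame.end(), payload.begin(), payload.end());
    std::uint8_t sum = 0;
    for (std::uint8_t b : frame) sum ^= b;
    frame.push_back(sum);
    return frame;
}

// What the peripheral-side MCU would run on receipt: XOR over the data
// plus the checksum byte is zero when the frame arrived intact.
bool frame_ok(const std::vector<std::uint8_t>& frame) {
    if (frame.size() < 3) return false;
    std::uint8_t sum = 0;
    for (std::uint8_t b : frame) sum ^= b;
    return sum == 0;
}
```

The point is that each little MCU only needs to speak a trivial protocol like this, rather than the whole system living in one big FPGA.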
C and C++ are the cancer of (embedded) computing. If C/C++ adopted Ada's strong type checking and runtime range checking, they would be quite usable languages for production work.
Ada-like strict type checking won't cost a thing, as it is performed at compile time. The runtime checks can be enabled or disabled as needed per module, so their impact is well controlled. Runtime exception handling is not necessary; to keep things simple and avoid code bloat, it can be handled by a general trap function which is invoked when something nasty happens, prints the error location, and restarts the device.
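That scheme can actually be approximated in today's C++ with a thin wrapper type. This is only a sketch, not Ada: the Percent type, the RANGE_CHECKS_OFF switch, and the trap behaviour (print and abort, standing in for "restart the device") are all made up for illustration:

```cpp
#include <cstdio>
#include <cstdlib>

// Stand-in for the "general trap function": report the problem and stop.
// On a real MCU this would log the location and reset the device instead.
[[noreturn]] inline void trap(const char* where) {
    std::fprintf(stderr, "range violation in %s\n", where);
    std::abort();
}

// An Ada-style ranged integer as a distinct C++ type. The distinctness is
// free, enforced entirely at compile time (a plain int will not silently
// convert). The runtime range check can be compiled out per module by
// defining RANGE_CHECKS_OFF, mirroring the per-module control above.
template <int Lo, int Hi>
class Ranged {
public:
    explicit Ranged(int v) : value_(v) {
#ifndef RANGE_CHECKS_OFF
        if (v < Lo || v > Hi) trap("Ranged constructor");
#endif
    }
    int get() const { return value_; }
private:
    int value_;
};

using Percent = Ranged<0, 100>;  // hypothetical example type
// Percent p = 42;               // would not compile: constructor is explicit
```

It's boilerplate compared to Ada's built-in subtypes, but it shows the cost model the post describes: the type distinction is purely compile-time, and the range check is an optional, removable runtime extra.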
Macros with side effects also quite often create bugs that are hard to spot.
Although Linux is an example of a successful large-scale C project, even Linux would benefit from improved type checking, range checking, etc.
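The side-effect macro complaint is easy to demonstrate. This is the classic textbook case, not code from anyone's project; read_sensor() stands in for any real ADC read:

```cpp
// A function-like macro pastes its argument text into each use site,
// so an argument with a side effect runs more than once.
int calls = 0;
int read_sensor() { ++calls; return 42; }

#define CLAMP_LOW(x) ((x) < 0 ? 0 : (x))

int clamped_by_macro() {
    // Expands to ((read_sensor()) < 0 ? 0 : (read_sensor())):
    // the sensor is read twice for one apparent call.
    return CLAMP_LOW(read_sensor());
}

// The inline-function version evaluates its argument exactly once
// and is just as cheap after optimization.
inline int clamp_low(int x) { return x < 0 ? 0 : x; }

int clamped_by_function() {
    return clamp_low(read_sensor());
}
```

Both return the same value, but the macro quietly performed the hardware read twice, which is exactly the kind of bug that's hard to spot in a code review.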