If the interactions between the sub-systems are well delineated, then using multiple microcontrollers (preferably of the same type) will usually make for a more robust system and will be faster to develop.
What matters is that the interaction can be reduced to lower-bandwidth protocols that reduce the real-time requirements of the master controller.
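As a minimal sketch of what "reduced to a lower-bandwidth protocol" can mean in practice: a small fixed frame that a master sends to each slave over a UART. The frame layout, field names, and additive checksum here are all illustrative assumptions, not any particular standard.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame for a low-bandwidth master/slave link:
   [addr][cmd][len][payload...][checksum]. All names are illustrative. */
enum { FRAME_MAX_PAYLOAD = 16 };

typedef struct {
    uint8_t addr;                      /* slave address          */
    uint8_t cmd;                       /* command code           */
    uint8_t len;                       /* payload length (bytes) */
    uint8_t payload[FRAME_MAX_PAYLOAD];
} frame_t;

/* Simple additive checksum over header and payload. */
static uint8_t frame_checksum(const frame_t *f)
{
    uint8_t sum = (uint8_t)(f->addr + f->cmd + f->len);
    for (size_t i = 0; i < f->len; i++)
        sum = (uint8_t)(sum + f->payload[i]);
    return (uint8_t)(~sum);            /* invert so an all-zero frame fails */
}

/* Serialize into a byte buffer ready for the UART. Returns bytes written. */
static size_t frame_encode(const frame_t *f, uint8_t *out)
{
    size_t n = 0;
    out[n++] = f->addr;
    out[n++] = f->cmd;
    out[n++] = f->len;
    for (size_t i = 0; i < f->len; i++)
        out[n++] = f->payload[i];
    out[n++] = frame_checksum(f);
    return n;
}
```

Because each frame is a few bytes exchanged at whatever polling rate the master chooses, the hard real-time work stays local to each slave and the link itself needs almost no bandwidth.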
It is exactly the opposite of what you describe. Using multiple controllers is much harder because suddenly you are dealing with several asynchronous processes.
I've seen both and done both.
Mr. Ganssle has talked about this on occasion:
http://www.ganssle.com/
All the benefits and hazards pointed out above apply. There are a few things that can help guide the choice:
1. Is there a process that is highly timing-dependent, handles a lot of interrupts, runs in "real-time", etc.? This may benefit from being its own separate processor.
2. Is there a process that channels a lot of data, such as video or audio, that is likely to cause system-wide issues if it gets caught up in some waiting process? This may benefit from being its own separate processor.
3. Are there boards connected by long cabling that have to do complex tasks but are relatively low bandwidth? A remote processor can turn a lot of parallel lines / inputs / outputs into a serial channel.
4. Are there boards connected to very different environments? For instance, one connects to a computer using USB while the other controls motors on 240 VAC. The necessity of optical isolation makes it attractive to use a separate processor.
Just like using multiple tasks on one processor, it's a matter of using the right set of tools for the job.
Figure that bootloading and updating complexity might grow on the order of n². However, that is somewhat mitigated by moving tasks to remote processors: instead of 20 tasks on the main one, perhaps it's down to 10, with the other tasks distributed across the rest.
Depending on the organization you may be able to more readily split it up between engineers.
Depending on the product line it may also make sense. For instance, perhaps the base model needs only a minimum amount of I/O, while a more expanded model really only needs the same overall processing power but has extra modules to support the new options. This allows a solution with a common main processor board with peripherals that get added.
Jack Ganssle also on multiple occasions mentions how much faster it is to develop smaller systems and that putting everything into one processor is a recipe to have your project come in late with more bugs.
Separating out tasks allows for easy re-use of modules.
Modern cars have hundreds of microprocessors, not because they run out of processing power, but because it is much better to build them that way.
Hardware is cheap, developers are not.
For security and safety reasons, many system solutions require multiple MCUs.
Indeed, we can find countless systems that really need more than a single MCU. Distributed systems that occupy more than a single PCB also most likely require multiple MCUs. So the reason to have multiple MCUs is system requirements, not some general/arbitrary rule for splitting a system into multiples, like "each developer or each function of the device needs its own MCU".
It depends; there is no one-size-fits-all answer for this. Typically, if you can solve it with one big MCU, then that is the good choice, but if it creates bottlenecks, or if the physical layout makes it difficult, then you need to divide it up.
A simple example:
For a design we did some time ago, we (over the objections of a stubborn contracting boss) replaced loads of 4051 muxes with the cheapest STM32 offering the most ADC channels. Not only did this turn the cumbersome (in code as well as PCB routing) 4051 multi-channel level-detection problem into a per-pin ADC level-detection problem: per 10 ADC channels there is now a dirt-cheap MCU (12 MCUs in total) that does auto-detection, averaging, and scaling, instead of two 4051s per field times 12 that do nothing but mux. The master MCU talks to each slave MCU over a 9-bit auto-address serial link; the master just pings at a steady rate for new values or channel-activity reports. A weekend of code + PCB work replaced roughly 1.5 months of our contracting boss's unfinished struggle.
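The per-channel work each slave does (average a burst of raw readings, then scale) can be sketched as below. The 12-bit resolution and 3.3 V reference are assumptions typical of an STM32F0-class ADC, and the function name is hypothetical.

```c
#include <stdint.h>

/* Sketch of a slave's per-channel job: average a burst of raw 12-bit
   ADC readings and scale the result to millivolts. A 3.3 V reference
   and 12-bit full scale (4095) are assumed here. */
static uint16_t average_and_scale_mv(const uint16_t *samples, int count)
{
    uint32_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    uint32_t avg = sum / (uint32_t)count;      /* averaged raw ADC code */
    return (uint16_t)((avg * 3300u) / 4095u);  /* code -> millivolts    */
}
```

The point is that each slave ships one small pre-digested number per channel to the master, instead of the master having to sequence muxes and sample raw analog itself.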
The point is, it depends on your/their design goals, how much time/money you/they want to spend, and what your management/contractor/etc. dictates. This particular problem was solved far more cheaply with many MCUs than it would have been with one big MCU and a complex solution.
I would have thought a 10+ channel I2C ADC would make more sense, but looking at the prices they cost more than an STM32.
The cheapest STM32, the STM32F030F4P6 with 11 channels, is $1.85 CAD.
The cheapest standalone ADC is at least $2-3.
Without comparing specs, this comparison is meaningless if it is based on price alone. There are many 'cheap' ADCs with several LSBs of error, and I have no idea how good or bad the STM32's ADC is. A good ADC with fewer bits may be cheaper and achieve similar resolution with some oversampling.
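For reference, the oversampling trick mentioned above works by summing 4^n samples and shifting right by n, which gains roughly n extra bits of resolution provided the signal carries at least about 1 LSB of noise. A minimal sketch:

```c
#include <stdint.h>

/* Oversample-and-decimate: summing 4^n raw samples and shifting right
   by n yields approximately n extra bits of resolution, assuming the
   input has ~1 LSB of noise to dither the quantization. */
static uint32_t oversample(const uint16_t *samples, int n_extra_bits)
{
    int count = 1 << (2 * n_extra_bits);   /* 4^n samples required */
    uint32_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return sum >> n_extra_bits;            /* decimate: keep n extra bits */
}
```

So a fewer-bit ADC plus 16 samples per reading can approach the effective resolution of a pricier part, at the cost of sample rate.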