Apparently our worlds differ. I don't do USB and Ethernet stuff...
Here's how I partition it:
A microcontroller could be driving one motor in a robot arm. Each motor has its own microcontroller that does the operational functions of that particular motor: acceleration, deceleration, temperature and current monitoring, positional feedback, exception handling, end detection (reading optical encoders and switches etc.)... It accepts commands: go from x to y with this accel/decel profile and this max speed, let me know when done.
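To make that concrete, such a command could be as simple as a small struct the host drops into the controller's mailbox (a sketch only; the field names and types are made up):

#include <stdint.h>

/* hypothetical move command for one motor controller */
struct move_cmd {
    int32_t  from;        /* start position, in encoder counts */
    int32_t  to;          /* target position, in encoder counts */
    uint16_t max_speed;   /* speed ceiling for the move */
    uint16_t accel;       /* acceleration part of the profile */
    uint16_t decel;       /* deceleration part of the profile */
    uint8_t  notify;      /* nonzero: raise a 'done' flag when the move completes */
};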
An embedded processor system would be talking to the individual motor controllers. If you start giving that controller an Ethernet port and serve up a webpage with buttons to control the motors... that's no longer a microcontroller, at least not in my definition. That is a computer. It may still be a single chip, but it is no longer a microcontroller; it is an embedded system.
Now, here is another point of view: if you are making a system that has USB hosts and Ethernet, you can't really call that a microcontroller anymore... that is a computer, most likely running some form of OS. The boundaries are fuzzy.
My definition of a microcontroller is a single chip containing CPU, code/data memory and a bunch of I/O that performs some operation invisible to the user. There may be a communication pathway in the form of a UART or some digital I/O, but those are there to instruct the controller what to do.
If at this level you use libraries, fine.
But at the motor level you NEED to know what you are doing! The 'embedded' system needs to have certainty that the motor controller will behave: if the embedded system attempts to drive the motor over its endstops, the microcontroller needs to say 'no'.
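Something like this, as a sketch (the limits and return codes are made up; the point is that the check lives in the motor's own firmware, not in the host):

#include <stdint.h>

#define ENDSTOP_MIN  0       /* made-up travel limits, in encoder counts */
#define ENDSTOP_MAX  20000

int accept_move(int32_t to)
{
    if (to < ENDSTOP_MIN || to > ENDSTOP_MAX)
        return -1;           /* refuse: never drive past the endstops */
    return 0;                /* command accepted, start the profile */
}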
My work is mechatronic in nature (hard disks) and the firmware is bound very tightly to the hardware. The drive motors do not wait; miss a phase commutation and you just corrupted a sector on the disk. The software is profiled and timed. We need to make sure a routine terminates on all possible pathways before the next interrupt comes. Speed control loops, servo loops... it's all timed out. Code needs to be clean and tight, and it gets profiled to hell. Superfluous constructions like calling a routine 50 times to set some I/O pin direction are simply not done.
We actually go through the trouble of making a BSP for the algorithm coders. Making the BSP is my work. (BSP: board support package.) This is in essence a mini library and a bunch of defines that are optimized. One project has tons of #defines in it. Memory is mapped by hand for the custom logic attached to an ARM; the gateway in and out goes through dual-ported memory, so the real 'microcontroller' never has to wait. The system controller simply drops data and instructions on specific addresses and the microcontroller picks those up automatically. There are no input/output routines.
How to explain:
target_speed is a memory location in DPRAM.
real_speed is a memory location that, in reality, is a block of hardware, but it is mapped into RAM and we have a define for it.
commutecounter is also a location in memory that is actually an interval control register for the PWM.
The code for the microcontroller is then simply:

if (real_speed != target_speed) {
    if (real_speed < target_speed)
        commutecounter++;
    else
        commutecounter--;
}
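To show what those 'variables' really are, the BSP defines could look roughly like this (the addresses are made up for the example; the real ones come from the hand-made memory map of the custom logic):

#include <stdint.h>

/* made-up addresses -- in the real project these come from the custom logic memory map */
#define target_speed   (*(volatile uint16_t *)0x40000000u)  /* dual-ported RAM, written by the system controller */
#define real_speed     (*(volatile uint16_t *)0x40008000u)  /* hardware block, read as plain memory */
#define commutecounter (*(volatile uint16_t *)0x40008004u)  /* PWM interval control register */

No read or write routines needed; the compiler just emits loads and stores to those addresses.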
As for the cost of the dev platform: what is 3K for a compiler? It's written off in one project.
Right, I double-checked it.
An IRQ on an ARM requires you to save R0 to R12.
An FIQ on an ARM requires you to save only R0 to R7; R8 to R14 are banked for FIQ, so you save time during the context swap as those registers have private copies accessible only in FIQ mode.
The ARM will also complete whatever instructions are in the pipeline BEFORE it vectors off...
You were right about the LDM and STM instructions in all their wonderful flavours; I forgot about those.
But these instructions still take time to complete (I don't know exactly how many clock ticks, I'd have to look it up) and they take space on the stack...
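In C the difference looks roughly like this with an arm-gcc style toolchain (the attribute spelling is gcc's; the handler names and device addresses are made up):

#include <stdint.h>

#define TIMER_ACK (*(volatile uint32_t *)0xFFFF0010u)  /* made-up device register */
#define DMA_ACK   (*(volatile uint32_t *)0xFFFF0020u)  /* made-up device register */

/* IRQ: the compiler stacks whatever it clobbers out of R0-R12 plus LR,
   typically with the STM/LDM sequences mentioned above */
void __attribute__((interrupt("IRQ"))) timer_irq_handler(void)
{
    TIMER_ACK = 1u;
}

/* FIQ: only R0-R7 ever need to hit the stack, because R8-R14 are banked
   copies private to FIQ mode */
void __attribute__((interrupt("FIQ"))) dma_fiq_handler(void)
{
    DMA_ACK = 1u;
}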
An 8051 has 4 register banks and you can assign an interrupt handler to a specific bank. So there is no need to save anything on the stack; you simply switch in and out.
It's a different approach.
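A sketch of that approach in Keil C51 style ('interrupt 1' is the Timer 0 overflow vector, 'using 2' runs the handler out of register bank 2; the counter is made up):

static volatile unsigned int ticks;   /* made-up shared counter */

void timer0_isr(void) interrupt 1 using 2
{
    /* no pushing of R0-R7: the compiler just flips the bank-select
       bits in PSW on entry and restores them on exit */
    ticks++;
}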
And the Cortex IS Harvard, where the ARM7 is von Neumann, as I stated. I quote:
"The ARM Cortex-M3 is an implementation of the ARM7v architecture, the latest evolution of ARM’s embedded cores. It is a Harvard architecture, using separate buses for instructions and data (rather than a von Neumann architecture, where data and instructions share a bus). The Harvard architecture is intrinsically significantly faster but physically more complex."
And here's the other quote:
"Another innovation on the Cortex-M3 is the Nested Vector Interrupt Controller (NVIC). Unlike external interrupt controllers, as are used in ARM7TDMI implementations, this is integrated into the Cortex M3 core and can be configured by the silicon implementer to provide from a basic 32 physical interrupts with eight levels of pre-emption priority up to 240 physical interrupts and 256 levels of priority. The design is deterministic with low latency, making it particularly applicable to automotive applications"
With an ARM7 the interrupt handling is something that is 'bolted on' afterwards and depends on the chip vendor. In a Cortex it is built in.
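You see that in code as well: because the NVIC is part of the core, the same CMSIS calls work on any Cortex-M3, whoever makes the chip. A sketch (the device header and the interrupt name are placeholders for whatever the vendor provides):

#include "device.h"   /* placeholder: any Cortex-M3 vendor header that pulls in CMSIS */

void setup_motor_irq(void)
{
    NVIC_SetPriority(MOTOR_IRQn, 2);  /* how many priority levels exist is up to the silicon implementer */
    NVIC_EnableIRQ(MOTOR_IRQn);       /* MOTOR_IRQn: made-up name standing in for a device interrupt */
}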
The above quotes come from Anders Lundgren of IAR.
http://www.edn.com/article/459352-Choosing_between_an_ARM7_and_a_Cortex_M3_processor.php
Which is exactly what I stated.