How long is it safe for an interrupt routine to process things in it? Let's say I have an interrupt every 300 us. It means I need to keep any processing time inside it below 300 us, right?
The way I look at interrupts is like having a time-slicing / pre-emptive multitasking OS in hardware. Each interrupt source is a separate process. If the hardware supports nested interrupts, you can give these processes priorities, where a higher-priority process can halt a lower-priority process for a little while.
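As a rough sketch of what that priority assignment could look like, assuming a Cortex-M part with the standard CMSIS NVIC calls; the device header and the two IRQ names (ADC and USART2) are only placeholder assumptions for whatever peripherals are actually involved:

#include "stm32f4xx.h"   /* assumption: some Cortex-M device header providing CMSIS */

void configure_irq_priorities(void)
{
    /* On Cortex-M, a lower number means a higher priority. With these
       settings the ADC interrupt can preempt (nest inside) the USART2
       interrupt while the latter is still running. */
    NVIC_SetPriority(ADC_IRQn, 1);      /* time-critical sampling */
    NVIC_SetPriority(USART2_IRQn, 3);   /* less critical console I/O */

    NVIC_EnableIRQ(ADC_IRQn);
    NVIC_EnableIRQ(USART2_IRQn);
}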
Some things for the TS (topic starter) to read:
No RTOS needed; you can schedule tasks yourself. With Rate Monotonic Analysis of the code you can make sure you will always stay within the uC's timing requirements.
https://en.wikipedia.org/wiki/Rate-monotonic_scheduling
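To give an idea of what such an RMA-style check looks like, here is a small self-contained sketch using the Liu & Layland utilization bound; the task set (worst-case execution times and periods) is made up purely for illustration.

/* Rate-monotonic schedulability check: the task set is schedulable if the
   total CPU utilization is at most n * (2^(1/n) - 1). Numbers are hypothetical. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double wcet[]   = {  50.0,  200.0,  1000.0 };   /* worst-case execution time, us */
    double period[] = { 300.0, 2000.0, 10000.0 };   /* task period, us */
    int n = 3;

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += wcet[i] / period[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.78 for n = 3 */
    printf("utilization %.3f, RMS bound %.3f -> %s\n",
           u, bound, (u <= bound) ? "schedulable" : "needs a closer look");
    return 0;
}

Here the 300 us interrupt from the question uses about 17% of the CPU, and the whole set lands at about 0.37, well under the 0.78 bound.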
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about? I am talking about embedded devices like ARM Cortex etc.
And it is still valid to learn about this in 2022.
For instance, the TS could think about offloading processing outside the ISR.
If you need 32 samples and only then do a lot of calculation, do the time-critical sampling in the ISR, save the data, and offload the processing to another process, perhaps in small steps, etc. In that case you might get the calculated data at a later stage, but the samples are at least valid. You always have to take the whole design into consideration for these decisions.
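A minimal sketch of that sample-in-the-ISR, process-later pattern, assuming a 300 us timer/ADC interrupt; adc_read() and process_block() are hypothetical stand-ins for whatever the real application does:

#include <stdint.h>
#include <stdbool.h>

#define SAMPLE_COUNT 32

extern uint16_t adc_read(void);                    /* hypothetical ADC access */
extern void process_block(const uint16_t *buf);    /* hypothetical heavy calculation */

static volatile uint16_t samples[SAMPLE_COUNT];
static volatile bool block_ready = false;

/* Fires every 300 us: only the time-critical sampling happens here. */
void sample_isr(void)
{
    static uint32_t idx = 0;

    samples[idx++] = adc_read();
    if (idx == SAMPLE_COUNT) {
        idx = 0;
        block_ready = true;    /* hand the full block to the background loop */
    }
}

/* Background loop: the long calculation runs here, outside the ISR. */
void main_loop(void)
{
    for (;;) {
        if (block_ready) {
            block_ready = false;
            process_block((const uint16_t *)samples);
        }
        /* other non-time-critical work */
    }
}

In a real design you would probably double-buffer (ping-pong) the sample array so the ISR can keep filling while the previous block is still being processed, which is exactly the extra buffering/signaling the next reply objects to.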
Why would you do that? These are precisely the blanket statements that make a lot of simple signal processing devices overcomplicated. Basic rule: remove complexity. Your blanket statement only adds complexity by creating extra time-sensitive processes and adding buffering / signaling between them.
I'm not talking about ARM Cortex, of course, since it hasn't got these properties: 1-32 cores, 500-4000 MIPS, guaranteed hard realtime operation, no RTOS available (since one isn't necessary and would only degrade performance):
https://www.digikey.co.uk/en/products/filter/embedded-microcontrollers/685?s=N4IgjCBcpgnAHLKoDGUBmBDANgZwKYA0IA9lANogAMIAugL7EC0ATMiGpAC4BOArkVIUQAVjr1GINpEoAPFCR75xQA
https://www.eevblog.com/forum/microcontrollers/interrupt-routine-duration/msg4354999/#msg4354999
I've ignored your boring point about Amdahl's law, on the grounds that no technology solves all problems. What's interesting is the set of problems that can be avoided by use of appropriate technology.
But then I could also pitch how FPGAs are great for real-time control, because you can try to fit as much computation into 1 clock cycle as you specify. Those parts are very expensive. Most multi-core MCUs like the xCore, STM32H7, NXP parts (LPC4300), etc. are all expensive parts ($10+ each). Some exceptions exist, like the dsPIC33CH or RP2040, but those are still relative beasts. Would you need that if you can solve a problem in firmware?
'The definition of "simple" is left as an exercise for the student.' is indeed very ambiguous. You could say that, code implementation wise, a blinky in VHDL is much less work than on an MCU. But I still think only a few people would get out a whole FPGA board and toolchain to start doing those tasks.
The OP's question is still valid: how can you implement said problem on 1 CPU core without things starting to break?
This is the main reason I like to keep interrupts as short as possible. IRQ 'tasks' scheduled by hardware behave like RMS, and by keeping those CPU utilization factors low, the product in the hyperbolic bound stays as close to 1 as possible. Then in the main() code, if you can properly prioritize which process needs to be completed first (earliest-deadline-first scheduling), you can push that part of the application close to 100% utilization, as the CPU upper bound there is not a product but a sum. This of course requires more firmware work for IPC between the IRQs and main(), and also a main() that is able to switch context depending on arriving deadlines, for example using a preemptive RTOS with deadline knowledge.
The question is whether you want to fuss around with an RTOS for a theoretical 30% maximum gain in available CPU power (assuming all IPC and the IRQs are implemented with zero overhead). So both are valid options.
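To make those two bounds concrete, here is a small sketch contrasting the hyperbolic RMS bound (product of (U_i + 1) at most 2) with the EDF bound (plain sum of utilizations at most 1); the per-task utilization numbers are invented for illustration.

#include <stdio.h>

int main(void)
{
    /* hypothetical per-task CPU utilization fractions */
    double u[] = { 0.30, 0.25, 0.20 };
    int n = 3;

    double sum = 0.0, prod = 1.0;
    for (int i = 0; i < n; i++) {
        sum  += u[i];
        prod *= u[i] + 1.0;
    }

    printf("EDF:  sum of U_i      = %.2f (OK if <= 1.00)\n", sum);   /* prints 0.75 */
    printf("RMS:  prod of (U_i+1) = %.2f (OK if <= 2.00)\n", prod);  /* prints 1.95 */
    return 0;
}

The same task set sits comfortably under EDF at 75% utilization but is already brushing the hyperbolic limit under fixed priorities, which illustrates the extra headroom (roughly the 30% mentioned above) that deadline-based scheduling can buy.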
He's a fanboi; don't expect reason to get involved.
Mind, fandom is strong with MCUs anyways; this is no accident. Whereas hammers and screwdrivers are fairly intuitive to pick up and use on anything remotely like a nail or screw, MCUs are very far from it, and so one tends to use whatever hammer they have available.
(I can't speak for others, but I for one, at least, appreciate the intent, and value, of the xMOS design. It's just that I'm essentially never going to need that value. Not with that price point, learning curve, or single source*.)
(*Not that most any MCU isn't single-sourced anyway, just that the more conventional designs are more likely easier to port between, if it comes to it.)
(*And not that it's necessarily a huge learning curve. It's just yet another new thing I don't have time to learn.)
Tim
When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32-bit processors in C. I was horrified at how little I had to re-learn.