Author Topic: Interrupt routine duration  (Read 6386 times)


Offline Red_MicroTopic starter

  • Regular Contributor
  • *
  • Posts: 121
  • Country: ca
Interrupt routine duration
« on: August 12, 2022, 05:32:28 pm »
How long is it safe for an interrupt routine to process things in it? Let's say I have an interrupt every 300 us. That means I need to keep any processing time inside it below 300 us, right?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #1 on: August 12, 2022, 05:49:42 pm »
Yes. Typically I try to keep total interrupt processing below 90% of all available CPU time so that there is room left to do housekeeping tasks (like handling buttons). If you have several time critical processes, you'll need to create a plan on how to deal with all the priorities. There is no general approach that works for all applications.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1637
  • Country: nl
Re: Interrupt routine duration
« Reply #2 on: August 12, 2022, 05:57:06 pm »
Yes.

But also to leave enough cycles for main() code or other interrupt vectors.

The main importance of IRQs is to service the hardware to keep it operating properly. E.g. reading a received byte from a UART has some time sensitivity, as the buffer needs to be cleared before the next byte arrives. It's real-time :) But in this case it depends how bursty the incoming data stream is. If you can guarantee that only 1 byte arrives every 300 us, then there is nothing stopping you from spending 300 us in 1 IRQ. The CPU simply doesn't care whether it's executing code in an IRQ or in main.

However, if you can't guarantee that (e.g. it only holds on the happy path, not during faults etc.), then the firmware may break. It's therefore still best practice to keep IRQs as short as possible. On MCUs without a nested interrupt controller, you may also block all other interrupts for that entire processing time, which can introduce issues or timing jitter. Pushing data around with buffers between the IRQ and main() solves a lot of the problems associated with bursty interrupts and long processing times.
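In rough C, something like this (a minimal sketch; UART_DATA_REG is a stand-in for whatever the receive data register is called on your part, and the buffer size must be a power of two for the wrap mask):

#include <stdint.h>

#define RX_BUF_SIZE 64u                      /* power of two */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;             /* written only by the ISR    */
static volatile uint8_t rx_tail;             /* written only by main()     */

void uart_rx_isr(void)                       /* keep this short: just move the byte */
{
    uint8_t b = UART_DATA_REG;               /* reading usually also clears the RX flag */
    uint8_t next = (uint8_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));
    if (next != rx_tail)                     /* drop the byte if the buffer is full */
    {
        rx_buf[rx_head] = b;
        rx_head = next;
    }
}

int uart_read_byte(uint8_t *out)             /* called from main(), non-blocking */
{
    if (rx_tail == rx_head)
        return 0;                            /* nothing buffered yet */
    *out = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
    return 1;
}

With exactly one producer (the ISR) and one consumer (main) and single-byte indices, this needs no locking on most MCUs; main() can then take as long as it likes per byte, as long as it keeps up on average.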
« Last Edit: August 12, 2022, 05:58:57 pm by hans »
 

Offline Ian.M

  • Super Contributor
  • ***
  • Posts: 12852
Re: Interrupt routine duration
« Reply #3 on: August 12, 2022, 06:00:24 pm »
It depends - if the peripheral raising the interrupt has a multi-item hardware buffer or queue, then it *may* be acceptable to occasionally take longer to handle an interrupt than the interval to the next one, as long as on average it takes less time and the buffer/queue never overflows.   
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #4 on: August 12, 2022, 06:08:56 pm »
How long is it safe for an interrupt routine to process things in it? Let's say I have an interrupt every 300 us. That means I need to keep any processing time inside it below 300 us, right?

That would work if either there is no other processing to be done, or the other processing can be done on a separate core. But... If there is no other processing to be done, then it would be simpler and more predictable to spinloop waiting for the event that triggers the interrupt! Interrupts are a pragmatic complication, since in most applications it is not practical to do nothing until an input occurs.

If there is other processing to be done, especially if it is on another core, a sound technique is that the ISR captures the essential information contained in the event, puts that information in a mailbox or FIFO, and returns as fast as possible. The other core or background thread spinloops until there is something in the mailbox or FIFO, gets the event information, and processes it. That technique is easy to design, implement, verify, and (if necessary) debug. If the other system constraints allow it, then that technique can also allow for short bursts of interrupt events less than 300µs apart.
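A minimal single-slot mailbox sketch of that idea (names like TIMER_COUNT_REG and INPUT_DATA_REG are placeholders; if bursts can arrive faster than they are consumed, use a FIFO instead of a single slot):

#include <stdint.h>

typedef struct {
    uint32_t timestamp;
    uint16_t value;
} event_t;

static volatile event_t mailbox;
static volatile uint8_t mailbox_full;

void capture_isr(void)                      /* capture the essentials and return fast */
{
    mailbox.timestamp = TIMER_COUNT_REG;    /* placeholder: free-running timer        */
    mailbox.value     = INPUT_DATA_REG;     /* placeholder: peripheral result         */
    mailbox_full      = 1;                  /* publish only after the data is written */
}

void background_loop(void)                  /* other core or background thread        */
{
    for (;;) {
        while (!mailbox_full)
            ;                               /* spinloop (or sleep) until an event arrives */
        event_t ev = { mailbox.timestamp, mailbox.value };
        mailbox_full = 0;                   /* free the slot before the slow processing   */
        /* ...process ev here, taking as long as it needs...                             */
        /* if a new event could arrive mid-copy, protect the copy or switch to a FIFO    */
        (void)ev;
    }
}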

Naturally in some cases that is ideal, in some cases impossible.
« Last Edit: August 12, 2022, 06:11:31 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14445
  • Country: fr
Re: Interrupt routine duration
« Reply #5 on: August 12, 2022, 06:17:17 pm »
How long is it safe for an interrupt routine to process things in it? Let's say I have an interrupt every 300 us. That means I need to keep any processing time inside it below 300 us, right?

Well, any processing inside the interrupt handler PLUS any processing outside of it must fit within the 300 us period. How you balance that completely depends on what your code is supposed to be doing. Obviously if you take all 300 us inside the interrupt handler, there won't be any CPU time left for anything outside of it. Which is probably not what you want, but we don't know.

So while there is absolutely no hard rule about this apart from what I just said above, the usual approach is to limit any processing inside interrupt handlers to the bare minimum. But, again, it all depends on your overall architecture.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #6 on: August 12, 2022, 07:29:54 pm »
The way I look at interrupts is like having a time slicing / pre-emptive multitasking OS in hardware. Each interrupt source is a separate process. If the hardware supports nested interrupts, you can give these processes priorities where a higher priority process can halt a lower priority process for a little while. I don't see keeping interrupts as short as possible as some kind of ultimate goal; data will need to be processed one way or another.

One complicating factor can be getting large amounts of data in bursts that can't be handled as fast as the data comes in. In that case there is no alternative but to buffer the data and handle it at a slower overall pace that on average keeps up with the incoming data. This can be done from the main loop, but another solution is to use a timer interrupt to create a process that runs at preset intervals and thus can claim a fixed amount of CPU time without complicating the housekeeping tasks in the main loop.

What is important to keep in mind is that a processor has only X amount of time to execute Y instructions. That is a hard limit you can't get around. If you have two or more time critical processes, one will need to halt the other at some point. Solving this starts with obtaining accurate execution times and limits on the maximum execution time of each cycle of each process. From there it can be determined whether a process can be halted long enough, or whether a different approach is needed (for example some form of hardware acceleration, multiple CPU cores, buffering, etc).
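As a worked example (clock and cycle counts purely illustrative): at a 48 MHz core clock a 300 us period gives

300 us x 48 cycles/us = 14,400 cycles per period

If the interrupt work takes about 5,000 cycles including entry/exit overhead, that is roughly 35% of the CPU, leaving about 9,400 cycles per period for everything else; and that remaining budget has to absorb the worst case of every other process, not the average.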
« Last Edit: August 12, 2022, 08:31:51 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: hans

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21658
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Interrupt routine duration
« Reply #7 on: August 12, 2022, 09:40:40 pm »
Short enough not to interfere with other things.

Based on the scant information provided, there are zero or more possible restrictions on execution time.

That it recurs periodically isn't, by itself, a reason to require it to be shorter than that period.

Examples:
Maybe there's little or no penalty for failure: maybe a display flickers, or there's a momentary hiccup in a slow control loop, or a delayed or dropped packet in a UDP application.

Maybe it's just a timer that schedules work to be done, and most of the time it returns immediately, but occasionally it takes much longer to finish (due to various reasons: external data, internal sequencing, etc.).  And maybe that work must be done.  So it just continues until it finishes.  And in the mean time, the interrupt is skipped, or queued, or just keeps on running on top (the shorter passes finishing and returning to the longer run).

Thinking of things in terms of threads of execution, and concurrency, is useful, if a somewhat advanced topic.  I'm guessing discussing it won't be of immediate help, but introducing the concept and getting accustomed to it is.

Interrupts are a rather basic primitive, almost not worth thinking in terms of threads -- you can't create and destroy interrupts arbitrarily, and you might not be able to delay or sleep them, as such -- but the other aspects are no less relevant, concurrency in particular.

Indeed, the latter case (a recurring interrupt that is allowed to overlap itself) can be considered a source of concurrent threads, placed on the stack, and running in order until they finish (equivalent to cooperative multitasking).  You need to make very sure, of course, that the actions of each instance do not overlap and corrupt each other's state (including non-atomic / multi-word accesses that might get interrupted in the process).  Use buffers/queues, mutexes, etc. to manage shared objects.
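For example, a minimal critical-section sketch for a multi-word shared object, assuming a Cortex-M part with the CMSIS intrinsics (on AVR the equivalent would be cli()/sei() or ATOMIC_BLOCK); the struct and function names are just illustration:

#include <stdint.h>
/* plus your device's CMSIS header for __get_PRIMASK / __disable_irq / __set_PRIMASK */

typedef struct {
    uint32_t sample;
    uint32_t sequence;    /* updated together with sample by the ISR */
} shared_t;

static volatile shared_t shared;

shared_t read_shared(void)                   /* called from main() or a lower-priority ISR */
{
    shared_t copy;
    uint32_t primask = __get_PRIMASK();      /* remember whether interrupts were enabled   */
    __disable_irq();                         /* the two fields must be read as one unit    */
    copy.sample   = shared.sample;
    copy.sequence = shared.sequence;
    __set_PRIMASK(primask);                  /* restore, rather than blindly re-enabling   */
    return copy;
}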

As for the usual case, you want to handle a minimum in the interrupt itself, and handle the rest in the main() loop, or at least in a lower-priority interrupt.  Example: put serial data into/out of a buffer, and interact only with the buffer from elsewhere.

I've also done an interrupt which dominates CPU usage.  This must be done carefully: at a low enough priority that it doesn't block other functions, but driven by a high enough priority interrupt for the hard real-time application.  The XMEGA happened to have the right combination of features to do it:
https://github.com/T3sl4co1l/Reverb/
The ADC samples come into an accumulator (unnecessary, turns out the XMEGA ADC is pretty clean), which is copied down periodically (ADC samples counted by a timer-counter).  This gets a sample out of the super fast accumulator loop and into main memory.  The timer fires a low-priority interrupt halfway between (every 4 samples, either the accumulator value is copied and reset, or the copy is processed), and this allows the processing to happen over a longer time scale without delaying others.  In this way I can use, like, >90% of CPU cycles in that interrupt, without requiring main() to loop at high frequency (25kHz).

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Interrupt routine duration
« Reply #8 on: August 12, 2022, 09:44:57 pm »
Some things to read for TS.
No RTOS needed; you can schedule tasks yourself. With Rate Monotonic Analysis of the code you can make sure you always stay within the uC's timing requirements.

https://en.wikipedia.org/wiki/Rate-monotonic_scheduling
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #9 on: August 12, 2022, 10:09:37 pm »
The way I look at interrupts is like having a time slicing / pre-emptive multitasking OS in hardware. Each interrupt source is a separate process. If the hardware supports nested interrupts, you can give these processes priorities where a higher priority process can halt a lower priority process for a little while.

If you choose the right processor and language and design mentality, that is exactly the way you implement an application.

I refer, of course to the XMOS xCORE processors, xC , and CSP.

They completely avoid the need for interrupts, and the RTOS/scheduling is done in hardware. Plus, uniquely, the IDE guarantees execution times without resorting to measuring and hoping you've spotted the worst case.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #10 on: August 12, 2022, 10:11:10 pm »
Some things to read for TS.
No RTOS needed; you can schedule tasks yourself. With Rate Monotonic Analysis of the code you can make sure you always stay within the uC's timing requirements.

https://en.wikipedia.org/wiki/Rate-monotonic_scheduling

You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8168
  • Country: fi
Re: Interrupt routine duration
« Reply #11 on: August 13, 2022, 06:11:29 am »
No need to make things difficult, write schedulers, use an OS or buy esoteric special-snowflake products, however good they all are in some cases.

Just use a microcontroller with a pre-emptive, prioritized interrupt controller (for example, ARM Cortex-M0 and above), and interrupts can interrupt other interrupts based on the priorities you assign to them. It is not a problem at all to spend "a lot" of time in an interrupt. My programs have all basically had empty while(1); loops for years; I do everything in ISRs after init.

Instead of:

important_isr:
   important_flag=1
   return

less_important_isr:
   less_important_flag=1
   return

main() while(1)
    if(important_flag)
        do important things
        important_flag = 0
    if(less_important_flag)
        do less important things
        // what do you do if important_flag is set in the middle of things here?
        less_important_flag = 0

I do this:

important_isr()
    do important things

less_important_isr()
     do less important things

main()
    set_priority(important_isr, HIGH)
    set_priority(less_important_isr, LOW)
    while(1) do nothing


Also this is possible

important_isr1:
    do important things, quickly
    generate_software_interrupt(slow_processing_of_important1_data)

important_isr2:
    do unrelated important things, quickly

slow_processing_of_important1_data:
    do calculations after important_isr1, slowly, but of course must finish before next important_isr1

main:
    set_priority(important_isr1, HIGH)
    set_priority(slow_processing_of_important1_data, LOW)
    set_priority(important_isr2, MEDIUM)
    // thanks to SW interrupt, processing of isr1 data commences automagically after important_isr1 (and possibly pending important_isr2) returns,
    // important_isr2 can't interrupt important_isr1, but can interrupt the processing part
    // slow_processing_of_important1_data won't block any other interrupt, so it can take its time no problem

    while(1) do nothing



Of course, you still can't spend more than 300µs in an ISR if you need to react in time to the next interrupt of the same priority arriving 300µs later.
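On a Cortex-M with CMSIS the software-interrupt pattern above looks roughly like this; a minimal sketch, the IRQ numbers and handler names are placeholders, and the trick is borrowing an otherwise unused interrupt vector as the low-priority "slow" half:

/* #include your device header for the NVIC_* functions (CMSIS) */

void FAST_TIMER_IRQHandler(void)                 /* important_isr1: fast, high priority */
{
    /* grab the time-critical data here, quickly */
    NVIC_SetPendingIRQ(SPARE_IRQn);              /* kick the slow half */
}

void OTHER_PERIPH_IRQHandler(void)               /* important_isr2: unrelated, quick */
{
    /* service the other peripheral quickly */
}

void SPARE_IRQHandler(void)                      /* slow_processing, low priority */
{
    /* heavy number crunching here; can be pre-empted by both ISRs above */
}

int main(void)
{
    NVIC_SetPriority(FAST_TIMER_IRQn, 0);        /* 0 = highest priority on Cortex-M */
    NVIC_SetPriority(OTHER_PERIPH_IRQn, 1);      /* medium */
    NVIC_SetPriority(SPARE_IRQn, 3);             /* deferred processing: low */
    NVIC_EnableIRQ(FAST_TIMER_IRQn);
    NVIC_EnableIRQ(OTHER_PERIPH_IRQn);
    NVIC_EnableIRQ(SPARE_IRQn);
    for (;;) { /* nothing to do here */ }
}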
« Last Edit: August 13, 2022, 06:13:41 am by Siwastaja »
 
The following users thanked this post: nctnico, voltsandjolts, tooki, uer166

Online Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Interrupt routine duration
« Reply #12 on: August 13, 2022, 07:05:11 am »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.
And it is still valid to learn about this in 2022.
For instance TS could think about offloading processing outside the ISR.
If you need 32 samples and only then do a lot of calculation you do the time critical sampling in the ISR, save the data and offload the processing for another process, perhaps in small steps etc. In that case you might get the calculated data at a later stage but the samples are at least valid. You always have to take the whole design into consideration for these decisions.

And more cores can help but do not always solve your problems, just as nine women will not bear a child in one month  ;)
« Last Edit: August 13, 2022, 07:12:54 am by Kjelt »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #13 on: August 13, 2022, 08:34:39 am »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.
And it is still valid to learn about this in 2022.
For instance TS could think about offloading processing outside the ISR.
If you need 32 samples and only then do a lot of calculation you do the time critical sampling in the ISR, save the data and offload the processing for another process, perhaps in small steps etc. In that case you might get the calculated data at a later stage but the samples are at least valid. You always have to take the whole design into consideration for these decisions.
Why would you do that? These are precisely the blanket statements that make a lot of simple signal processing devices overcomplicated.  Basic rule: remove complexity. Your blanket statement only adds complexity by creating extra time sensitive processes and adding buffering / signaling between them.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: hans, Siwastaja, uer166

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #14 on: August 13, 2022, 08:55:12 am »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.

I'm not, of course, since ARM cortex hasn't got these properties: 1-32 cores, 500-4000MIPS, guaranteed hard realtime operation, no RTOS available -  since it isn't necessary and would only degrade performance :)
https://www.digikey.co.uk/en/products/filter/embedded-microcontrollers/685?s=N4IgjCBcpgnAHLKoDGUBmBDANgZwKYA0IA9lANogAMIAugL7EC0ATMiGpAC4BOArkVIUQAVjr1GINpEoAPFCR75xQA

https://www.eevblog.com/forum/microcontrollers/interrupt-routine-duration/msg4354999/#msg4354999

I've ignored your boring point about Amdahl's law, on the grounds that no technology solves all problems. What's interesting is the set of problems that can be avoided by use of appropriate technology.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #15 on: August 13, 2022, 08:56:47 am »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.
And it is still valid to learn about this in 2022.
For instance TS could think about offloading processing outside the ISR.
If you need 32 samples and only then do a lot of calculation you do the time critical sampling in the ISR, save the data and offload the processing for another process, perhaps in small steps etc. In that case you might get the calculated data at a later stage but the samples are at least valid. You always have to take the whole design into consideration for these decisions.
Why would you do that? These are precisely the blanket statements that make a lot of simple signal processing devices overcomplicated.  Basic rule: remove complexity. Your blanket statement only adds complexity by creating extra time sensitive processes and adding buffering / signaling between them.

Things should be as simple as possible, but no simpler.

The definition of "simple" is left as an exercise for the student.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: tooki

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13736
  • Country: gb
    • Mike's Electric Stuff
Re: Interrupt routine duration
« Reply #16 on: August 13, 2022, 09:32:49 am »
How long is it safe for an interrupt routine to process things in it? Let's say I have an interrupt every 300 us. That means I need to keep any processing time inside it below 300 us, right?
Essentially yes, though "time inside" also needs to include the context save/restore time, which is typically hidden by the compiler.
Although generally speaking it's good to minimise the amount of time spent in an ISR, there are situations where doing more can be useful. The risk is that the time available for, and latency of, other tasks can start to suffer in ways that are hard to predict.

An example I did a while ago: bit-bashed UART receive at 250 kbaud on a PIC10F322. The ISR entry/exit time was too long for the traditional approach of a timer interrupt per bit, so I generated an interrupt on the start-bit falling edge and the ISR read the whole byte before returning.
This only left a few cycles for the foreground task, but that didn't have much to do so it worked fine.
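Not the actual PIC code, but the shape of the idea is roughly this (the pin/delay helpers are placeholders and the delays have to be tuned, in instruction cycles, to land in the middle of each bit at 250 kbaud):

#include <stdint.h>

volatile uint8_t rx_byte;
volatile uint8_t rx_ready;

void start_bit_edge_isr(void)        /* fires on the falling edge of the start bit */
{
    uint8_t b = 0;
    delay_half_bit();                /* 0.5 bit: into the middle of the start bit  */
    for (uint8_t i = 0; i < 8; i++) {
        delay_one_bit();             /* step to the middle of the next data bit    */
        b >>= 1;
        if (read_rx_pin())           /* UART sends LSB first                       */
            b |= 0x80;
    }
    rx_byte  = b;                    /* stop bit not checked in this sketch        */
    rx_ready = 1;
    clear_edge_flag();               /* placeholder: re-arm the edge interrupt     */
}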
 
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1637
  • Country: nl
Re: Interrupt routine duration
« Reply #17 on: August 13, 2022, 11:42:24 am »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.

I'm not, of course, since ARM cortex hasn't got these properties: 1-32 cores, 500-4000MIPS, guaranteed hard realtime operation, no RTOS available -  since it isn't necessary and would only degrade performance :)
https://www.digikey.co.uk/en/products/filter/embedded-microcontrollers/685?s=N4IgjCBcpgnAHLKoDGUBmBDANgZwKYA0IA9lANogAMIAugL7EC0ATMiGpAC4BOArkVIUQAVjr1GINpEoAPFCR75xQA

https://www.eevblog.com/forum/microcontrollers/interrupt-routine-duration/msg4354999/#msg4354999

I've ignored your boring point about Amdahl's law, on the grounds that no technology solves all problems. What's interesting is the set of problems that can be avoided by use of appropriate technology.
But then I could also pitch how FPGAs are great for real-time control, because you can try to fit as much computation in 1 clock cycle as you specify. Those parts are very expensive. Most multi-core MCUs like the xCore, STM32H7, NXP parts (LPC4300), etc. are all expensive parts ($10+ each). Some exceptions exist like the dsPIC33CH or RP2040, but they are still relative beasts. Would you need that if you can solve the problem in firmware? 'The definition of "simple" is left as an exercise for the student.' is indeed very ambiguous. You could say that, code-wise, a blinky in VHDL is much less work than on an MCU. But still, I think only a few people would get out a whole FPGA board and toolchain to start doing those tasks.

The OP's question is still valid: how can you implement said problem on 1 CPU core without things starting to break? If the hardware is computationally capable of doing so, it's an implementation detail how it's done. Choosing a different part that is far more capable is an escape from having to fuss with this stuff.

I fully agree here that it's okay to stretch IRQs if that makes sense for the application (e.g. straightforward to implement). The exact schedulability of tasks depends on the CPU utilization of each task, and how many there are. For example with Rate Monotonic Scheduling (behaviour), you can get away with 1 task having a very short run-time while the other takes a really long time to complete. The Liu&Layland upper bound is a least upper bound which decreases to ln(2) ≈ 69% for infinitely many tasks on 1 CPU. However the hyperbolic bound is tighter on the basis of the same amount of information.

E.g. if you have 2 tasks that consume 90% and 5% of CPU time, then the Liu&Layland test cannot guarantee they are schedulable, because total CPU utilization must stay below 82.8% for 2 tasks, which is grossly violated at 95% usage. However the hyperbolic bound requires prod[i=1..n] (U_i + 1) < 2, which here is 1.90 * 1.05 = 1.995. So it should work out fine!
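As a quick sanity check you can just compute both bounds; a small desktop-C sketch of the numbers above:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double u[] = { 0.90, 0.05 };            /* per-task CPU utilizations from the example */
    int n = (int)(sizeof u / sizeof u[0]);

    double sum = 0.0, prod = 1.0;
    for (int i = 0; i < n; i++) {
        sum  += u[i];                       /* total utilization                          */
        prod *= u[i] + 1.0;                 /* hyperbolic-bound product                   */
    }

    double ll_bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* 0.828 for n = 2                 */

    printf("Liu&Layland: U = %.3f vs bound %.3f -> %s\n",
           sum, ll_bound, sum <= ll_bound ? "schedulable" : "test inconclusive");
    printf("Hyperbolic : prod(U_i+1) = %.3f vs 2.000 -> %s\n",
           prod, prod <= 2.0 ? "schedulable" : "test inconclusive");
    return 0;
}

With the 90%/5% pair the Liu&Layland test comes back inconclusive while the hyperbolic test passes, which is the point made above.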

This is the main reason I like to keep interrupts as short as possible. IRQ 'tasks' scheduled by hardware behave like RMS, and by keeping those CPU utilization factors low, the product in the hyperbolic bound stays as close to 1 as possible. Then in the main() code, if you can properly prioritize which process needs to be completed first (earliest deadline first scheduling), you can push that part of the application close to 100% utilization, as the CPU upper bound is not a product but a sum. This of course requires more firmware work for IPC between IRQ and main(), and also a main() that is able to switch context depending on arriving deadlines, for example using a preemptive RTOS with deadline knowledge.
The question is whether you want to fuss around with an RTOS for a theoretical 30% maximum gain in available CPU power (assuming all IPC and the IRQs are implemented with zero overhead). So both are valid options.
« Last Edit: August 13, 2022, 11:45:40 am by hans »
 
The following users thanked this post: emece67

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #18 on: August 13, 2022, 12:33:56 pm »
Interesting, but if you treat interrupts as processes that can interrupt each other (using nested interrupts), you are back to the situation where you are scheduling using an OS... From a functional POV there is no difference.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #19 on: August 13, 2022, 01:34:07 pm »
Hell's teeth, that's a lot of randomly interleaved points, which I won't attempt to disentangle.

You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ? I am talking about embedded devices like Arm cortex etc.

I'm not, of course, since ARM cortex hasn't got these properties: 1-32 cores, 500-4000MIPS, guaranteed hard realtime operation, no RTOS available -  since it isn't necessary and would only degrade performance :)
https://www.digikey.co.uk/en/products/filter/embedded-microcontrollers/685?s=N4IgjCBcpgnAHLKoDGUBmBDANgZwKYA0IA9lANogAMIAugL7EC0ATMiGpAC4BOArkVIUQAVjr1GINpEoAPFCR75xQA

https://www.eevblog.com/forum/microcontrollers/interrupt-routine-duration/msg4354999/#msg4354999

I've ignored your boring point about Amdahl's law, on the grounds that no technology solves all problems. What's interesting is the set of problems that can be avoided by use of appropriate technology.
But then I could also pitch how FPGAs are great for real-time control, because you can try to fit as much computation in 1 clock cycle as you specify. Those parts are very expensive. Most multi-core MCUs like the xCore, STM32H7, NXP parts (LPC4300), etc. are all expensive parts ($10+ each). Some exceptions exist like the dsPIC33CH or RP2040, but they are still relative beasts. Would you need that if you can solve the problem in firmware? 'The definition of "simple" is left as an exercise for the student.' is indeed very ambiguous. You could say that, code-wise, a blinky in VHDL is much less work than on an MCU. But still, I think only a few people would get out a whole FPGA board and toolchain to start doing those tasks.

This thread is about interrupts in MCUs.

Actually, FPGAs are cheaper than MCUs (and vice versa). It all depends on the application and constraints, and gross generalisations are silly.

The xCORE devices are MCUs that have many FPGA-like benefits without the pain of FPGAs, notably simulation, placing and routing. They are not a complete replacement for FPGAs, of course, but they do have beneficial characteristics that developers with only conventional MCU experience cannot believe exist. Wider experience is often beneficial, especially since technology changes regularly.

"...try to fit as much computation..." is right for FPGAs; placement and routing greatly affect what is possible. When pushing the limits, fmax can vary a lot with "trivial" placement changes.

If I wanted to do a blinky, I would use one resistor, one transistor, one capacitor, plus an LED. A 555 - let alone an MCU - would be unimaginative overkill.

Of course "'The definition of "simple" is left as an exercise for the student.' is indeed very ambiguous." is ambiguous. It must be read in the context (which you chose to snip) of the preceding sentence famously said by Einstein.


Quote
The OP's question is still valid: how can you implement said problem on 1 CPU core without things starting to break?

You are inventing constraints. The OP did not mention the number of cores.

Quote
This is the main reason I like to keep interrupts as short as possible. IRQ 'tasks' scheduled by hardware behave like RMS, and by keeping those CPU utilization factors low, the product in the hyperbolic bound stays as close to 1 as possible. Then in the main() code, if you can properly prioritize which process needs to be completed first (earliest deadline first scheduling), you can push that part of the application close to 100% utilization, as the CPU upper bound is not a product but a sum. This of course requires more firmware work for IPC between IRQ and main(), and also a main() that is able to switch context depending on arriving deadlines, for example using a preemptive RTOS with deadline knowledge.
The question is if you want to fuss around with RTOS for a theoretical 30% max profit in available CPU power (assuming all IPC and the IRQs are implemented with zero overhead). So both are valid options.

While I agree with many of those points, they ignore the most important point. That can be summed up as "if it is allowable to get the wrong result, then I can make a design that is faster and cheaper than any different design". "Fitness for purpose" matters!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Interrupt routine duration
« Reply #20 on: August 13, 2022, 02:02:34 pm »
Why would you do that?
In case it does not fit. That is when RMA comes in.
If it does fit and you are swimming in idle cycles, then why open a topic  ;)

 
Quote
Basic rule: remove complexity.
Yes, the KISS principle is always valid when you start. Then, fifteen SW versions later, you run out of processing time somewhere and you have to get smart and creative. When is which time-consuming calculation absolutely necessary? When can we stop sampling some input because it is not needed in the current state, etc. etc.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21658
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Interrupt routine duration
« Reply #21 on: August 13, 2022, 03:43:27 pm »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ?
...

He's a fanboi; don't expect reason to get involved.

Mind, fandom is strong with MCUs anyways; this is no accident.  Whereas hammers and screwdrivers are fairly intuitive to pick up and use on anything remotely like a nail or screw, MCUs are very far from it, and so one tends to use whatever hammer they have available.

(I can't speak for others, but I for one, at least, appreciate the intent, and value, of the xMOS design.  It's just that I'm essentially never going to need that value.  Not with that price point, learning curve, or single source*.)

(*Not that most any MCU isn't single-sourced anyway, just that the more conventional designs are more likely easier to port between, if it comes to it.)
(*And not that it's necessarily a huge learning curve. It's just yet another new thing I don't have time to learn.)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: Kjelt, Siwastaja, uer166

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #22 on: August 13, 2022, 05:23:26 pm »
You only need to do that if you haven't got enough cores. Nowadays cores are cheap.
What devices are you talking about ?
...

He's a fanboi; don't expect reason to get involved.

In some ways yes, in some no.

I get tired of comparing essentially similar MCUs (and programming languages for that matter) where people argue over trivial differences and ignore glaring flaws/gaps. How to achieve hard realtime guarantees is one such area.

I am interested in exploring significantly different approaches to a problem space, where the differences lead to significant advantages (with some disadvantages, of course).

Examples....

There used to be endless discussions as to whether Delphi was better than Pascal was better than C. To me they were boringly identical when compared with, say, Smalltalk/Objective-C or Prolog/LISP/ML etc.

When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32 bit processors in C. I was horrified at how little I had to re-learn.

People thinking that testing can ensure quality, in this case that measuring hard realtime performance is sufficient. Prediction is far more important than measurement, and much faster!

The point is to know that there are more than nails; screws and glue exist. Then choose which tool to use.

Quote
Mind, fandom is strong with MCUs anyways; this is no accident.  Whereas hammers and screwdrivers are fairly intuitive to pick up and use on anything remotely like a nail or screw; MCUs are very far from it, and so one tends to use whatever hammer they have available.

(I can't speak for others, but I for one, at least, appreciate the intent, and value, of the xMOS design.  It's just that I'm essentially never going to need that value.  Not with that price point, learning curve, or single source*.)

(*Not that most any MCU isn't single-sourced anyway, just that the more conventional designs are more likely easier to port between, if it comes to it.)
(*And not that it's necessarily a huge learning curve. It's just yet another new thing I don't have time to learn.)

Tim

Entirely reasonable.

The one point I'll make is that the XMOS mentality can be used in far more than XMOS processors. CSP style message passing is common in the HPC community, and TI DSPs have xC like channels for communication. (Both originated in the Transputer/CSP/Occam world in the early 80s). Such concepts can be a valuable way to structure designs and implementations on any embedded processor.

Single source is a very valid issue, one that always occurs when something is significantly different. Multi source must be "me too", for better or worse.

The lack of time is why I chose not to learn Delphi nor C++, a decision I've never regretted.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline TC

  • Contributor
  • Posts: 40
  • Country: us
Re: Interrupt routine duration
« Reply #23 on: August 13, 2022, 05:33:31 pm »
It's hard to know how experienced you are from your post. Assuming you don't have lots of experience, I'll suggest you look into the topic of deferred interrupt processing. I know FreeRTOS has a reasonable discussion of this on the web, but this is a widely discussed topic and I'm sure that there are plenty of good web resources.

If you are opposed to the use of an RTOS then you might want to learn about co-routines. Miro Samek (Quantum Leaps) has an excellent book that explores this in detail with excellent examples... "state machines" or something like that in the book title.

With an understanding of deferred interrupt processing and some of these programming techniques you will understand the benefits of keeping processing in an ISR to a bare minimum and the techniques that you can use to do this.

Good practice is to do the time-critical hardware servicing (like reading the interrupt status and clearing the interrupt), set a flag (for deferred interrupt processing) and then exit the ISR. Do the rest of the processing when not in an interrupt service routine.
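A minimal FreeRTOS-flavoured sketch of that deferred-processing pattern (the task and handler names are made up; the ISR clears the hardware source, notifies a task, and the task does the slow work at whatever priority the scheduler allows):

#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t handler_task;

void UART_IRQHandler(void)                     /* name depends on your part's vector table */
{
    BaseType_t woken = pdFALSE;
    /* read the status, grab the data, clear the interrupt source here */
    vTaskNotifyGiveFromISR(handler_task, &woken);
    portYIELD_FROM_ISR(woken);                 /* switch now if the task has higher priority */
}

static void handler_task_fn(void *arg)
{
    (void)arg;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* sleep until the ISR notifies us */
        /* do the non-time-critical processing here, outside the ISR */
    }
}

void start_deferred_handler(void)
{
    xTaskCreate(handler_task_fn, "uart_dpc", 256, NULL, 2, &handler_task);
}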
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #24 on: August 14, 2022, 12:23:19 am »
When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32 bit processors in C. I was horrified at how little I had to re-learn.
Well, people that were programming microcontrollers in assembly seem to have disappeared over the last decade, so there is progress  ;D

But I see your point. I was kind of hoping to see more like Ada to write software in a less messy, better controlled way. But the problem is the all-or-nothing approach that is generally followed. For example: Ada needs a large runtime environment which then needs to be ported to every microcontroller before it can be used. But I see the same where it comes to using languages like Python and Lua on microcontrollers. I've looked at various projects but they all go for the all-or-nothing approach where you either write the entire application in Lua / Python or not. C is much less demanding in that respect.

However, every now and then I have a project which would greatly benefit from having the business logic implemented as a script, so I looked into using Lua on a microcontroller -again-. This time with the clear goal that Lua should have a supporting role: C does the heavy lifting and Lua just ties everything together by calling C functions, shoving data around and making decisions. For this purpose I took the emblua project (https://github.com/szieke/embLua) and modified it so it can run a script in parallel with C code without needing an OS. It still needs a bit of testing and a few tweaks (to allow debugging) but I plan to put this on Github when it is finished.
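As an illustration of that split (this uses the plain desktop Lua C API rather than emblua's wrapper, and the function names are just examples):

#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* C does the heavy lifting and exposes a few functions... */
static int c_read_sensor(lua_State *L)
{
    lua_pushinteger(L, 42 /* read the hardware here */);
    return 1;                                 /* one return value */
}

static int c_set_output(lua_State *L)
{
    int level = (int)luaL_checkinteger(L, 1);
    (void)level;                              /* drive the hardware here */
    return 0;
}

/* ...and the Lua script only contains the business logic gluing them together. */
void run_business_logic(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "read_sensor", c_read_sensor);
    lua_register(L, "set_output",  c_set_output);
    luaL_dostring(L,
        "if read_sensor() > 40 then set_output(1) else set_output(0) end");
    lua_close(L);
}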
« Last Edit: August 14, 2022, 12:41:28 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

