Are there good examples or scenarios where you just could NOT avoid using interrupts in your previous projects? Please share them.
Interrupts are very important and many times there's no alternative to using them.
They can always be avoided, but your hardware might make it impractical.
They could be avoided given suitably designed hardware, but given a single-core micro and high-bandwidth asynchronous peripherals, "impractical" can easily become "impossible".
^ It sure sounds like you are suggesting that resorting to the use of interrupts is indicative of using inadequate hardware from the start. Devices that include hardware interrupts enjoy a pretty huge market.
Polling in a tight loop is efficient if you have a tight and predictable loop. It becomes far more efficient to poll with a timer interrupt if your code has significant branches and potential delays in those detours. Your code loop may also be too fast. For instance, you may want a lower sampling frequency for, say, an ADC, since each reading draws current.
With interrupts you can decide on your frequency up front, rather than implementing delays in the code loop, which then need to be tweaked every time you alter, edit, or add any code.
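Something like this minimal sketch is what I mean, assuming a generic Cortex-M-style part; timer_init_hz(), adc_convert_blocking() and TIMER0_IRQHandler are placeholder names for whatever your vendor's HAL and vector table actually use:

Code:
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical HAL hooks -- substitute your vendor's calls. */
extern void     timer_init_hz(uint32_t hz);
extern uint16_t adc_convert_blocking(void);

/* Set by the timer ISR at the sampling rate chosen up front. */
static volatile bool sample_due = false;

/* Timer ISR: fires at the configured frequency, so the ADC rate never
 * depends on how long any branch of the main loop happens to take. */
void TIMER0_IRQHandler(void)
{
    sample_due = true;
}

int main(void)
{
    timer_init_hz(100);              /* 100 samples/s, fixed, no tuned delays */

    for (;;) {
        if (sample_due) {
            sample_due = false;
            uint16_t sample = adc_convert_blocking();
            /* process sample; each conversion draws current, so the rate
             * stays exactly where we want it, not "as fast as the loop" */
            (void)sample;
        }
        /* other, possibly slow and branchy, work goes here */
    }
}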
With a little imagination it can be seen that this doesn't need to be the case.
"Efficiency" is only one metric. Correctness is another.
Who cares about efficiency if it gives the wrong answer? Or - more corrosively - if you aren't sure whether it will always give a correct answer.
Quote
With a little imagination it can be seen that this doesn't need to be the case.
I'll bow out. I don't have enough competency in C to go here. I can't even imagine how C manages to work with any degree of code/resource efficiency without using timer interrupts itself.
Quote"Efficiency" is only one metric. Correctness is another.
Who cares about efficiency if it gives the wrong answer? Or - more corrosively - if you aren't sure whether it will always give a correct answer.OTOH, coding in assembly, I have absolutely no problem with using interrupts and getting correctness.
Prove it. Seriously. You will find it to be extraordinarily difficult.
My opinion regarding the use of interrupts is that you can/will make more complex use of them on less complex systems (=lower-end hardware.) The complexity will be easier to manage on a smaller system. You also won't likely have an RTOS to leverage.
They can always be avoided, but your hardware might make it impractical.
Well, if you choose inadequate hardware then of course it is impractical! But I already stated that.
avoid the interrupt handling overhead
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and the maximum stack usage.
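As a rough illustration of that arithmetic (every number below is invented, and a single non-nesting priority level is assumed):

Code:
/* Toy worst-case interrupt budget; the figures are purely illustrative.
 * Assume three sources with known maximum rates and known (measured or
 * analysed) worst-case ISR execution times. */
#define UART_ISR_US   10u      /* worst-case execution time of each ISR */
#define TIMER_ISR_US   5u
#define ADC_ISR_US    20u      /* ADC treated as the lowest priority */

#define UART_ISR_HZ   11520u   /* 115200 baud / 10 bits per character */
#define TIMER_ISR_HZ   1000u
#define ADC_ISR_HZ      100u   /* maximum occurrence frequencies */

/* Worst-case delay before the lowest-priority ISR even starts: it may
 * have to wait for every other handler to run to completion first... */
#define WORST_START_DELAY_US  (UART_ISR_US + TIMER_ISR_US)
/* ...and its worst-case response (completion) adds its own execution. */
#define WORST_RESPONSE_US     (WORST_START_DELAY_US + ADC_ISR_US)

/* Worst-case CPU time spent in interrupts per second (microseconds). */
#define ISR_LOAD_US_PER_S  (UART_ISR_HZ * UART_ISR_US + \
                            TIMER_ISR_HZ * TIMER_ISR_US + \
                            ADC_ISR_HZ  * ADC_ISR_US)

_Static_assert(WORST_RESPONSE_US < 100u,
               "ADC handler must complete within 100 us");
_Static_assert(ISR_LOAD_US_PER_S < 1000000u,
               "interrupts must not consume the whole CPU");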
Eli5 me.
Silicon errata happens so interrupts suck?
But your IDE, compiler, libraries are 100% bug-free by default.
I didn't know we were arguing, let alone insulting each other. But if that's the case:
Quote
They can always be avoided, but your hardware might make it impractical.
Quote
Well, if you choose inadequate hardware then of course it is impractical! But I already stated that.
You basically said that choosing hardware with interrupts and having to use them is choosing inadequate hardware. You said this in response to an example of using a UART. Why does the UART generate an interrupt in the first place? Just in case you chose inadequate hardware? And what if power consumption is a priority and you do not want to run at umpteen MHz? Or maybe you want the device to even be asleep and still receive communication?
Quote
avoid the interrupt handling overhead
And when you talk about the overhead of an interrupt, I wonder if even you know what you're talking about. The resource overhead for, say, independently PWMing some output pins using a timer interrupt, with concurrent asynchronous processing, is almost nothing. Do you imagine that your compiled code loop is going to be anywhere near as efficient as that? It won't be in the same universe. "It's not about efficiency." Ok, then. The interrupt code will also have more bandwidth left over for concurrent processing. "But it's not about speed, either. It's about correctness." Ok, I missed that memo, and I don't think of "overhead" as correlating to "correctness", nor do I understand why that is now the discussion. But since we're here, which solution do you think will be more "correct" regarding PWM timing accuracy, while also dealing with asynchronous events (hence the main code loop potentially taking many different branches)?
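To make that concrete, here's roughly the kind of thing I mean - a minimal sketch, where pin_write() and TIMER1_IRQHandler are stand-ins for whatever your part actually provides:

Code:
#include <stdint.h>
#include <stdbool.h>

/* Placeholder pin access -- replace with direct port-register writes. */
extern void pin_write(uint8_t pin, bool level);

#define PWM_CHANNELS 4u
#define PWM_STEPS    100u           /* ISR rate / PWM_STEPS = PWM frequency */

/* Duty cycles 0..PWM_STEPS, updated freely by the main loop. */
static volatile uint8_t duty[PWM_CHANNELS];

/* Timer ISR: one compare per channel per tick. The PWM edges land on the
 * timer grid regardless of which branch the main loop is currently in. */
void TIMER1_IRQHandler(void)
{
    static uint8_t phase = 0;

    for (uint8_t ch = 0; ch < PWM_CHANNELS; ch++)
        pin_write(ch, phase < duty[ch]);

    phase = (uint8_t)((phase + 1u) % PWM_STEPS);
}

int main(void)
{
    duty[0] = 25;                   /* 25 % on channel 0, for example */
    for (;;) {
        /* asynchronous event handling, long branches, etc. -- none of it
         * disturbs the PWM timing, only the ISR does */
    }
}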
I didn't receive any notice that this thread was about life-critical proof-able code. If I responded to that question, it was inadvertent. OP asked when/where to use interrupts, and my answer is anywhere it makes my life easier. There are a lot of good examples in this thread.
Sigh. If you snip and combine partial quotes then you can "prove" anything - but nobody is impressed with strawman arguments. Especially when you zoom off in directions more-or-less unrelated to the quotes.
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical.
Consider an ARM A9 such as is found in phones and Zynq FPGAs, which is dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency time!
Or you can look at Intel's embedded x86 processors, and you will find similar problems.
Of course if the discussion is limited to a subset of embedded systems without those features, then it is easier to calculate the maxima.
I think TerminalJack has an insightful observation:
Quote
My opinion regarding the use of interrupts is that you can/will make more complex use of them on less complex systems (=lower-end hardware.) The complexity will be easier to manage on a smaller system. You also won't likely have an RTOS to leverage.
Quote
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and the maximum stack usage.
Quote
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical. Consider an ARM A9 such as is found in phones and Zynq FPGAs, which is dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency time!
Latency is not worst case response time.
Worst case response time is the worst case scenario where other interrupts are handled first. In a good ISR the latency is dwarfed by the amount of code which needs to be executed anyway.
For a proof that a system keeps working you are after the worst-case scenario, so assume all caches and pipelines are empty and need to be filled first.
Stack depth is easy as well: use a separate stack for interrupts.
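And the calculated figure is easy to sanity-check at runtime if you paint that separate stack with a known pattern first. A sketch - the region, its size and all the names here are made up, and on a Cortex-M this would typically be the area the MSP points at while threads run on the PSP:

Code:
#include <stdint.h>
#include <stddef.h>

/* Hypothetical dedicated interrupt stack, e.g. placed by the linker script. */
#define ISR_STACK_WORDS 256u
static uint32_t isr_stack[ISR_STACK_WORDS];

#define STACK_FILL 0xDEADBEEFu

/* Call once at startup, before interrupts are enabled. */
void isr_stack_paint(void)
{
    for (size_t i = 0; i < ISR_STACK_WORDS; i++)
        isr_stack[i] = STACK_FILL;
}

/* Returns how many words of the interrupt stack have ever been used, so the
 * measured high-water mark can be compared against the calculated worst case
 * (they should agree, with margin). The stack grows down from the top, so the
 * untouched fill pattern survives at the low end of the array. */
size_t isr_stack_high_water(void)
{
    size_t untouched = 0;
    while (untouched < ISR_STACK_WORDS && isr_stack[untouched] == STACK_FILL)
        untouched++;
    return ISR_STACK_WORDS - untouched;
}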
Of course if the discussion is limited to a subset of embedded systems...