Author Topic: Using Interrupt ... Where ? When ? Pro & Con ?  (Read 14707 times)


Offline boriz

  • Newbie
  • Posts: 3
  • Country: dk
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #25 on: October 26, 2015, 03:59:55 pm »
Are there good examples or scenarios from your previous projects where you just could NOT escape using interrupts? Please share them.

It probably depends on how heavily the MCU is occupied in your main loop by having to poll all sorts of inputs.

Personally I use interrupts wherever I can, as I consider polling dirty business, taxing the MCU when it isn't necessary.

You might be fine not using interrupts at all on your MCU, but then again you might get more responsiveness and performance out of your device by using them, or even be able to use a lower-priced MCU to do the same job for less cost.

For high-speed UART you definitely need interrupt-based reception; otherwise you won't catch all the data between the point where your polling loop detects reception and the point where you read the data out of the MCU's RX buffer. (I have experienced that myself.)
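For what it's worth, a minimal sketch of the interrupt-driven version, assuming an AVR-class part (USART_RX_vect and UDR0 are the ATmega-style names; treat the whole thing as something to adapt to your own MCU):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define RX_BUF_SIZE 64                      /* power of two keeps the index mask cheap */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;            /* written only by the ISR */
static volatile uint8_t rx_tail;            /* written only by main code */

ISR(USART_RX_vect)                          /* fires the moment a byte arrives */
{
    uint8_t data = UDR0;                    /* reading UDR0 also clears the flag */
    uint8_t next = (rx_head + 1) & (RX_BUF_SIZE - 1);
    if (next != rx_tail) {                  /* drop the byte if the buffer is full */
        rx_buf[rx_head] = data;
        rx_head = next;
    }
}

/* Called from the main loop whenever it gets around to it. */
int uart_getc(void)
{
    if (rx_tail == rx_head)
        return -1;                          /* nothing buffered yet */
    uint8_t c = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1) & (RX_BUF_SIZE - 1);
    return c;
}

The ring buffer is what buys you the slack: bytes keep arriving and get stashed even while the main loop is off doing something else.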



 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #26 on: October 26, 2015, 04:26:38 pm »
Interrupts are very important and many times there is no alternative but to use them.
They can always be avoided, but your hardware might make it impractical.
They could be avoided given suitably designed hardware, but given a single-core micro and high-bandwidth asynchronous peripherals, impractical can easily become impossible.

Well, if you choose inadequate hardware then of course it is impractical! But I already stated that.

If the bandwidth is "high" for whatever processor has been chosen, then I question whether interrupts are sufficient. Polling in a tight loop would avoid the interrupt handling overhead, which can be very significant. Alternatively it is sometimes appropriate to have an FPGA deal with the high bandwidth i/o, e.g. see Xilinx Zynq devices.
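For comparison, tight-loop polling is nothing more than spinning on a status flag; a sketch with made-up register addresses (UART_STATUS, UART_DATA and RX_READY are placeholders, not any real part's register map):

Code: [Select]
#include <stdint.h>

/* Hypothetical memory-mapped registers - substitute your own part's addresses. */
#define UART_STATUS (*(volatile uint32_t *)0x40001000u)
#define UART_DATA   (*(volatile uint32_t *)0x40001004u)
#define RX_READY    (1u << 0)

/* Spin on the ready flag and drain bytes as fast as they arrive.
 * No vectoring, no context save/restore - but the CPU can do nothing else meanwhile. */
void drain_uart(uint8_t *dst, uint32_t len)
{
    while (len--) {
        while (!(UART_STATUS & RX_READY))
            ;                               /* busy-wait: latency is just this loop */
        *dst++ = (uint8_t)UART_DATA;
    }
}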

If you want medium latency i/o handling while the processor is also occupied with background tasks, then interrupts are helpful - provided their well-known dysfunctional attributes are dealt with correctly.


There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #27 on: October 26, 2015, 09:37:24 pm »
^ It sure sounds like you are suggesting that resorting to the use of interrupts is indicative of using inadequate hardware from the start. Devices that include hardware interrupts enjoy a pretty huge market.

Polling in a tight loop is efficient if you have a tight and predictable loop. It becomes far more efficient to poll from a timer interrupt if your code has significant branches and potential delays in those detours. Your code loop may also be too fast. For instance, you may want to sample, say, an ADC less often, since each reading draws current.

With interrupts you can decide on your frequency up front, balancing current draw and latency, and adjust it independently from the main code loop, rather than implementing delays or loop counts in the code loop, which will then need to be tweaked every time you alter, edit, or add any code. I'm not sure what this interrupt overhead is, of which you speak. A few extra instructions for the jump and the servicing, is all.
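As a sketch of that idea (the timer configuration, adc_read() and the other helpers here are hypothetical placeholders, not a particular vendor's API):

Code: [Select]
#include <stdbool.h>
#include <stdint.h>

extern void     do_background_work(void);    /* whatever else the main loop is busy with */
extern uint16_t adc_read(void);               /* placeholder ADC driver */
extern void     process_sample(uint16_t v);

static volatile bool sample_due;              /* set by the ISR, cleared by main */

/* Hooked to a hardware timer configured elsewhere to fire at, say, 100 Hz.
 * It only raises a flag, so its cost is a handful of instructions. */
void timer_isr(void)
{
    sample_due = true;
}

void main_loop(void)
{
    for (;;) {
        do_background_work();                 /* branches and delays in here no longer matter */

        if (sample_due) {                     /* sample rate is set by the timer alone */
            sample_due = false;
            process_sample(adc_read());
        }
    }
}

Change the sample rate and nothing in the main loop needs retuning.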

I cannot even comprehend NOT using timer interrupts, given the availability. It would seem downright stupid. It would be like choosing a multicore device and only using one core, or using a device with a hardware math processor and doing the calculations in software.
« Last Edit: October 26, 2015, 10:13:26 pm by KL27x »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #28 on: October 26, 2015, 10:11:42 pm »
^ It sure sounds like you are suggesting that resorting to the use of interrupts is indicative of using inadequate hardware from the start. Devices that include hardware interrupts enjoy a pretty huge market.

I have neither suggested nor implied that, any more than the statement "all crows are black birds" implies "all black birds are crows".

The word "inadequate" is used in the context of the statement I made, not in any other sense.

Quote
Polling in a tight loop is efficient if you have a tight and predictable loop. It becomes far more efficient to poll from a timer interrupt if your code has significant branches and potential delays in those detours. Your code loop may also be too fast. For instance, you may want to sample, say, an ADC less often, since each reading draws current.

"Efficiency" is only one metric. Correctness is another.

Who cares about efficiency if it gives the wrong answer? Or - more corrosively - if you aren't sure whether it will always give a correct answer.

Quote
With interrupts you can decide on your frequency up front, rather than implementing delays in the code loop, which will then need to be tweaked every time you alter, edit, or add any code.

With a little imagination it can be seen that this doesn't need to be the case.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #29 on: October 26, 2015, 10:16:50 pm »
Quote
With a little imagination it can be seen that this doesn't need to be the case.
I'll bow out. I don't have enough competency in C to go here. I can't even imagine how C manages to work with any degree of code/resource efficiency without using timer interrupts itself. Take PWMing an output as an example. I can imagine that a compiler could calculate and spit out perfectly balanced code loops for doing god knows how many things at once, where all the branches either take the same number of instructions or a code-loop counter keeps tabs on the branches and adds everything up. But that would be somewhat complex and code-inefficient compared to using timers (or, of course, a hardware PWM module).
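For what it's worth, the timer-interrupt version of software PWM boils down to one compare per channel per tick; a rough sketch (set_pin() is a hypothetical GPIO helper, and the timer is assumed to tick at 256 times the PWM frequency):

Code: [Select]
#include <stdbool.h>
#include <stdint.h>

extern void set_pin(uint8_t channel, bool level);    /* hypothetical GPIO helper */

#define PWM_CHANNELS 4
static volatile uint8_t duty[PWM_CHANNELS];           /* 0..255, written by the main code */

/* Timer ISR running at 256 * the PWM frequency. Each tick costs one increment
 * plus one compare per channel, independent of whatever the main loop is doing. */
void pwm_tick_isr(void)
{
    static uint8_t phase;
    phase++;                                           /* wraps naturally at 256 */
    for (uint8_t ch = 0; ch < PWM_CHANNELS; ch++)
        set_pin(ch, phase < duty[ch]);
}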

Quote
"Efficiency" is only one metric. Correctness is another.

Who cares about efficiency if it gives the wrong answer? Or - more corrosively - if you aren't sure whether it will always give a correct answer.
OTOH, coding in assembly, I have absolutely no problem with using interrupts and getting correctness.

It seems like all the talk is interrupts vs polling. And in my mind that doesn't even make sense. Timer interrupts are perfect for polling.
« Last Edit: October 26, 2015, 10:39:07 pm by KL27x »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #30 on: October 26, 2015, 10:31:18 pm »
Quote
With a little imagination it can be seen that this doesn't need to be the case.
I'll bow out. I don't have enough competency in C to go here. I can't even imagine how C manages to work with any degree of code/resource efficiency without using timer interrupts itself.
My point is language agnostic. It is true whatever language is used, including assembler.

Quote
Quote
"Efficiency" is only one metric. Correctness is another.
Who cares about efficiency if it gives the wrong answer? Or - more corrosively - if you aren't sure whether it will always give a correct answer.
OTOH, coding in assembly, I have absolutely no problem with using interrupts and getting correctness.

Prove it. Seriously. You will find it to be extraordinarily difficult. (Note: you cannot prove very much by testing something - all you can say is that a test didn't reveal a fault)

I suspect you are under the illusion that complex processors (and compilers for poorly specified languages) behave as you expect based on your reading their data sheets. Unfortunately life isn't that simple.

I suggest you have a look at the errata for a modern x86 processor or, if you want a good laugh, the Itanium. Or some PICs. Consider the effect of an interrupt starting in the middle of an instruction that causes a page fault that causes a TLB miss. Usually it will work as expected.
« Last Edit: October 26, 2015, 10:33:56 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #31 on: October 26, 2015, 10:46:26 pm »
I can totally see how a compiled language can be tricky with interrupts, since interrupts (and the specific problem you mentioned) are often device-specific. I imagine compilers screw that pooch quite frequently.

Quote
Prove it. Seriously. You will find it to be extraordinarily difficult.
Oh, I agree on this. Even if my code works as expected, I know there are bugs in there somewhere. I have maintained complex code for years. But in assembly, the problems you mention are just another potential source of bugs out of a million. And interrupts make many tasks so much simpler to code that the coding (and the debugging) is much easier by employing them. I am more confident in my code when I'm using the tools that make it way, way simpler, shorter, more efficient, and easier to follow.

FWIW, the datasheet (hopefully, not the errata!) should specify which instructions must not be interrupted, and you should temporarily disable interrupts where needed. Things like this would be the least of my worries when coding a project.
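A small sketch of that last point, in AVR-flavoured C (the SREG save / cli() / restore pattern is the usual AVR idiom; other parts use different mechanisms but the idea is the same):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

static volatile uint16_t tick_count;        /* incremented inside a timer ISR */

/* On an 8-bit core a 16-bit read takes two instructions, so an interrupt landing
 * between them could hand back a torn value. Mask interrupts around the access,
 * restoring whatever the global enable flag was rather than blindly re-enabling. */
uint16_t read_ticks(void)
{
    uint8_t sreg = SREG;                    /* remember the current interrupt state */
    cli();
    uint16_t t = tick_count;
    SREG = sreg;                            /* restore - safe even if we were called
                                               with interrupts already disabled */
    return t;
}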

I think TerminalJack has an insightful observation:
Quote
My opinion regarding the use of interrupts is that you can/will make more complex use of them on less complex systems (=lower-end hardware.)  The complexity will be easier to manage on a smaller system.  You also won't likely have an RTOS to leverage.

Maybe the question is loaded, depending on the application/device/language.

On another note, I'm still having a real hard time understanding how an RTOS operates without timers. It seems to me that the device must be using timers in the background, perhaps just hidden from the user.
Yes, you can easily have as many counters as you want in the code loop, but calculating the total delay across the variety of code branches would be ludicrous. The clock is most certainly incrementing counters and prescalers, and storing and acting on that information, very much the way timers are implemented directly by the user on lower-level devices.
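That is more or less all there is to it; a bare-bones sketch of the same mechanism without an RTOS (the 1 ms tick and the ISR hookup are assumptions):

Code: [Select]
#include <stdbool.h>
#include <stdint.h>

static volatile uint32_t ms_ticks;          /* the one hardware timer everything shares */

/* Hooked to a hardware timer configured elsewhere to fire every 1 ms. */
void systick_isr(void)
{
    ms_ticks++;
}

/* Any number of software timers can then be derived from that single counter,
 * no matter which branch the main loop happens to be executing. */
bool timer_expired(uint32_t start, uint32_t period_ms)
{
    return (uint32_t)(ms_ticks - start) >= period_ms;   /* wrap-safe subtraction */
}

(On an 8-bit part the 32-bit counter read would itself need the interrupt-masking treatment from the earlier example.)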
« Last Edit: October 26, 2015, 11:35:53 pm by KL27x »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #32 on: October 26, 2015, 11:34:24 pm »
Thereby demonstrating that you (KL27x) haven't understood what I wrote.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #33 on: October 26, 2015, 11:38:42 pm »
Eli5 me.  :popcorn:

Silicon errata happens so interrupts suck?

But your IDE, compiler, libraries are 100% bug-free by default.

I didn't know we were arguing, let alone insulting each other. But if that's the case:
Quote
They can always be avoided, but your hardware might make it impractical.
Quote
Well, if you choose inadequate hardware then of course it is impractical! But I already stated that.
You basically said that choosing hardware with interrupts and having to use them is choosing inadequate hardware. You said this in response to an example of using a UART. Why does the UART generate an interrupt in the first place? Just in case you chose inadequate hardware? And what if power consumption is a priority and you do not want to run at umpteen MHz? Or maybe you want the device to be asleep and still receive communication?

Quote
avoid the interrupt handling overhead
And when you talk about the overhead of an interrupt, I wonder if even you know what you're talking about. The resource overhead for, say, independently PWMing some output pins using a timer interrupt, with concurrent asynchronous processing, is almost nothing. Do you imagine that your compiled code loop is going to be anywhere near as efficient as that? It won't be in the same universe. "It's not about efficiency." OK, then. The interrupt code will also have more bandwidth left over for concurrent processing. "But it's not about speed, either. It's about correctness." OK, I missed that memo, and I don't think of "overhead" as correlating with "correctness," nor do I understand why that is now the discussion. But since we're here, which solution do you think will be more "correct" regarding PWM timing accuracy while also dealing with asynchronous events (hence the main code loop potentially taking many different branches)?

I didn't receive any notice that this thread was about life-critical, provable code. If I responded to that question, it was inadvertent. The OP asked when/where to use interrupts, and my answer is: anywhere it makes my life easier. There are a lot of good examples in this thread.
« Last Edit: October 27, 2015, 01:33:36 am by KL27x »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #34 on: October 27, 2015, 01:33:01 am »
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and maximum stack usage.
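As a rough illustration of what that calculation looks like (a deliberately simplified model: each other enabled ISR is assumed to fire at most once ahead of the one of interest within the window set by its maximum occurrence frequency):

Code: [Select]
% worst-case response time of interrupt i: the longest interrupt-disabled region,
% plus every other ISR running once first, plus interrupt i's own handler
t_{\mathrm{resp},i} \le t_{\mathrm{disabled,max}} + \sum_{j \ne i} t_{\mathrm{ISR},j} + t_{\mathrm{ISR},i}

% worst-case stack usage: main's deepest frame plus one ISR frame per allowed nesting level
S_{\max} = S_{\mathrm{main,max}} + \sum_{k=1}^{N_{\mathrm{nest}}} S_{\mathrm{ISR},k}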
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #35 on: October 27, 2015, 07:34:25 am »
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and maximum stack usage.
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical.

Consider an ARM A9 such as is found in phones and Zynq FPGAs: dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency!

Or you can look at Intel's embedded x86 processors, and you will find similar problems.

Of course if the discussion is limited to a subset of embedded systems without those features, then it is easier to calculate the maxima.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #36 on: October 27, 2015, 07:44:26 am »
Eli5 me.  :popcorn:

Silicon errata happens so interrupts suck?

But your IDE, compiler, libraries are 100% bug-free by default.

I didn't know we were arguing, let alone insulting each other. But if that's the case:
Quote
They can always be avoided, but your hardware might make it impractical.
Quote
Well, if you choose inadequate hardware then of course it is impractical! But I already stated that.
You basically said that choosing hardware with interrupts and having to use them is choosing inadequate hardware. You said this in response to an example of using a UART. Why does the UART generate an interrupt in the first place? Just in case you chose inadequate hardware? And what if power consumption is a priority and you do not want to run at umpteen MHz? Or maybe you want the device to be asleep and still receive communication?

Quote
avoid the interrupt handling overhead
And when you talk about the overhead of an interrupt, I wonder if even you know what you're talking about. The resource overhead for, say, independently PWMing some output pins using a timer interrupt, with concurrent asynchronous processing, is almost nothing. Do you imagine that your compiled code loop is going to be anywhere near as efficient as that? It won't be in the same universe. "It's not about efficiency." OK, then. The interrupt code will also have more bandwidth left over for concurrent processing. "But it's not about speed, either. It's about correctness." OK, I missed that memo, and I don't think of "overhead" as correlating with "correctness," nor do I understand why that is now the discussion. But since we're here, which solution do you think will be more "correct" regarding PWM timing accuracy while also dealing with asynchronous events (hence the main code loop potentially taking many different branches)?

I didn't receive any notice that this thread was about life-critical, provable code. If I responded to that question, it was inadvertent. The OP asked when/where to use interrupts, and my answer is: anywhere it makes my life easier. There are a lot of good examples in this thread.

Sigh. If you snip and combine partial quotes then you can "prove" anything - but nobody is impressed with strawman arguments. Especially when you zoom off in directions more-or-less unrelated to the quotes.

I think I have a pretty good idea of interrupt latency in a very wide range of systems; I've been creating embedded systems since the 1970s, using everything from 6800s/Z80s to "big iron" PA-RISC processors. Indeed I successfully persuaded companies to avoid the Itanium/Itanic because of its interrupt latency (how long does it take to save 4000(!) hidden registers?) and gross complexity of the hardware and compilers.

Finally, you made points without important caveats limiting their applicability. Don't complain if I show where your points aren't valid.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #37 on: October 27, 2015, 07:59:46 am »
Quote
Sigh. If you snip and combine partial quotes then you can "prove" anything
I quoted you because I was responding to those statements. You're the one who said these things. I quoted and provided context. It's all in the thread, if you forgot.
 
Quote
- but nobody is impressed with strawman arguments. Especially when you zoom off in directions more-or-less unrelated to the quotes.
That's the pot calling the kettle black. You did not respond to my posts at all. Well, I take that back. I'm not familiar with the Itanic. If that's the chip you base your interrupt overhead on, I'm not sure it is representative in general. And I thought I responded very directly to whatever I quoted.

Quote
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical.

Consider an ARM A9 such as is found in phones and Zynq FPGAs: dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency!

Or you can look at Intel's embedded x86 processors, and you will find similar problems.

Of course if the discussion is limited to a subset of embedded systems without those features, then it is easier to calculate the maxima.
Isn't this one of those strawman arguments you were speaking of? No one here ever tried to do such a thing. The thread started out discussing AVRs, as well. I don't see any mention of those in your explanations. Do they fit with your overall argument? Or are they, indeed, different?
Quote
I think TerminalJack has an insightful observation:


Quote

My opinion regarding the use of interrupts is that you can/will make more complex use of them on less complex systems (=lower-end hardware.)  The complexity will be easier to manage on a smaller system.  You also won't likely have an RTOS to leverage.


I even quoted TerminalJack here. Thoughts? Maybe you agree with him? You are undoubtedly knowledgeable; I know this because you posted your resume. Is anyone else on the right track?



« Last Edit: October 27, 2015, 08:23:01 am by KL27x »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #38 on: October 27, 2015, 11:17:35 am »
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and maximum stack usage.
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical.

Consider an ARM A9 such as is found in phones and Zynq FPGAs: dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency!
Latency is not worst case response time. Worst case response time is the worst case scenario where other interrupts are handled first. In a good ISR the latency is dwarfed by the amount of code which needs to be executed anyway. For a proof that a system keeps working you are after the worst-case scenario, so assume all caches and pipelines are empty and need to be filled first. Stack depth is easy as well: use a separate stack for interrupts.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19470
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #39 on: October 27, 2015, 12:09:34 pm »
IMHO the most important thing when using interrupts is that their maximum occurrence frequency has to be predictable. From there you can calculate maximum response times and maximum stack usage.
Calculating those maxima is impractical on modern embedded processors - and is steadily becoming more impractical.

Consider an ARM A9 such as is found in phones and Zynq FPGAs: dual-core, out-of-order, superscalar, with a dynamic-length pipeline (8-12 stages), L1 and L2 caches, and shared memory. Add in that one core is probably running Linux and the other may be running an RTOS, and I challenge you to calculate (not measure) the maximum stack depth and interrupt latency!
Latency is not worst case response time.

Qué? You have mean latency, mode latency, 95th-percentile (etc.) latency, worst-case latency. And "latency" is whatever you define it to be.

Try to calculate any of those with the h/w and s/w I mentioned, and see how far you get!

Quote
Worst case response time is the worst case scenario where other interrupts are handled first. In a good ISR the latency is dwarfed by the amount of code which needs to be executed anyway.

That's entirely processor and application dependent. Note that all the hidden processor state has to be saved, and in Itaniums that is ~4000 64-bit registers plus who knows what else!

Quote
For a proof that a system keeps working you are after the worst-case scenario, so assume all caches and pipelines are empty and need to be filled first.

No, that's a test, not a proof, and only a single test at that. Good luck setting up the entire system to ensure your test preconditions - and demonstrating that you have done so.

Quote
Stack depth is easy as well: use a separate stack for interrupts.

That is processor dependent and not changeable by the programmer.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: Using Interrupt ... Where ? When ? Pro & Con ?
« Reply #40 on: October 27, 2015, 08:50:31 pm »
Continually pushing the argument toward how an RTOS works (which is already built to handle interrupts in a pre-specified way), or toward Itanium and x86 microprocessors primarily used with an OS, is... what it is.

An RTOS is an additional layer that is already built to handle interrupts in a certain, limited way. Of course the use of interrupts will be more limited. And since it is built to handle parallel tasks to begin with, of course using the RTOS's own tools to handle events is more appealing.

If you have direct control over interrupt handling on your device, interrupts can be very broadly useful. You could implement interrupts for a specific application/task, rather than set up a handling algorithm that works for a general-purpose OS. TerminalJack pretty much covered it, and yet the hand keeps trying to talk to the face.
Quote
Of course if the discussion is limited to a subset of embedded systems...

is necessary context to make some of the posts on this thread sensible. Mostly yours.

OP: can we keep this limited to 8-bit microcontrollers through single-core ARM, for instance?
- Well, sure, you don't need interrupts if you use a 32-core XMOS processor. And if you need to use interrupts you chose insufficient hardware.

The truth is that you're comparing the XMOS architecture to a typical RTOS. (Even among RTOSes, there are different ways of managing multithreading and interrupts.) The XMOS architecture and extended language can result in shorter and more predictable latency than a given RTOS running on a single- or multi-core processor. This is because an RTOS on a multicore processor is doing the same thing, only it's fixed in one method of doing so. The XMOS extended language gives users access to spinlock/loop and handle interrupts as they see fit for a given task (or tasks). So the interrupt latency can be shorter and more predictable in this case because you're not tied to a specific RTOS, which may be written with different priorities. Even (especially) with a single core, interrupts can be used with extremely short latency and high predictability... depending on the application and/or the use (or non-use) of an OS. I imagine that in many specific applications, a single-core micro using interrupts can be faster and more predictable even than an XMOS, given the same processor speed. If your app doesn't make good use of multiple cores, maybe you are choosing over-adequate hardware. Is XMOS better at multithreading than any given RTOS? More flexible, it sure seems. But some apps don't benefit from multithreading. For some hardware applications, interrupts are ideal.

I'll put it this way. For certain applications, using a certain adequate single-core device, one MIGHT be able to accomplish said task satisfactorily using code-loop polling. And to achieve this, you MIGHT need to be a genius or a masochist. Whereas if you cannot do said task better (lower and more predictable latency) with an interrupt, you would have to be a moron.
« Last Edit: October 29, 2015, 05:34:16 am by KL27x »
 

