Author Topic: Interrupt routine duration  (Read 6436 times)


Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #50 on: August 15, 2022, 07:31:31 pm »
None of those points is useful; they are all true in any embedded realtime system!

Huh? How are they not useful if they can create a provably correct realtime system;

You are going to have to explain how the techniques you mention can "create a provably correct realtime system".

Start by stating what you do and don't mean by "correct", then move on to outline what you do to assure correctness in that sense.

Finally, explain why limited-duration ISRs plus a queue of events for the scheduler fundamentally prevent anything in your preferred technique. Assume the ISRs capture the i/o and atomically deposit events in the queue, but no more.
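Something along these lines, as a minimal sketch only (single producer, single consumer, names illustrative; on a real target you would also want a compiler/memory barrier before publishing the head index):

Code: [Select]
/* Minimal sketch: the ISR captures the i/o value and deposits an event,
   the scheduler loop drains the queue.  Single producer (one ISR), single
   consumer (the scheduler), power-of-two ring, so on a single-core MCU with
   atomic word-sized loads/stores no lock is needed.  Names are illustrative. */
#include <stdint.h>

#define EVQ_SIZE 16u                      /* must be a power of two */

typedef struct { uint8_t source; uint32_t payload; } event_t;

static event_t           evq[EVQ_SIZE];
static volatile uint32_t evq_head;        /* written only by the ISR       */
static volatile uint32_t evq_tail;        /* written only by the scheduler */

/* Called from the ISR: capture and deposit, nothing more. */
static void evq_put(uint8_t source, uint32_t payload)
{
    uint32_t head = evq_head;
    if ((head - evq_tail) == EVQ_SIZE)
        return;                           /* full: count/flag the overrun instead */
    evq[head & (EVQ_SIZE - 1u)] = (event_t){ source, payload };
    /* a compiler/memory barrier belongs here on a real target */
    evq_head = head + 1u;                 /* publish only after the slot is written */
}

/* Called from the scheduler loop: returns 1 if an event was dequeued. */
static int evq_get(event_t *out)
{
    uint32_t tail = evq_tail;
    if (tail == evq_head)
        return 0;                         /* empty */
    *out = evq[tail & (EVQ_SIZE - 1u)];
    evq_tail = tail + 1u;
    return 1;
}

The ISR stays short and bounded; everything that can take time happens in the scheduler's context.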

You might not like the XMOS ecosystem. I don't like people ignoring/denying the problems with their preferred technique.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #51 on: August 15, 2022, 07:42:17 pm »
The answer is in 'view the interrupt controller as your OS scheduler'. On the one hand you propose that hardware and software should work in harmony (and that part of the solution can be implemented in hardware), but on the other hand you propose a software-only solution. In the end your OS scheduling comes from a (timer) interrupt as well, so you can get rid of a lot of overhead by letting the hardware (i.e. the interrupt controller and other peripherals) do the scheduling. Whether you do software scheduling or nested interrupts, the basic problem that needs to be solved stays the same, because nested interrupts and parallel processes with different priorities are fundamentally the same from a functional point of view.

In the end it all comes down to having X processing power to do Y work. The best solution depends entirely on the actual problem at hand as I have outlined in my earlier posting.
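As a rough illustration of what 'letting the interrupt controller do the scheduling' can look like on a Cortex-M class part (CMSIS-style calls; the device header, IRQ names and priority values are placeholders, not a recommendation):

Code: [Select]
/* Sketch only: the NVIC is already a priority-based preemptive scheduler in
   hardware.  Give each piece of work its own interrupt priority and do the
   work in the handler; the NVIC then does the dispatching and preemption.
   IRQ names, priorities and the device header below are placeholders. */
#include "stm32f4xx.h"                     /* whichever CMSIS device header applies */

void scheduling_setup(void)
{
    /* Lower number = higher priority on Cortex-M. */
    NVIC_SetPriority(ADC_IRQn,    1);      /* hard deadline: handle the sample now  */
    NVIC_SetPriority(USART2_IRQn, 3);      /* medium: byte-by-byte protocol work    */
    NVIC_SetPriority(TIM2_IRQn,   5);      /* low: periodic housekeeping "task"     */

    NVIC_EnableIRQ(ADC_IRQn);
    NVIC_EnableIRQ(USART2_IRQn);
    NVIC_EnableIRQ(TIM2_IRQn);
}

void TIM2_IRQHandler(void)
{
    /* clear the peripheral's interrupt flag here (device-specific) */

    /* Do the periodic work directly; the NVIC preempts this handler whenever
       the ADC or UART interrupt fires, exactly as a priority scheduler would
       preempt a low-priority task. */
}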
« Last Edit: August 15, 2022, 08:04:54 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Siwastaja

Online uer166

  • Frequent Contributor
  • **
  • Posts: 893
  • Country: us
Re: Interrupt routine duration
« Reply #52 on: August 15, 2022, 08:14:05 pm »
You are going to have to explain how the techniques you mention can "create a provably correct realtime system".

You're not UL or VDE or the FAA, so I don't need to explain anything to you; I was just giving an example of an alternate system that I know works well in special contexts. I'm not saying your interrupt-queue way (or literally any way) cannot accomplish a specific task or be provably correct; that's a strawman.

What I'm claiming is that a TT co-op scheduler makes it easy to prove system correctness at the general level. For example, in your proposed architecture you have to prove that your ISR deposits events atomically into a queue. In TT co-op you don't need to prove anything; it just does it by design, with zero extra effort, for any and all variable sizes and buffer lengths, automatically, without mutexes or locks. Because you don't need mutexes/locks, you can't have priority inversion and you don't have to deal with deadlocks. Because you don't have interrupts, timing is fixed and task jitter has a tight upper bound, etc. This argument is obviously a waste of server space so I'll shut up for now.
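For anyone who hasn't seen the pattern, the core of a TT co-op scheduler really is tiny. A bare-bones sketch (the task names, task table and polled tick source are illustrative only):

Code: [Select]
/* Bare-bones time-triggered cooperative scheduler, sketch only.  One table
   of tasks, each with a period and an offset in ticks; every task runs to
   completion, so data shared between tasks never needs a lock.  The tick
   source is shown as a polled hardware timer flag so the whole system can
   run with interrupts disabled.  Names are illustrative. */
#include <stdint.h>

typedef struct {
    void     (*run)(void);    /* task body, must run to completion        */
    uint32_t period;          /* ticks between releases                   */
    uint32_t offset;          /* first release, used to stagger the tasks */
} tt_task_t;

static void read_sensors(void)   { /* ... */ }
static void control_loop(void)   { /* ... */ }
static void update_display(void) { /* ... */ }

static const tt_task_t tasks[] = {
    { read_sensors,    1, 0 },    /* every tick                  */
    { control_loop,    1, 0 },    /* every tick, after the read  */
    { update_display, 10, 3 },    /* every 10th tick, staggered  */
};

int timer_tick_elapsed(void);     /* polls and clears the hardware tick flag */

int main(void)
{
    uint32_t tick = 0;
    for (;;) {
        while (!timer_tick_elapsed())
            ;                                     /* or sleep until the tick */
        for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            if (tick >= tasks[i].offset &&
                (tick - tasks[i].offset) % tasks[i].period == 0)
                tasks[i].run();                   /* never preempted */
        }
        tick++;
    }
}

Provided every task's worst case fits inside one tick, the whole schedule can be read straight off the table, which is where the determinism claims come from.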
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #53 on: August 15, 2022, 09:22:44 pm »
You are going to have to explain how the techniques you mention can "create a provably correct realtime system".

You're not UL or VDE or the FAA, so I don't need to explain anything to you; I was just giving an example of an alternate system that I know works well in special contexts. I'm not saying your interrupt-queue way (or literally any way) cannot accomplish a specific task or be provably correct; that's a strawman.

And that last sentence is itself a strawman. You claimed (in context that you chose to snip) that the techniques I noted would remove the benefits you assert for some other techniques. I challenged you to justify the benefits you asserted about "your" techniques - not my techniques/assertions.

I believe you are wise to deflect attention from your assertions such as...

None of those points is useful; they are all true in any embedded realtime system!
Huh? How are they[1] not useful if they can create a provably correct realtime system; none of those points are true normally. You don't get any of those points/guarantees in any run-of-the-mill embedded system. With time-triggered co-op you're getting true determinism at the expense of it being an absolute dog to design for and being generally inflexible/fragile, but that is a trade-off you might want to make in some contexts.

[1] "they" are
The issue with this is you've erased most of the guarantees of a "true" co-op scheduler if you add an interrupt. Namely stuff like:
  • No issues with data consistency. This means no physical way to get corrupt data due to non-atomicity
  • Fully deterministic execution: you can always say that at time T+<arbitrary ms>, you're executing task X
  • Worst case analysis triviality given some constraints
  • Easy runtime checking of health, such as a baked-in task sequence check, deadline check, etc etc


Quote
What I'm claiming is that a TT co-op scheduler makes it easy to prove system correctness at the general level. For example, in your proposed architecture you have to prove that your ISR deposits events atomically into a queue. In TT co-op you don't need to prove anything; it just does it by design, with zero extra effort, for any and all variable sizes and buffer lengths, automatically, without mutexes or locks. Because you don't need mutexes/locks, you can't have priority inversion and you don't have to deal with deadlocks. Because you don't have interrupts, timing is fixed and task jitter has a tight upper bound, etc. This argument is obviously a waste of server space so I'll shut up for now.

Wow, there are a lot of false assumptions and assertions there! I'll point out some and ignore others...

"In TT co-op you don't need to prove anything" - I think not. And adding ISRs+event queue to a TT co-op doesn't change the fundamental properties of a TT co-op.
"you don't have to deal with deadlocks" - I think not. You have to consider deadlocks and livelocks in the entire system.
"Because you don't have interrupts, timing is fixed and task jitter has a tight upper bound,"  - I think not, especially when you have to prove  the upper bound (i.e. not measure and hope you have spotted it by chance), doubly so with a modern processor.

Your last statement is wise, however.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #54 on: August 15, 2022, 09:32:03 pm »
The answer is in 'view the interrupt controller as your OS scheduler'. On the one hand you propose that hardware and software should work in harmony (and that part of the solution can be implemented in hardware), but on the other hand you propose a software-only solution. In the end your OS scheduling comes from a (timer) interrupt as well, so you can get rid of a lot of overhead by letting the hardware (i.e. the interrupt controller and other peripherals) do the scheduling. Whether you do software scheduling or nested interrupts, the basic problem that needs to be solved stays the same, because nested interrupts and parallel processes with different priorities are fundamentally the same from a functional point of view.

In the end it all comes down to having X processing power to do Y work. The best solution depends entirely on the actual problem at hand as I have outlined in my earlier posting.

My world view was unpleasantly (and probably unjustifiably) shaped by the first interrupt controller I designed into a system. When I got the first device in my hand I noticed it was "Rev G", which in retrospect should have been a big red flag. It never worked to spec :)

I've always thought my brain is too small to "reason" about the horrible, grubby, changing details of a system built around multiple interrupt priorities and task priorities. IMNSHO, in single-purpose dedicated embedded systems there should only be "normal" and "panic" priorities. (That doesn't apply to desktop/general-purpose computation systems.)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

