Author Topic: Interrupt routine duration  (Read 6384 times)


Offline uer166

  • Frequent Contributor
  • **
  • Posts: 888
  • Country: us
Re: Interrupt routine duration
« Reply #25 on: August 14, 2022, 12:50:32 am »
My programs have all basically had empty while(1); loops for years; I do it all in ISRs after init.

I used to be a proponent of co-operative schedulers (no interrupts allowed at all). But with the exception of some regulatory and reliability edge cases, the nested ISR way you mention is generally easier to understand, and can still be designed to pretty high assurance levels.
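
A minimal sketch of that shape, assuming a Cortex-M style part with a CMSIS SysTick handler (names are illustrative and the init is elided):

#include <stdint.h>

volatile uint32_t tick_ms;          /* state owned and updated by the ISR       */

void SysTick_Handler(void)          /* periodic "control" interrupt, e.g. 1 kHz */
{
    tick_ms++;
    /* sample inputs, run the control law, update outputs -- the work lives here */
}

int main(void)
{
    /* init clocks and peripherals, configure SysTick, enable interrupts */
    for (;;) {
        /* nothing to do at base level; optionally sleep here with __WFI() */
    }
}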
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21657
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Interrupt routine duration
« Reply #26 on: August 14, 2022, 12:52:19 am »
When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32 bit processors in C. I was horrified at how little I had to re-learn.
Well, people that are programming microcontrollers in assembly seem to have disappeared during the last decade, so there is progress ;D

But I see your point. I was kind of hoping to see more use of languages like Ada to write software in a less messy, better-controlled way. But the problem is the all-or-nothing approach that is generally followed. For example: Ada needs a large runtime environment which then needs to be ported to every microcontroller before it can be used. But I see the same when it comes to using languages like Python and Lua on microcontrollers. I've looked at various projects but they all go for the all-or-nothing approach where you either write the entire application in Lua / Python or not. C is much less demanding in that respect.

However, every now and then I have a project which would greatly benefit from having the business logic implemented as a script, so I looked into using Lua on a microcontroller -again-. This time with the clear goal that Lua should have a supporting role: C does the heavy lifting and Lua just ties everything together by calling C functions, shoving data around and making decisions. For this purpose I took the emblua project (https://github.com/szieke/embLua) and modified it so it can run a script in parallel with C code without needing an OS. It still needs a bit of testing and a few tweaks (to allow debugging) but I plan to put this on Github when it is finished.

Kinda tempted to learn Rust and see how that goes. Not very well supported on AVR though (just barely, I think? an LLVM outputter?), so that would mean, in addition to learning the whole language, potentially also grappling with compiler bugs, or just poor optimization (granted, not that avr-gcc is a high hurdle to clear).

Regarding scripts, there's also MicroPython, too big to put on small parts like 8-bit or entry-level devices, but an STM32F4 will certainly get you there IIRC. Maybe not something you'd want to embed for production, but a handy way to test out a lot of things quickly.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8167
  • Country: fi
Re: Interrupt routine duration
« Reply #27 on: August 14, 2022, 06:29:24 am »
My programs have all basically had empty while(1); loops for years; I do it all in ISRs after init.

I used to be a proponent of co-operative schedulers (no interrupts allowed at all). But with the exception of some regulatory and reliability edge cases, the nested ISR way you mention is generally easier to understand, and can still be designed to pretty high assurance levels.

In very complex interdependence cases my interrupt way might fall apart; I don't know because I have not seen this happen.

The upside is simplicity, which eases analysis and reduces risks. For example, often you have, say, an analog signal which signifies an overcurrent event, and another digital signal which signifies a Failure of a Safety System. It is trivially easy to configure both for highest priority and assign them the same "safe shutdown" handler. They will pre-empt any other "normal operation" ISR.
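
As a sketch, Cortex-M flavoured (the IRQ handler names below are placeholders, not real vector names), both fault sources simply funnel into one routine at the highest NVIC priority:

/* One shutdown path for every critical fault source; handler names are made up. */
static void safe_shutdown(void)
{
    /* disable gate drives / PWM outputs, latch a fault flag, notify the rest */
}

void ANALOG_OVERCURRENT_IRQHandler(void) { safe_shutdown(); }  /* comparator   */
void SAFETY_FAIL_IRQHandler(void)        { safe_shutdown(); }  /* digital pin  */

void fault_irqs_init(void)
{
    /* NVIC_SetPriority(ANALOG_OVERCURRENT_IRQn, 0);   0 = highest priority    */
    /* NVIC_SetPriority(SAFETY_FAIL_IRQn, 0);          pre-empts "normal" ISRs */
    /* NVIC_EnableIRQ(...) for both                                            */
}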

Things could get difficult if safe recovery from an overcurrent event requires some weird, slow process, and recovery from Failure of a Safety System requires another, different kind of weird, slow process. Then you would need to choose which gets higher priority, which might not be allowable; they would need to run simultaneously.

But I have not encountered anything like this in real life. Then again, I also realize that sometimes you can't do everything on an MCU, but need a CPLD as a helper, or a full FPGA design. In fact, many VFDs (variable frequency drives) I have torn down seem to use an MCU + CPLD, and I wonder why, because I, at least, don't have any issue doing a VFD as a full-MCU design, and have done many.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Interrupt routine duration
« Reply #28 on: August 14, 2022, 08:18:45 am »
Well, people that are programming microcontrollers in assembly seem to have disappeared during the last decade, so there is progress ;D

But I see your point. I was kind of hoping to see more use of languages like Ada to write software in a less messy, better-controlled way.

My prediction from 2006 is failing miserably.
I expected that by 2026 there would be more manufacturer-tied abstract block tools like Rational Rose: an engineer would pick a ready-made, premanufactured module from the manufacturer based on analog inputs (choose your MHz/GHz sampling requirements) and digital processing capabilities (GFLOPS/RAM etc.), draw what it needs to do in an abstract model-based program, press a button, and all the HW and SW code would be generated, downloaded to the module and ready for testing.
A bit like a more mainstream, low-budget evolution of NI / LabVIEW development.

In that sense developments are slow  :)
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13736
  • Country: gb
    • Mike's Electric Stuff
Re: Interrupt routine duration
« Reply #29 on: August 14, 2022, 09:50:25 am »
But I have not encountered anything like this in real life. Then again, I also realize that sometimes you can't do everything on an MCU, but need a CPLD as a helper, or a full FPGA design. In fact, many VFDs (variable frequency drives) I have torn down seem to use an MCU + CPLD, and I wonder why, because I, at least, don't have any issue doing a VFD as a full-MCU design, and have done many.
I suspect this is pragmatism and cautious design - they use hardware to enforce anything that could cause damage or present a safety hazard. It is very easy to mess up software in subtle ways. For long-lifetime industrial products like VFDs, it is quite likely that the code needs to be maintained by different software people over time, and the risk of a software change causing problems in an industrial scenario, where it could literally cost millions in equipment damage & downtime, is too high to be worth saving a few dollars on hardware.
Having all "safety" related stuff in hardware means your software people don't need to be as expert in dealing with low-level timings, interrupt priorities etc. I doubt VFD manufacturers are as attractive to good software talent as "sexier" industries.
It is also likely to be much easier to demonstrate to any internal or external QC inspection/review process that a hardware solution guarantees safe behaviour than software.
Knowing for sure that any software change can't melt anything  reduces the amount of testing needed for a new release.
Industrial manufacturers will be concerned about things like noise, transients etc. causing disruption to software flow - again having hardware (CPLD) controlling things is likely to be more robust as it's less dependent on state information that could get corrupted.
Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 
The following users thanked this post: Siwastaja

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #30 on: August 14, 2022, 09:50:50 am »
When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32 bit processors in C. I was horrified at how little I had to re-learn.
Well, people that are programming microcontrollers in assembly seem to have disappeared during the last decade, so there is progress ;D

But I see your point. I was kind of hoping to see more use of languages like Ada to write software in a less messy, better-controlled way. But the problem is the all-or-nothing approach that is generally followed. For example: Ada needs a large runtime environment which then needs to be ported to every microcontroller before it can be used. But I see the same when it comes to using languages like Python and Lua on microcontrollers. I've looked at various projects but they all go for the all-or-nothing approach where you either write the entire application in Lua / Python or not. C is much less demanding in that respect.

However, every now and then I have a project which would greatly benefit from having the business logic implemented as a script, so I looked into using Lua on a microcontroller -again-. This time with the clear goal that Lua should have a supporting role: C does the heavy lifting and Lua just ties everything together by calling C functions, shoving data around and making decisions. For this purpose I took the emblua project (https://github.com/szieke/embLua) and modified it so it can run a script in parallel with C code without needing an OS. It still needs a bit of testing and a few tweaks (to allow debugging) but I plan to put this on Github when it is finished.

Kinda tempted to learn Rust and see how that goes. Not very well supported on AVR though (just barely, I think? an LLVM outputter?), so that would mean, in addition to learning the whole language, potentially also grappling with compiler bugs, or just poor optimization (granted, not that avr-gcc is a high hurdle to clear).
Rust looks like it is another all-or-nothing approach. IMHO a better approach is a hybrid solution where you can mix another language with C / C++.

Quote
Regarding scripts, there's also MicroPython, too big to put on small parts like 8-bit or entry-level devices, but an STM32F4 will certainly get you there IIRC. Maybe not something you'd want to embed for production, but a handy way to test out a lot of things quickly.
Why wouldn't it be useful for production? Actually one of the goals of using a different language is to get rid of the pitfalls of C that can introduce bugs. IOW: make software more robust. MicroPython and Lua both run in a sandbox (VM). The Lua project I'm working on uses a fixed memory pool as well, so whatever the Lua code does, it cannot interfere with safety-related code.
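
A minimal sketch of that arrangement using the stock Lua C API: lua_newstate() takes a custom allocator working out of a static pool, and lua_register() exposes only vetted C functions to the script. The crude bump allocator and the drive_hardware() call are placeholders, not the actual embLua code:

#include <string.h>
#include "lua.h"
#include "lauxlib.h"

/* Fixed pool: a deliberately dumb bump allocator that never reclaims memory.
 * Good enough to show the idea; a real port would use a proper pool allocator. */
static unsigned char pool[32 * 1024];
static size_t pool_used;

static void *pool_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
{
    (void)ud;
    if (nsize == 0)                         /* "free": nothing to reclaim here  */
        return NULL;
    nsize = (nsize + 7u) & ~(size_t)7u;     /* keep blocks 8-byte aligned       */
    if (ptr != NULL && nsize <= osize)      /* shrink requests stay in place    */
        return ptr;
    if (pool_used + nsize > sizeof pool)    /* pool exhausted: Lua sees OOM     */
        return NULL;
    void *p = pool + pool_used;
    pool_used += nsize;
    if (ptr != NULL)                        /* grow: copy the old contents over */
        memcpy(p, ptr, osize);
    return p;
}

/* The heavy lifting stays in C; the script only decides when to call it. */
static int l_set_output(lua_State *L)
{
    int channel = (int)luaL_checkinteger(L, 1);
    int value   = (int)luaL_checkinteger(L, 2);
    /* drive_hardware(channel, value);  -- hypothetical C-side function */
    (void)channel; (void)value;
    return 0;                               /* no results returned to Lua */
}

lua_State *script_init(void)
{
    lua_State *L = lua_newstate(pool_alloc, NULL);
    if (L != NULL)
        lua_register(L, "set_output", l_set_output);
    return L;                               /* run the script from here, in slices */
}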
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #31 on: August 14, 2022, 09:56:45 am »
But I have not encountered anything like this in real life. Then again, I also realize that sometimes you can't do everything on an MCU, but need a CPLD as a helper, or a full FPGA design. In fact, many VFDs (variable frequency drives) I have torn down seem to use an MCU + CPLD, and I wonder why, because I, at least, don't have any issue doing a VFD as a full-MCU design, and have done many.
I assume the CPLD has some protection logic inside. In the software-controlled SMPS-ish devices I have designed so far, I always have some discrete logic to switch the power stage off when an overcurrent event occurs AND to make sure that the control signals are valid (disallow enabling the high & low side drivers simultaneously). It makes the hardware virtually indestructible.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8167
  • Country: fi
Re: Interrupt routine duration
« Reply #32 on: August 14, 2022, 12:24:39 pm »
But I have not encountered anything like this in real life. Then again, I also realize that sometimes you can't do everything on an MCU, but need a CPLD as a helper, or a full FPGA design. In fact, many VFDs (variable frequency drives) I have torn down seem to use an MCU + CPLD, and I wonder why, because I, at least, don't have any issue doing a VFD as a full-MCU design, and have done many.
I assume the CPLD has some protection logic inside. In the software-controlled SMPS-ish devices I have designed so far, I always have some discrete logic to switch the power stage off when an overcurrent event occurs AND to make sure that the control signals are valid (disallow enabling the high & low side drivers simultaneously). It makes the hardware virtually indestructible.

But this logic is usually available, in hardware, in the MCU peripherals. Of course there is a risk of misconfiguring it, but it's a simple "one line of code during init only", so it's much less likely than messing up the program state during run.
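
For example, on an STM32 with an advanced-control timer, the dead-time insertion and the break (fault) input boil down to a single register write at init; the bit names below are from ST's CMSIS device headers, and the dead-time value is just a placeholder:

#include "stm32f4xx.h"              /* or whichever device header applies */

void pwm_protection_init(void)
{
    /* Dead-time generation stops the high and low side conducting at the same
     * time; the BRK input forces the outputs off on an external fault signal.
     * Configured once, enforced by hardware from then on.                    */
    TIM1->BDTR = TIM_BDTR_BKE       /* enable the break input                 */
               | 72u;               /* DTG[7:0]: dead time, placeholder value */
    /* outputs stay disabled until the application sets TIM_BDTR_MOE          */
}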

I suspect this is pragmatism and cautious design - they use hardware to enforce anything that could cause damage or present a safety hazard.

I kind of understand this, and kind of don't. After all, say in a motor controller, as long as the application MCU can control the speed setpoint, which it definitely does, then all the risk is still there. The CPLD being responsible for protecting the IGBTs from blowing up in case of a software error is much less of a safety concern, as that will only blow a fuse and brick the product, causing downtime. OTOH, commanding the wrong speed, or ignoring an enable input, can actually be pretty dangerous and is well possible from a simple software bug even if the CPLD handles the lowest-level switching.

Hence my own guesstimate is that VFDs which do use CPLDs do so for legacy reasons: they started doing this with earlier MCUs which simply lacked proper motor-control HW (probably in the 1990s). Later, in the 2000s, microcontroller manufacturers started to integrate this functionality.
 

Offline TC

  • Contributor
  • Posts: 40
  • Country: us
Re: Interrupt routine duration
« Reply #33 on: August 14, 2022, 12:51:07 pm »

I kind of understand this, and kind of don't. After all, say in a motor controller, as long as the application MCU can control the speed setpoint, which it definitely does, then all the risk is still there. The CPLD being responsible for protecting the IGBTs from blowing up in case of a software error is much less of a safety concern, as that will only blow a fuse and brick the product, causing downtime. OTOH, commanding the wrong speed, or ignoring an enable input, can actually be pretty dangerous and is well possible from a simple software bug even if the CPLD handles the lowest-level switching.

Hence my own guesstimate is that VFDs which do use CPLDs do so for legacy reasons: they started doing this with earlier MCUs which simply lacked proper motor-control HW (probably in the 1990s). Later, in the 2000s, microcontroller manufacturers started to integrate this functionality.

You don't fully understand requirements for safety in this context. Look up IEC 61508 and related standards (for example, the standard that covers "safe torque off" for motor safety). Things like unexpected startup, emergency stops, etc. are generally encompassed in motor safety.

In IEC 61508 there are both hardware and software requirements. As the level of risk reduction increases (the safety requirements increase), the rigor of the development processes for "safe" hardware and software increases. Traceability of safety requirements, validation, control of the manufacturing process, and control of the entire life-cycle of the device (including any changes or maintenance) are typically required.

So... when people say "safety"... this isn't something that you make "guesstimates" about. If you are developing motor drives, familiarize yourself with safe-torque-off... then you will start to appreciate the simplicity of it, and therefore the improved safety that such a hardware-based preventive measure can provide vs. software.
 
The following users thanked this post: nctnico

Offline TC

  • Contributor
  • Posts: 40
  • Country: us
Re: Interrupt routine duration
« Reply #34 on: August 14, 2022, 01:02:05 pm »
I should have also mentioned IEC 60204, Safety of Machinery... check that out, at least to see what the scope/purpose of the standard is.
 

Offline TC

  • Contributor
  • Posts: 40
  • Country: us
Re: Interrupt routine duration
« Reply #35 on: August 14, 2022, 01:14:10 pm »
RE: hardware vs. software safety...

Safety of software will generally require that the custom code is certified, that the tools used to develop it are certified, that numerous constraints on the development of that software are enforced (like no object-oriented languages, MISRA C, etc.), a safety-certified RTOS, safety-certified hardware, etc.

In contrast, it is much easier to do a fault analysis of something like a CPLD that implements a safe-torque-off function in hardware. Consider also the performance of the safety function (i.e. real-time performance of hardware vs. a bunch of code).

So this is a good example of why hardware-based safety-functions are so important.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8167
  • Country: fi
Re: Interrupt routine duration
« Reply #36 on: August 14, 2022, 03:45:03 pm »
You don't fully understand requirements for safety in this context.

Before judging what I understand and what I do not, maybe read the context and what I actually said?

I was replying to a post which said a CPLD is used for the semiconductor switching to prevent the MCU from blowing up the semiconductors, so that an inexperienced software team can be used. I don't think this is a correct description at all. The safety you describe must extend into the MCU firmware, too, unless we are talking about a completely separate infotainment part, which VFDs do not have.

As configuration such as speed, or the configuration of enable signals, goes through that MCU, the software needs to comply with all the relevant safety standards you discussed. Hence, I don't see any mandate to use a CPLD as long as the MCU provides all the required resources, and they do nowadays. Hardware inside the MCU counts as hardware just as much as a CPLD does.

Hence I vote for "legacy", as I wrote above.

But your reply is typical safety speak: technical questions you are unable to answer are dodged with "you don't understand safety" and by throwing around standard numbers. I have done safety-critical firmware, and oh boy, the biggest challenge is to find the balance between common sense and following the relevant standards. Standards rarely have 1:1 relevance to the actual product, so if they are followed blindly, without understanding, there is a very high risk of reducing safety, sometimes significantly. For me, the most important thing is that I can sleep well at night, and that means "nobody dies", not "if somebody dies, I have a good paper trail".
« Last Edit: August 14, 2022, 03:57:19 pm by Siwastaja »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #37 on: August 14, 2022, 06:37:50 pm »
When I returned to embedded programming after 25 years, I was appalled that little had changed since the early 80s. People still programmed 8/16/32 bit processors in C. I was horrified at how little I had to re-learn.
Well, people that are programming microcontrollers in assembly seem to have disappeared during the last decade, so there is progress ;D

;D indeed :)

Quote
But I see your point. I was kind of hoping to see more use of languages like Ada to write software in a less messy, better-controlled way. But the problem is the all-or-nothing approach that is generally followed. For example: Ada needs a large runtime environment which then needs to be ported to every microcontroller before it can be used. But I see the same when it comes to using languages like Python and Lua on microcontrollers. I've looked at various projects but they all go for the all-or-nothing approach where you either write the entire application in Lua / Python or not. C is much less demanding in that respect.

I enjoy applications that are tied tightly to the hardware and have hard realtime constraints, but which are specified at a much higher level. The interesting points then become hardware/software partitioning with as much in software as possible, plus ensuring that the high level specs are visibly implemented correctly in the application.

If I can arrange that changes to the high level spec are quick and easy to implement, so much the better.

Quote
However, every now and then I have a project which would greatly benefit from having the business logic implemented as a script, so I looked into using Lua on a microcontroller -again-. This time with the clear goal that Lua should have a supporting role: C does the heavy lifting and Lua just ties everything together by calling C functions, shoving data around and making decisions. For this purpose I took the emblua project (https://github.com/szieke/embLua) and modified it so it can run a script in parallel with C code without needing an OS. It still needs a bit of testing and a few tweaks (to allow debugging) but I plan to put this on Github when it is finished.

I haven't done that, but it seems sane and I would do it if the occasion arose.

The polar opposite, creating a DIY scripting language in C - um, not so much :) "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #38 on: August 14, 2022, 06:43:01 pm »
My programs have all basically had empty while(1); loops for years; I do it all in ISRs after init.

I used to be a proponent of co-operative schedulers (no interrupts allowed at all). But with the exception of some regulatory and reliability edge cases, the nested ISR way you mention is generally easier to understand, and can still be designed to pretty high assurance levels.

If the customer doesn't need it to be reliable, then I can deliver something arbitrarily fast, cheap, and soon :)

I've seen nested ISRs become hairy and unpredictable, especially where realtime constraints and long-duration processing are involved.

Short ISRs generating events handled by a cooperative scheduler seems a good middle ground. Such events are processed identically to events generated by applications within the cooperative scheduler.
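
A minimal sketch of that middle ground, assuming a single ISR posts events (with several producers the post would need a short critical section); peripheral details are left out:

#include <stdint.h>

typedef enum { EV_UART_RX, EV_BUTTON, EV_TICK } event_t;

#define QLEN 16u                            /* power of two, small and cheap      */
static volatile event_t q[QLEN];
static volatile uint8_t q_head, q_tail;     /* head: ISR writes, tail: main reads */

static void post_event(event_t e)           /* called from short ISRs             */
{
    uint8_t next = (uint8_t)((q_head + 1u) % QLEN);
    if (next != q_tail) {                   /* on overflow, drop rather than block */
        q[q_head] = e;
        q_head = next;
    }
}

void uart_rx_isr(void)                      /* illustrative ISR: stays short      */
{
    /* read the data register into a buffer, then just post the event */
    post_event(EV_UART_RX);
}

int main(void)
{
    /* init hardware, enable interrupts */
    for (;;) {
        if (q_tail != q_head) {             /* one event at a time, run to completion */
            event_t e = q[q_tail];
            q_tail = (uint8_t)((q_tail + 1u) % QLEN);
            switch (e) {
            case EV_UART_RX: /* feed the protocol FSM   */ break;
            case EV_BUTTON:  /* debounced button action */ break;
            case EV_TICK:    /* periodic housekeeping   */ break;
            }
        }
        /* else: optionally sleep until the next interrupt */
    }
}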
« Last Edit: August 14, 2022, 06:45:22 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #39 on: August 14, 2022, 06:48:36 pm »
Kinda tempted to learn Rust and see how that goes. Not very well supported on AVR though (just barely, I think? an LLVM outputter?), so that would mean, in addition to learning the whole language, potentially also grappling with compiler bugs, or just poor optimization (granted, not that avr-gcc is a high hurdle to clear).

I have similar thoughts. Currently I'm content to watch other people push Rust as hard as possible as a C "replacement" (for want of a better word), to see where and how it is insufficient.

Quote
Regarding scripts, there's also MicroPython, too big to put on small parts like 8-bit or entry-level devices, but an STM32F4 will certainly get you there IIRC. Maybe not something you'd want to embed for production, but a handy way to test out a lot of things quickly.

Quick and dirty is certainly valid in some applications.
« Last Edit: August 14, 2022, 08:03:46 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #40 on: August 14, 2022, 06:59:49 pm »
Rust looks like it is another all-or-nothing approach. IMHO a better approach is a hybrid solution where you can mix another language with C / C++.

I regard C as an all-or-nothing approach, especially when considering hardware as the other half of such a hybrid solution :)

Alternatively, what about a hybrid solution where Java is mixed with C via JNI?

Oooh, tweaking people's tails is fun. I sometimes have fun with softies when having a drink in a pub. I tell them I can't tell the difference between hardware and software, then provide counter-examples to each definition they invent.

Quote
Why wouldn't it be useful for production? Actually one of the goals of using a different language is to get rid of the pitfalls of C that can introduce bugs. IOW: make software more robust. MicroPython and Lua both run in a sandbox (VM). The Lua project I'm working on uses a fixed memory pool as well, so whatever the Lua code does, it cannot interfere with safety-related code.

Yes indeed. I'm still looking for an ideal solution, but progress is being made!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Interrupt routine duration
« Reply #41 on: August 14, 2022, 07:08:45 pm »
Rust looks like it is another all-or-nothing approach. IMHO a better approach is a hybrid solution where you can mix another language with C / C++.

Alternatively, what about a hybrid solution where Java is mixed with C via JNI?
It has been tried already:
https://www.st.com/en/evaluation-tools/stm3220g-java.html
It is obsolete now, and it also seemed to be a costly solution to use.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21657
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Interrupt routine duration
« Reply #42 on: August 14, 2022, 10:37:52 pm »
I've seen nested ISRs become hairy and unpredictable, especially where realtime constraints and long-duration processing are involved.

Like, my examples use relatively few ISRs, and short ones (with Reverb's DSP being the outstanding exception); there are only so many combinations in which they can be overlapped.  (Also, in my XMEGA+ projects, the PMIC/CPUINT prohibits self-interruption, so nesting a given ISR isn't even an option.  I've done it on ATMEGA though.*)  When you have small projects like these, it's quite feasible.

When you don't have small projects, well, the complexity grows exponentially.  It's ever harder to reason about, or test against, let alone organize, or control in any meaningful way!

*Which was almost certainly a bad idea, but it was an early project (back in college!) so it's hard to say exactly how many bugs I wrote into it and where, and if the nested update interrupt was terminally unfixable, or just poorly implemented or guarded.  I certainly wouldn't write it the same way today.  Which, indeed, I didn't: the Reverb project's character-LCD menu system basically represents the buggiest part of that early project (the menu system), but written more-or-less properly. :D


Quote
Short ISRs generating events handled by a cooperative scheduler seems a good middle ground. Such events are processed identically to events generated by applications within the cooperative scheduler.

To put this another way: if you can spare the extra response time (latency, propagation delay, etc.), you can trade priority CPU cycles for memory use.  Compact your received/ready-to-transmit data into buffers (be it a global event queue, or per device/peripheral/interface), and deal with it at a more relaxed pace, in an order you can comfortably reason about.  You can completely avoid interrupting one calculation partway through, and only need to concern yourself with the handoff from one event to the next; each event executes atomically, and the only shared state you need to think about is what's done after each one executes.

There's some abstraction/formatting that's worth thinking about here too, probably.  How do you queue an event -- what is an event?  When it's simple stream data, like a USART, it's just a circular char buffer, offset and length, easy.  But you could have things that are much less straightforward to describe; network packets, obviously, need to be buffered with arbitrary sizes, say with an array of offsets and lengths.  Maybe it should be packed into a stream of enumerated commands (e.g. display lists).  Or processed by OSI layers: maybe protocol-level character data (JSON? XML?) is parsed into objects, structures, pointers, etc., and those more abstract objects are passed to the application layer, and so on.  And maybe you have buffers on each layer, allowing things like resolving out-of-order packets, or merging/splitting requests, etc..
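
For the plain USART case that circular buffer really is tiny; a sketch with uint8_t indices (so the index reads stay atomic even on an 8-bit part; the data-register read is a placeholder):

#include <stdint.h>

#define RXBUF_LEN 256u                 /* matches the natural wrap of uint8_t   */
static volatile uint8_t rxbuf[RXBUF_LEN];
static volatile uint8_t rx_wr, rx_rd;  /* free-running; ISR writes, main reads  */

void usart_rx_isr(void)                /* hook this to the real RX vector       */
{
    uint8_t c = 0;                     /* = read of the UART data register      */
    rxbuf[rx_wr++] = c;                /* overwrites oldest data if main lags   */
}

uint8_t rx_count(void) { return (uint8_t)(rx_wr - rx_rd); }  /* bytes waiting   */

uint8_t rx_get(void)   { return rxbuf[rx_rd++]; }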

And, compare to the traditional JS VM: there's only ever one thread of execution*, and control is transferred to a received event after each function (or the global) finishes.

*Except Workers, but those are a newer feature, hence, "traditional JS"...

You can also figure that, if you don't have enough CPU cycles to handle things via a scheduler, you almost certainly won't have enough with them interrupting each other in (potentially) random order, plus calling overhead. And you have meaningful control over what happens when you do run out of CPU cycles: for example, you can always force a main() cycle every loop, using a round-robin scheduler say; and you can drop certain events by priority, or by limiting overflow of their buffers. Interrupts are mostly a free-for-all, and anyway, platforms usually provide just a few levels of priority, so you have no way to organize more, or make them based on conditions -- it's the same call every time; you'd have to go well out of your way to be able to make further interrupts contingent on available resources.

Which isn't exactly intractable -- a common pattern in ISRs is to finish up by preparing for the next cycle. For example, an ADC ISR advancing to the next MUX channel; or the next-next, even, if the next is already sequenced (i.e., we're processing sample N, and it's already acquiring sample N+1, so we need to set it up to acquire sample N+2). Well, we could add checks for whether resources will be available to handle it next time; and if not, e.g. disable the interrupt (dropping received data) if buffers are full, or if a higher-priority buffer is filling up and will need more CPU cycles committed to it, etc. But then we also need something to re-enable the peripheral once resources are available again, and hopefully we don't have that spread across, you know, main(), and the scheduler, and a timer interrupt, and a half dozen other interrupts..... it can all get very messy very quickly, if you just try to wing it without an overarching structure to plan from.
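
In classic ATmega terms (single-conversion mode, names from avr-libc; the back-pressure rule is deliberately crude), that pattern looks roughly like:

#include <avr/io.h>
#include <avr/interrupt.h>

#define N_CH 4u
static volatile uint16_t sample[N_CH];
static volatile uint8_t  sample_ready;            /* bitmask, cleared by main        */
static volatile uint8_t  cur_ch;                  /* channel of conversion in flight */

ISR(ADC_vect)                                     /* single-conversion mode          */
{
    sample[cur_ch] = ADC;                         /* store the finished result       */
    sample_ready |= (uint8_t)(1u << cur_ch);

    cur_ch = (uint8_t)((cur_ch + 1u) % N_CH);     /* prepare the next cycle...       */
    ADMUX  = (uint8_t)((ADMUX & 0xF0u) | cur_ch);

    if (sample_ready != (uint8_t)((1u << N_CH) - 1u))
        ADCSRA |= (1u << ADSC);                   /* ...but only start it if the     */
                                                  /* main loop has kept up; else the */
                                                  /* main loop restarts the chain    */
}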

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #43 on: August 14, 2022, 11:10:40 pm »
I've seen nested ISRs become hairy and unpredictable, especially where realtime constraints and long-duration processing are involved.

When you don't have small projects, well, the complexity grows exponentially.  It's ever harder to reason about, or test against, let alone organize, or control in any meaningful way!

Small is trivial. Great for beginners, but professionals have to deal with more.

You can't test quality into a design. Tests can only prove the presence of faults, not their absence.

I'm sure you are perfectly aware of that, but it horrifies me every time someone is surprised by that new (to them) concept.

As I taught my daughter, "let's make new mistakes", i.e. think about how X could have failed, and how to avoid that next time.

Quote
Quote
Short ISRs generating events handled by a cooperative scheduler seems a good middle ground. Such events are processed identically to events generated by applications within the cooperative scheduler.

To put this another way: if you can spare the extra response time (latency, propagation delay, etc.), you can trade priority CPU cycles for memory use. 

That consideration is often the key determinant of what is done in hardware, and what in software.

Quote
Compact your received/ready-to-transmit data into buffers (be it a global event queue, or per device/peripheral/interface), and deal with it at a more relaxed pace, in an order you can comfortably reason about.  You can completely avoid interrupting one calculation partway through, and only need to concern yourself with the handoff from one event to the next; each event executes atomically, and the only shared state you need to think about is what's done after each one executes.

Yup. The "half-async half-sync" design pattern is a standard technique that is useful in many applications.

Quote
There's some abstraction/formatting that's worth thinking about here too, probably.  How do you queue an event -- what is an event?  When it's simple stream data, like a USART, it's just a circular char buffer, offset and length, easy.  But you could have things that are much less straightforward to describe; network packets, obviously, need to be buffered with arbitrary sizes, say with an array of offsets and lengths.  Maybe it should be packed into a stream of enumerated commands (e.g. display lists).  Or processed by OSI layers: maybe protocol-level character data (JSON? XML?) is parsed into objects, structures, pointers, etc., and those more abstract objects are passed to the application layer, and so on.  And maybe you have buffers on each layer, allowing things like resolving out-of-order packets, or merging/splitting requests, etc..

Getting the right level/levels of abstraction is absolutely key to an understandable specification and design.

Frequently it is useful to have multiple chained FSMs. In your example, the first FSM would gather bits into a char, then generate a "char received" event. Another FSM would react to char received events, and gather them into a "packet received" event. Another FSM would react to "packet received" events, and generate maybe "high level" events.

Some of those FSMs will be implemented in software, some in hardware; who cares: an FSM is an FSM.

That's nothing novel; the entire telecom system is specified in that way, where the "highest" level events might be "call connected", "call disconnected" or "money run out" events.

Some of the XML stream parsers (i.e. not those that are DOM-based) are based on exactly such event-processing concepts.
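
A sketch of the middle two FSMs in such a chain; the framing (0x7E start byte, then a length byte, then payload) is invented purely for illustration:

#include <stdint.h>

typedef enum { WAIT_START, WAIT_LEN, WAIT_PAYLOAD } pkt_state_t;

static pkt_state_t st = WAIT_START;
static uint8_t pkt[32], pkt_len, pkt_got;

static void handle_packet(const uint8_t *p, uint8_t len);   /* the next FSM up */

/* FSM 1 (the UART hardware) delivers "char received"; this FSM turns a stream
 * of chars into "packet received" events for the layer above.                 */
void on_char_received(uint8_t c)
{
    switch (st) {
    case WAIT_START:
        if (c == 0x7E) st = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (c == 0 || c > sizeof pkt) { st = WAIT_START; break; }
        pkt_len = c; pkt_got = 0; st = WAIT_PAYLOAD;
        break;
    case WAIT_PAYLOAD:
        pkt[pkt_got++] = c;
        if (pkt_got == pkt_len) {           /* "packet received" event          */
            handle_packet(pkt, pkt_len);
            st = WAIT_START;
        }
        break;
    }
}

/* FSM 3: packets in, application-level events out ("connected", and so on).   */
static void handle_packet(const uint8_t *p, uint8_t len)
{
    (void)len;
    switch (p[0]) {                         /* first payload byte = command code */
    case 0x01: /* raise a "connected" event    */ break;
    case 0x02: /* raise a "disconnected" event */ break;
    default:   /* ignore or log unknown codes  */ break;
    }
}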

Quote
And, compare to the traditional JS VM: there's only ever one thread of execution*, and control is transferred to a received event after each function (or the global) finishes.

*Except Workers, but those are a newer feature, hence, "traditional JS"...

I've always run away from anything JavaScript based. I have zero interest in interactive web pages.

Quote
You can also figure that, if you don't have enough CPU cycles to handle things via a scheduler, you almost certainly won't have enough with them interrupting each other in (potentially) random order, plus calling overhead. And you have meaningful control over what happens when you do run out of CPU cycles: for example, you can always force a main() cycle every loop, using a round-robin scheduler say; and you can drop certain events by priority, or by limiting overflow of their buffers. Interrupts are mostly a free-for-all, and anyway, platforms usually provide just a few levels of priority, so you have no way to organize more, or make them based on conditions -- it's the same call every time; you'd have to go well out of your way to be able to make further interrupts contingent on available resources.

Which isn't exactly intractable -- a common pattern in ISRs is to finish up by preparing for the next cycle. For example, an ADC ISR advancing to the next MUX channel; or the next-next, even, if the next is already sequenced (i.e., we're processing sample N, and it's already acquiring sample N+1, so we need to set it up to acquire sample N+2). Well, we could add checks for whether resources will be available to handle it next time; and if not, e.g. disable the interrupt (dropping received data) if buffers are full, or if a higher-priority buffer is filling up and will need more CPU cycles committed to it, etc. But then we also need something to re-enable the peripheral once resources are available again, and hopefully we don't have that spread across, you know, main(), and the scheduler, and a timer interrupt, and a half dozen other interrupts..... it can all get very messy very quickly, if you just try to wing it without an overarching structure to plan from.

In my experience it is best to have only two levels of thread priority and/or interrupt priority: "normal" and "panic". You should be able to do anything with those, and if more are introduced then it is a sign the system needs radical refactoring before the technical debt becomes intractable.

I will listen to other cases, but will ignore claims of "convenience" and demand to be convinced by claims of "necessity".
« Last Edit: August 14, 2022, 11:29:50 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21657
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Interrupt routine duration
« Reply #44 on: August 15, 2022, 02:07:16 am »
You can't test quality into a design. Tests can only prove the presence of faults, not their absence.

I'm sure you are perfectly aware of that, but it horrifies me every time someone is surprised by that new (to them) concept.

Sure you can. That's what they said about semiconductors. The yield might've been shite early on (like the <1% of early JP transistor lines, or certain Intel lines, etc.), but all they needed was a few parts that worked, and refinements to gradually bring that up.

Mind, this is a case where the design is essentially correct, there's just finitely many errors that occur in manufacturing (process impurities, dust, etc.), and they only need to get lucky enough to find one free of defects.

Does that work in software?  Maybe.  But it's worth noting, if nothing else, the meaning behind that statement, and where it may or may not apply.  It was applied erroneously to them, by those righteous hardware reliability types.  We must always be cognizant of the limitations of our knowledge, like this.

So, software.  Well, fundamentally it's a design, not a production process.  So, the above is out (and, to be clear, I'm not trying to force the above meaning into current context!).

And, I know where you're coming from. To be clear: software design is something that -- given adequately comprehensive specifications -- we can prove, perfectly, to work. Not just "beyond a shadow of a doubt", not anything that could be tested (complete test coverage is combinatorial in complexity, it can't be done in general!), but perfect proof.

Assuming all the toolchain and stuff is working correctly, I mean, but a lot of work goes into those, along similar lines, when you're asking for something so reliable.  Provable stacks exist, from transistor level to compiler.

Now, I'm not quite sure if you're talking about formally proven systems here, or more informally, but it's good to know in any case that it's out there, and doable.

AFAIK, provable computing is not very often used, even in high-rel circles, just because it's, I don't know, so much of a pain, so different from regular development processes?

And most of the time, it doesn't matter: if the thing does what it needs to, most of the time, and is reasonably tolerant of nonstandard inputs (as fuzzing can cover -- whether formally, or by the crude efforts of testers), who cares, ship it.  Some customers will eventually hit the edge cases, and maybe you patch those things up on an as-needed basis.  Maybe the thing is still chock full of disastrous bugs (like RCE), but who's ever going to activate them?  And what does it matter if it's not a life-support function, or connected to billions of other nodes (as where viruses can spread)?

So, to be clear, it depends on the level of competency required.  Provable computing is just another option in the toolbox.

Clearly, you're approaching things from a high-rel standpoint.  That's an important responsibility.  But it's also not something that can be applied in general.  At least, not with developers and toolchains where they are right now.

And that's even assuming that every project was specified perfectly to begin with.  Clients or managers come to engineers for solutions, not for mathematical proofs; it's up to the engineers to figure out if proofs are warranted, or if winging it will suffice.  And for 99.9% of everything, the latter is true, and so things are.

And, I also mention testing for a couple reasons:
1. It's the most basic way to figure out how something works (or doesn't).  It can be exceedingly inefficient (trivially, say, how do you test a 1000 year timer?), but to the extent anything can be learned by doing it, in any particular case -- that's at least some information rather than complete ignorance, or guesswork.
2. There's "test driven development".  Which, I don't even have any good ways to do, in most embedded projects; most of the tools I have, don't come with test suites, so I can't even run tests to confirm they work on my platform.  And most embedded platforms have no meaningful way of confirming results, other than what I've put into them (e.g. debug port).  In relatively few cases, I can write a function in C, and test it on the PC -- exhaustively if need be (a lazy, and often infeasible method, but when it is, it's no less effective than direct proof).

TDD can be equivalent to proof, even without exhaustive testing, if all code paths can be interrogated and checked; granted, this is also, in general, not something you're often going to have (the code paths are invisible to the test harness, and highly nonlinear against the input, i.e. how the compiler decides to create branches may vary erratically with how the input is formulated or structured).  Though, this hints at something which can: if we add flags into every code path, and fuzz until we find the extent of which inputs, given other inputs, activate those paths, we can attempt to solve for all of them -- and as a result, know how many we're yet missing.

TDD I think is mainly a level-up in responsibility, where the project is persistent enough to not only be worth writing tests for, but to accumulate tests over time as bugs are found (write a test for it, to prevent it popping up in later refactoring!), while evolving new features -- extending an API while keeping it backwards-compatible, say.  It's far more agile than drawing up a comprehensive provable spec every time, and it's reliable enough for commercial application.  (So, it would figure that I haven't been exposed to it; I simply don't work on a scale where that's useful, besides the practicability issue.)

(And maybe I'm overstating how much trouble it is to do provable computing, or something in the spirit of it, if not formal.  I don't work with it either, and curious readers should read up on it instead.)

And fuzzing, while it's still not going to be exhaustive: anywhere that we can ensure, or at least expect, linearity between ranges (i.e., a contiguous range of inputs does nothing different with respect to execution), we are at least very unlikely to need test coverage there. (Insert Pentium FDIV bug here. :P )

Actually, heh, I wonder how that's affected by branch-free programming.  One would want to include flags equivalent to program branches.  So, it's not something that can be obviously discovered from the machine code, for example; the compiler may emit branchless style instructions instead of literally implementing a control statement.  It might not even branch in the source, if similar [branch-free] techniques are used (like logical and bit operators, and bit or array vectorization tricks).



Quote
Frequently it is useful to have multiple chained FSMs. In your example, the first FSM would gather bits into a char, then generate a "char received" event. Another FSM would react to char received events, and gather them into a "packet received" event. Another FSM would react to "packet received" events, and generate maybe "high level" events.

Some of those FSMs will be implemented in software, some in hardware; who cares: an FSM is an FSM.

That's nothing novel; the entire telecom system is specified in that way, where the "highest" level events might be "call connected", "call disconnected" or "money run out" events.

And, it's no accident that it's reminiscent of (or explicitly referencing..) the OSI model, which came out of telecom (more or less?)!


Quote
Quote
And, compare to the traditional JS VM: there's only ever one thread of execution*, and control is transferred to a received event after each function (or the global) finishes.

*Except Workers, but those are a newer feature, hence, "traditional JS"...

I've always run away from anything JavaScript based. I have zero interest in interactive web pages.

For point of reference, for those more on the software or web dev side of things, you understand. :)


Quote
In my experience it is best to have only two levels of thread priority and/or interrupt priority: "normal" and "panic". You should be able to do anything with those, and if more are introduced then it is a sign the system needs radical refactoring before the technical debt becomes intractable.

I will listen to other cases, but will ignore claims of "convenience" and demand to be convinced by claims of "necessity".

Agreed.

Tim
« Last Edit: August 15, 2022, 02:11:07 am by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #45 on: August 15, 2022, 09:23:41 am »
You can't test quality into a design. Tests can only prove the presence of faults, not their absence.

I'm sure you are perfectly aware of that, but it horrifies me every time someone is surprised by that new (to them) concept.

Sure you can. That's what they said about semiconductors. The yield might've been shite early on (like the <1% of early JP transistor lines, or certain Intel lines, etc.), but all they needed was a few parts that worked, and refinements to gradually bring that up.

Mind, this is a case where the design is essentially correct, there's just finitely many errors that occur in manufacturing (process impurities, dust, etc.), and they only need to get lucky enough to find one free of defects.

And as you realise, that testing is very different. Trying to use design validation/verification tests to detect replication errors is not fruitful!

Quote
Does that work in software?  Maybe.  But it's worth noting, if nothing else, the meaning behind that statement, and where it may or may not apply.  It was applied erroneously to them, by those righteous hardware reliability types.  We must always be cognizant of the limitations of our knowledge, like this.

So, software.  Well, fundamentally it's a design, not a production process.  So, the above is out (and, to be clear, I'm not trying to force the above meaning into current context!).

And, I know where you're coming from. To be clear: software design is something that -- given adequately comprehensive specifications -- we can prove, perfectly, to work. Not just "beyond a shadow of a doubt", not anything that could be tested (complete test coverage is combinatorial in complexity, it can't be done in general!), but perfect proof.

Validation vs verification is relevant at this point!

Quote
Assuming all the toolchain and stuff is working correctly, I mean, but a lot of work goes into those, along similar lines, when you're asking for something so reliable.  Provable stacks exist, from transistor level to compiler.

Now, I'm not quite sure if you're talking about formally proven systems here, or more informally, but it's good to know in any case that it's out there, and doable.

I've always rather liked the strategy Boeing used with the 777. Hatley & Pirbhai's technique was to have one stack for the specifications, using whichever techniques were suitable for the component of the design. It was almost executable (unlike the later UML kitchen sink). They also had a separate, independent stack for the implementation. The only part they thought worth automating was a database linking every specification artefact with a corresponding implementation artefact - which could be hardware/software/mechanical (or in the case of the 737 MAX, maybe wetware!). That ensured that nothing fell down the gaps between the floorboards.

Quote
AFAIK, provable computing is not very often used, even in high-rel circles, just because it's, I don't know, so much of a pain, so different from regular development processes?

It does seem to require specialists with domain knowledge and maths experience, which is usually an empty set!

Back in the 80s I was beguiled by the promise of formal methods. They did have some success (IBM's CICS, Transputer's floating point), but then at least one unpleasant failure (RSRE's VIPER processor, where they almost managed to have formal traceability from instruction set specification to transistors).

Then I realised that even if formal methods could become practical, they would always run into the issue of interacting with things that aren't formally specified.

Quote
And most of the time, it doesn't matter: if the thing does what it needs to, most of the time, and is reasonably tolerant of nonstandard inputs (as fuzzing can cover -- whether formally, or by the crude efforts of testers), who cares, ship it.  Some customers will eventually hit the edge cases, and maybe you patch those things up on an as-needed basis.  Maybe the thing is still chock full of disastrous bugs (like RCE), but who's ever going to activate them?  And what does it matter if it's not a life-support function, or connected to billions of other nodes (as where viruses can spread)?

So, to be clear, it depends on the level of competency required.  Provable computing is just another option in the toolbox.

Clearly, you're approaching things from a high-rel standpoint.  That's an important responsibility.  But it's also not something that can be applied in general.  At least, not with developers and toolchains where they are right now.

And that's even assuming that every project was specified perfectly to begin with.  Clients or managers come to engineers for solutions, not for mathematical proofs; it's up to the engineers to figure out if proofs are warranted, or if winging it will suffice.  And for 99.9% of everything, the latter is true, and so things are.

How systems work is usually easy and boring. It is much more interesting and important to understand how they fail, and how those failures are detected and corrected.

I would be satisfied with people that
  • can spot where salesmen are effectively claiming they've solved the Byzantine Generals or Dining Philosophers problems
  • ensure there are manual correction mechanisms that can overcome inevitable automated failures
  • don't seriously believe that a system is working because none of the unit tests has failed
  • realise that unit tests aren't going to be much help with, say, ACID transactional properties

Everybody has seen the first two; I've seen the last two :(

Quote
And, I also mention testing for a couple reasons:
1. It's the most basic way to figure out how something works (or doesn't). 

Not really w.r.t. "works" (doesn't work, yes) 

The nearest is using subtle failures (especially in wetware) to give glimpses as to how things normally operate.

Quote
It can be exceedingly inefficient (trivially, say, how do you test a 1000 year timer?), but to the extent anything can be learned by doing it, in any particular case -- that's at least some information rather than complete ignorance, or guesswork.
2. There's "test driven development".  Which, I don't even have any good ways to do, in most embedded projects; most of the tools I have, don't come with test suites, so I can't even run tests to confirm they work on my platform.  And most embedded platforms have no meaningful way of confirming results, other than what I've put into them (e.g. debug port).  In relatively few cases, I can write a function in C, and test it on the PC -- exhaustively if need be (a lazy, and often infeasible method, but when it is, it's no less effective than direct proof).

TDD can be equivalent to proof, even without exhaustive testing, if all code paths can be interrogated and checked; granted, this is also, in general, not something you're often going to have (the code paths are invisible to the test harness, and highly nonlinear against the input, i.e. how the compiler decides to create branches may vary erratically with how the input is formulated or structured).  Though, this hints at something which can: if we add flags into every code path, and fuzz until we find the extent of which inputs, given other inputs, activate those paths, we can attempt to solve for all of them -- and as a result, know how many we're yet missing.

In theory, yes. In practice, not really. There is neither the time nor expertise to create decent tests that will detect flaws. At best there will be "happy days" (?daze?) unit tests, and idiotically trivial unit tests that give an appearance of code coverage.

Whether or not the unit tests are sufficient to discover edge-case errors is rarely explored, since it is perceived as too expensive.

Quote
TDD I think is mainly a level-up in responsibility, where the project is persistent enough to not only be worth writing tests for, but to accumulate tests over time as bugs are found (write a test for it, to prevent it popping up in later refactoring!), while evolving new features -- extending an API while keeping it backwards-compatible, say.  It's far more agile than drawing up a comprehensive provable spec every time, and it's reliable enough for commercial application.  (So, it would figure that I haven't been exposed to it; I simply don't work on a scale where that's useful, besides the practicability issue.)

TDD is a useful organisational tool to break atherosclerotic waterfall processes. Too often converts then have a pseudo-religious faith in TDD's powers, and think it is sufficient. It isn't, of course.

TDD is highly beneficial, but neither necessary nor sufficient.

Quote
(And maybe I'm overstating how much trouble it is to do provable computing, or something in the spirit of it, if not formal.  I don't work with it either, and curious readers should read up on it instead.)

Having observed, from a distance, people paid to explore the possibilities - you're not overstating it.

Quote
And fuzzing, while it's still not going to be exhaustive; anywhere that we can ensure, or at least expect, linearity between ranges (i.e., a contiguous range of inputs does nothing different with respect to execution), we are at least very unlikely to need test coverage there.  (Insert Pentium FDIV bug here. :P )

Actually, heh, I wonder how that's affected by branch-free programming.  One would want to include flags equivalent to program branches.  So, it's not something that can be obviously discovered from the machine code, for example; the compiler may emit branchless style instructions instead of literally implementing a control statement.  It might not even branch in the source, if similar [branch-free] techniques are used (like logical and bit operators, and bit or array vectorization tricks).

Fuzzing, in any of its forms, is a useful extra tool in the armoury. Especially where it is able to spot cracks between organisational boundaries, or grossly naive programming presumptions.
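
As a rough illustration of the path-flag idea in the quote above (all names invented, and rand() standing in for a real fuzzer): the logical paths are flagged at source level, so the instrumentation survives even when the compiler, or the programmer, makes the machine code branch-free.

Code: [Select]
/* Sketch of "flag every code path and fuzz until all flags are hit".
 * Names are illustrative, not from any real harness. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { PATH_CLAMP_LOW, PATH_CLAMP_HIGH, PATH_PASSTHROUGH, PATH_COUNT };
static uint32_t path_hits[PATH_COUNT];

#define HIT(p) (path_hits[(p)]++)

/* Branchy version: the paths are visible as control flow. */
static int32_t clamp_branchy(int32_t x, int32_t lo, int32_t hi)
{
    if (x < lo) { HIT(PATH_CLAMP_LOW);  return lo; }
    if (x > hi) { HIT(PATH_CLAMP_HIGH); return hi; }
    HIT(PATH_PASSTHROUGH);
    return x;
}

/* Branch-free-ish version: the flag updates themselves use no branches,
 * and compilers frequently turn the selection into cmov/select, yet the
 * logical paths are still counted at source level. */
static int32_t clamp_branchless(int32_t x, int32_t lo, int32_t hi)
{
    int below = x < lo;
    int above = x > hi;
    path_hits[PATH_CLAMP_LOW]   += (uint32_t)below;
    path_hits[PATH_CLAMP_HIGH]  += (uint32_t)(above & !below);
    path_hits[PATH_PASSTHROUGH] += (uint32_t)(!below & !above);
    return below ? lo : (above ? hi : x);
}

int main(void)
{
    /* Crude fuzz loop: random inputs until every path has been seen. */
    srand(1);
    for (int i = 0; i < 10000; i++) {
        int32_t x = (int32_t)(rand() % 300) - 150;
        (void)clamp_branchy(x, -100, 100);
        (void)clamp_branchless(x, -100, 100);
    }
    for (int p = 0; p < PATH_COUNT; p++)
        printf("path %d hit %lu times\n", p, (unsigned long)path_hits[p]);
    return 0;
}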

<snipped points of violent agreement>
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline uer166

  • Frequent Contributor
  • **
  • Posts: 888
  • Country: us
Re: Interrupt routine duration
« Reply #46 on: August 15, 2022, 05:37:49 pm »
My programs have all basically had empty while(1); loops for years, I do all in ISRs after init.

I used to be a proponent for co-operative schedulers (no interrupts allowed at all). But with the exception of some regulatory and reliability edge cases, the nested ISR way you mention is generally easier to understand, and can still be designed to pretty high assurance levels.

If the customer doesn't need it to be reliable, then I can deliver something arbitrarily fast, cheap, and soon :)

I've seen nested ISRs become hairy and unpredictable, especially where realtime constraints and long-duration processing is involved.

Short ISRs generating events handled by a cooperative scheduler seems a good middle ground. Such events are processed identically to events generated by applications within the cooperative scheduler.

The issue with this is that you've erased most of the guarantees a "true" co-op scheduler provides once you add an interrupt. Namely stuff like:
  • No issues with data consistency. This means no physical way to get corrupt data due to non-atomicity
  • Fully deterministic execution: you can always say that at time T+<arbitrary ms>, you're executing task X
  • Worst case analysis triviality given some constraints
  • Easy runtime checking of health, such as a baked-in task sequence check, deadline check, etc etc

The problem with true co-op of course is the massive headache that it causes when you try to implement complex stuff: everything needs to be synchronized to the tick rate, no task can take longer to execute than the tick rate, total ban on interrupts, etc. Lots of fast stuff needs to be delegated to pure hardware. A lot can be done in this architecture, but even stuff like life-saving UL991/UL1998 GFCI equipment doesn't need this kind of assurance. Maybe a jet turbine FADEC is a different situation of course..
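
For what it's worth, the skeleton of such a time-triggered loop is tiny; the pain is in living within it. A minimal sketch follows (the timer flag and task bodies are placeholders, and a real system adds the sequence/deadline checks listed above):

Code: [Select]
/* Minimal time-triggered co-operative scheduler sketch: no interrupts at
 * all, everything synchronised to one tick, every task obliged to finish
 * well inside that tick.  TIMER_OVERFLOW_FLAG stands in for whatever the
 * target's timer status bit looks like; task bodies are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_TASKS 3u

typedef struct {
    void   (*run)(void);
    uint32_t period;   /* in ticks */
    uint32_t offset;   /* phase, to spread load across ticks */
} task_t;

static void task_adc(void)  { /* read ADC result, filter      */ }
static void task_ctrl(void) { /* run control law              */ }
static void task_comm(void) { /* service UART FIFO            */ }

static const task_t tasks[NUM_TASKS] = {
    { task_adc,   1u, 0u },   /* every tick            */
    { task_ctrl,  1u, 0u },   /* every tick, after ADC */
    { task_comm, 10u, 3u },   /* every 10th tick       */
};

/* Placeholder for the hardware timer's overflow status bit. */
static volatile bool TIMER_OVERFLOW_FLAG;

int main(void)
{
    uint32_t tick = 0;

    for (;;) {
        while (!TIMER_OVERFLOW_FLAG) { /* busy-wait for the tick, no ISR */ }
        TIMER_OVERFLOW_FLAG = false;

        for (uint32_t i = 0; i < NUM_TASKS; i++) {
            if ((tick + tasks[i].offset) % tasks[i].period == 0u) {
                tasks[i].run();
            }
        }

        /* Overrun check: if the next tick fired while tasks were still
         * running, the schedule was violated. */
        if (TIMER_OVERFLOW_FLAG) {
            for (;;) { /* latch outputs safe, wait for watchdog */ }
        }
        tick++;
    }
}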
 

Offline uer166

  • Frequent Contributor
  • **
  • Posts: 888
  • Country: us
Re: Interrupt routine duration
« Reply #47 on: August 15, 2022, 05:50:31 pm »
RE: hardware vs. software safety...

Quote
Safety of software will generally require that the custom code is certified, that the tools that are used to develop it are certified, that numerous constraints for the development of that software are enforced (like no object-oriented languages, MISRA C, etc.), safety-certified RTOS, safety-certified hardware, etc.

Your code of course needs to be certified, but the tools generally don't need to be. It's perfectly fine to use a specific GCC release in 99.9% of cases, and stick with it. The exact same is true of the place/route tools for the CPLD: I bet you'd use the normal vendor tools to convert your VHDL into the CPLD config. Of course if you switch the GCC version + recompile, and the issued binary is different, you need to re-certify and get the notified bodies involved, but like, just don't change the code..

Quote
In contrast, it is much easier to do a fault analysis of something like a CPLD that implements a safe-torque off function in hardware. Consider also the performance of the safety function (i.e. real-time performance of hardware vs. a bunch of code).

So this is a good example of why hardware-based safety-functions are so important.

Is it? Modern MCUs are more ASICs than MCUs. There is plenty of hardware in the MCU to do a guaranteed shutdown to end up in a regulatory-defined risk-averse state. I've done UL2231 GFCI systems for shock protection implemented almost 100% in STM32G4 firmware and internal peripherals, since there's really no way to do it in pure hardware anymore. It used to be just some opamps and comparators, but times are changing: the new requirements are much too tight and complex nowadays.
 
The following users thanked this post: Siwastaja

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19468
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Interrupt routine duration
« Reply #48 on: August 15, 2022, 06:38:27 pm »
My programs have all basically had empty while(1); loops for years, I do all in ISRs after init.

I used to be a proponent for co-operative schedulers (no interrupts allowed at all). But with the exception of some regulatory and reliability edge cases, the nested ISR way you mention is generally easier to understand, and can still be designed to pretty high assurance levels.

If the customer doesn't need it to be reliable, then I can deliver something arbitrarily fast, cheap, and soon :)

I've seen nested ISRs become hairy and unpredictable, especially where realtime constraints and long-duration processing is involved.

Short ISRs generating events handled by a cooperative scheduler seems a good middle ground. Such events are processed identically to events generated by applications within the cooperative scheduler.

The issue with this is that you've erased most of the guarantees a "true" co-op scheduler provides once you add an interrupt. Namely stuff like:
  • No issues with data consistency. This means no physical way to get corrupt data due to non-atomicity
  • Fully deterministic execution: you can always say that at time T+<arbitrary ms>, you're executing task X
  • Worst case analysis triviality given some constraints
  • Easy runtime checking of health, such as a baked-in task sequence check, deadline check, etc etc

The problem with true co-op of course is the massive headache that it causes when you try to implement complex stuff: everything needs to be synchronized to the tick rate, no task can take longer to execute than the tick rate, total ban on interrupts, etc. Lots of fast stuff needs to be delegated to pure hardware. A lot can be done in this architecture, but even stuff like life-saving UL991/UL1998 GFCI equipment doesn't need this kind of assurance. Maybe a jet turbine FADEC is a different situation of course..

None of those points is useful; they are all true in any embedded realtime system! (Exception: xCORE+xC systems with their (regrettably) unique architecture concepts and design-time guarantees. See my other posts for those!)

The point is to ensure the ISRs merely read the peripheral and atomically insert a message in a queue; that queue is atomically read by the cooperative scheduler whenever it chooses what to do next.

Now I know you can't do such atomic actions in most C standards. In that case assembler is required, but that is eminently tractable since it executes only in a few well-designated places.
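
Where a C11 toolchain is available, <stdatomic.h> can express that queue portably; with only C99, the two index updates below are exactly the few well-designated places that want the assembler (or intrinsic) shim. A minimal single-producer/single-consumer sketch, names invented:

Code: [Select]
/* One ISR produces, the co-operative scheduler consumes.  Ring length is a
 * power of two; head/tail are free-running and wrap naturally. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define QLEN 16u                       /* must be a power of two */

typedef struct { uint8_t type; uint8_t payload; } event_t;

static event_t     q_buf[QLEN];
static atomic_uint q_head;             /* written only by the ISR       */
static atomic_uint q_tail;             /* written only by the scheduler */

/* Called from the ISR: read the peripheral, post the event, return. */
bool event_post(event_t ev)
{
    unsigned head = atomic_load_explicit(&q_head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q_tail, memory_order_acquire);
    if (head - tail == QLEN) return false;        /* full: count and drop */
    q_buf[head % QLEN] = ev;
    atomic_store_explicit(&q_head, head + 1u, memory_order_release);
    return true;
}

/* Called from the scheduler whenever it chooses what to do next. */
bool event_get(event_t *ev)
{
    unsigned tail = atomic_load_explicit(&q_tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q_head, memory_order_acquire);
    if (head == tail) return false;               /* nothing pending */
    *ev = q_buf[tail % QLEN];
    atomic_store_explicit(&q_tail, tail + 1u, memory_order_release);
    return true;
}

One queue per interrupt source keeps the single-producer assumption honest; several ISRs posting into one queue would need more care than this.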

See my other posts about interrupt and task priorities; there should be two of each: normal and panic.

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline uer166

  • Frequent Contributor
  • **
  • Posts: 888
  • Country: us
Re: Interrupt routine duration
« Reply #49 on: August 15, 2022, 07:03:46 pm »
None of those points is useful; they are all true in any embedded realtime system!

Huh? How are they not useful if they can create a provably correct realtime system? None of those points is true normally: you don't get any of those guarantees in a run-of-the-mill embedded system. With time-triggered co-op you're getting true determinism at the expense of it being an absolute dog to design for and being generally inflexible/fragile, but that is a trade-off you might want to make in some contexts.

Think about it: as you execute your code in a co-op scheduler, there is nothing that can interrupt your instructions, change control flow, or modify any state. This reduces the overall state space of your system by many orders of magnitude. As soon as you have even one interrupt, your tasks can get interrupted at any point in control flow, and all those guarantees go out the window. TT co-op more or less erases a very large subset of hard-to-reproduce bugs, which means you don't need to mitigate them or deal with them in any way.

Now is that something you'd do as a matter of course? Hell no, it's in a similar category to provably correct, fully statically analyzed systems: it's limiting, difficult, slow, sub-optimal in terms of resource use, and inflexible. P.s.: quit shilling for xCORE, I've read enough about it and it's uninteresting to me and probably to most people on these forums.
 
The following users thanked this post: nctnico, Siwastaja

