Author Topic: How much code in an ISR is too much.  (Read 10603 times)

0 Members and 1 Guest are viewing this topic.

Offline blewisjr

  • Frequent Contributor
  • **
  • Posts: 301
How much code in an ISR is too much.
« on: May 19, 2013, 12:55:53 am »
Hello Again

Time for another awkward newbie question.

First, keep in mind the CPU clock is running at 1 MHz!

I am working on an alarm clock project, as mentioned in my previous post.  Right now I have a working clock.  The clock updates from a hard-coded value for now, and it updates great.  I am running it off a 16-bit timer, currently interrupting every 4 ms to reduce display flicker when all 4 digits of the 7-segment display are lit.  Every 250 interrupts = 1 second, which allows me to update the time on an exact interval.  Now I am moving on to hooking up and coding the push buttons and switches.  I have 4 push buttons and 1 sliding switch.  2 push buttons will be for setting the time.  Another push button will toggle setting the alarm value and show the alarm time when held down.  The last button is for snooze, and the switch turns alarm activation on or off.

The push buttons need to be de-bounced; as for the slider switch, I don't think it needs a de-bounce, as it is either on or off.  All are set to active low, as the pins have internal pull-ups.

This brings about my question.  I only have one 16-bit timer, and the typical de-bounce time is about 10 ms.  My 16-bit timer ISR has its hands full managing the time, the display, and the alarm itself: the display updates every 4 ms, and the time handling runs every 1 second to update the seconds.  Soon it will also handle the alarm checking mentioned above.  Essentially the ISR has varying execution times depending on what needs to be done at what point.

1. So, given that the ISR is already running at a nice interval, would it be possible to squeeze in the de-bouncing code every 12 ms?
2. Or should I play it safe and do the extra work needed to keep track of 10 ms in a separate 8-bit timer through an overflow interrupt?
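The idea in question 1 can be sketched like this; all names are illustrative, and the sketch assumes the existing 4 ms tick. The buttons are sampled on every third tick (12 ms), and a state change is accepted only when two successive samples agree:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch: piggyback de-bounce sampling on the existing 4 ms
 * timer tick.  Every 3rd tick (12 ms) the raw button state is sampled; a
 * state is accepted only when two successive 12 ms samples agree. */

static uint8_t tick_count = 0;      /* counts 4 ms ticks, 0..2 */
static uint8_t last_sample = 0xFF;  /* active-low: 0xFF = nothing pressed */
static uint8_t debounced  = 0xFF;   /* agreed-upon stable state */

/* Called from the 4 ms timer ISR with the raw (active-low) port reading.
 * Returns true when this tick was a 12 ms sampling point. */
bool debounce_tick(uint8_t raw_port)
{
    if (++tick_count < 3)
        return false;               /* not a 12 ms boundary yet */
    tick_count = 0;
    if (raw_port == last_sample)    /* two matching 12 ms samples: stable */
        debounced = raw_port;
    last_sample = raw_port;
    return true;
}

uint8_t debounced_state(void) { return debounced; }
```

This keeps the extra ISR work down to a compare and a couple of assignments per tick.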
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 3972
  • Country: ro
Re: How much code in an ISR is too much.
« Reply #1 on: May 19, 2013, 01:13:20 am »
I'm not sure about how much code should be in an ISR, but considering you run this thing at 1 MHz, wouldn't adding a capacitor on each button be easier than doing the debouncing in software?
Here's two examples:

the easy one using only a capacitor: http://www.all-electric.com/schematic/debounce.htm
and the variant where you go nuts with Schmitt triggers, a charging diode and resistor, or custom ICs: http://www.labbookpages.co.uk/electronics/debounce.html

The simple cap option works fine; it worked OK for me with 0.47 uF - 1 uF ceramic capacitors, connected directly to the PIC.

Here are a couple of good videos on this hardware debouncing stuff:




 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2038
  • Country: au
Re: How much code in an ISR is too much.
« Reply #2 on: May 19, 2013, 01:19:29 am »
Take a copy of the 8-bit timer at the start of the ISR and again at the end. If the elapsed time is greater than the smallest interrupt interval, you need to trim the ISR code.
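That measurement can be sketched as below; the timer-register read itself (e.g. TCNT0 on an AVR) is left out as a placeholder, so the helper just takes the two snapshots. Unsigned subtraction gives the right answer even when the free-running counter wraps once between the two reads:

```c
#include <stdint.h>

/* Sketch: snapshot a free-running 8-bit timer at ISR entry and exit and
 * compute the elapsed count.  The actual register read (e.g. TCNT0) is
 * assumed to happen at the call sites. */
uint8_t isr_elapsed(uint8_t entry, uint8_t exit_)
{
    /* unsigned subtraction handles one wrap of the 8-bit counter */
    return (uint8_t)(exit_ - entry);
}
```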
 

Offline blewisjr

  • Frequent Contributor
  • **
  • Posts: 301
Re: How much code in an ISR is too much.
« Reply #3 on: May 19, 2013, 01:27:37 am »
Quote from: mariush on May 19, 2013, 01:13:20 am
I'm not sure about how much code should be in an ISR, but considering you run this thing at 1 MHz, wouldn't adding a capacitor on each button be easier than doing the debouncing in software?
[...]

Nice, I will check out these videos later; if I can de-bounce the push buttons with a cap, that will save some code.  As for the slider, am I correct that it doesn't need a de-bounce, since there is no spring action, just completing the circuit contact?

AlfBaz: great idea, that seems like a nice logical way to ensure my ISR is not overloaded.  I don't think it is at the moment, because everything is happening right on the dot according to my scope debugging.
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 3972
  • Country: ro
Re: How much code in an ISR is too much.
« Reply #4 on: May 19, 2013, 01:45:25 am »
Sliders have debouncing issues too, or two positions could be on at the same time:



But considering you'd have two inputs to the microcontroller and one will always be the opposite of the other, you could just note when the pin data changes and then keep reading it several times in software for a millisecond or so to let the switch settle; that kind of debouncing is much easier (fewer instructions).
Capacitors on each slide position would also work (the capacitors would charge for longer than the time the slider needs to move completely to one contact, in case it touches two at the same time).
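The settle-by-rereading idea might be sketched like this (names and counts are illustrative): scan the sampled pin readings and accept a state only after n identical reads in a row.

```c
#include <stdint.h>

/* Sketch: accept a state only after n identical consecutive samples.
 * Returns the sample index at which the state settled, or -1 if it
 * never did within the supplied reads. */
int settle_index(const uint8_t *reads, int len, uint8_t want, int n)
{
    int same = 0;
    for (int i = 0; i < len; i++) {
        if (reads[i] == want) {
            if (++same >= n)
                return i;           /* n matching reads in a row: settled */
        } else {
            same = 0;               /* bounce seen: start counting again */
        }
    }
    return -1;
}
```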
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2094
Re: How much code in an ISR is too much.
« Reply #5 on: May 19, 2013, 01:46:06 am »
Quote from: blewisjr
I don't think it should be at the moment because everything is happening right on the dot according to my scope debugging.

If you have a scope, set a pin at the start of the interrupt and clear it at the end; it is very easy to see the interrupt run time and how much you have spare.

As for how much code, there is no reason not to have a lot of code in the interrupt. If that is the code you need to run, it has to run somewhere. The limit is when you miss the next interrupt. Even then, as long as you don't miss it completely, handling an interrupt a bit late now and then might not be much of an issue - maybe a bit of flicker if you are multiplexing the display in it.
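The pin-flip trick can be sketched like this; PORT_SIM and DEBUG_PIN are simulated stand-ins for a real GPIO output register and a spare pin, so the sketch stays runnable off-target:

```c
#include <stdint.h>

/* Sketch: set a debug pin at ISR entry, clear it at exit; the high pulse
 * width on the scope is the ISR run time.  PORT_SIM and DEBUG_PIN stand
 * in for the real port register and bit of your MCU. */

#define DEBUG_PIN (1u << 5)
static volatile uint8_t PORT_SIM = 0;
static uint32_t pulse_count = 0;    /* complete pulses emitted so far */

void timer_isr(void)
{
    PORT_SIM |= DEBUG_PIN;           /* pin high: scope pulse starts */
    /* ... the real ISR work goes here (timekeeping, display refresh) ... */
    PORT_SIM &= (uint8_t)~DEBUG_PIN; /* pin low: pulse width = ISR run time */
    pulse_count++;
}
```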

 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2038
  • Country: au
Re: How much code in an ISR is too much.
« Reply #6 on: May 19, 2013, 01:49:21 am »
Quote from: blewisjr
As for the slider am I correct in that not needing a de-bounce since there is no spring action just completing the circuit contact?
While the actuator may be of the sliding type, are you sure the copper is sliding too? I guess if it is bouncing your code may toggle state but will settle on the final value; it depends on whether this is important to your application. To see if it bounces, do a single-shot capture with your scope.
 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2038
  • Country: au
Re: How much code in an ISR is too much.
« Reply #7 on: May 19, 2013, 01:54:31 am »
Quote from: Rufus
The limit would be when you miss the next interrupt. Even then as long as you don't miss it completely handling an interrupt a bit late now and then might not be much of an issue - maybe a bit of flicker if you are multiplexing the display in it.
To avoid missing an interrupt, make sure you clear the interrupt flag (IF) at the start. That way, if an event happens while you are in the ISR, it will re-enter on exit. This can be problematic, however, as you may never get back to main.
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 892
  • Country: us
Re: How much code in an ISR is too much.
« Reply #8 on: May 19, 2013, 02:12:19 am »
How much code in an ISR is too much?
Short answer: any more than the minimum necessary to record the state change.

Generally speaking, you want to keep your ISRs short and sweet. Letting them run on risks blocking other interrupts and causing other bad behavior. It's best if ISRs run deterministically (i.e., always take the same amount of time/cycles to complete). Avoid things like dynamic memory allocation and spin loops if possible.

For example, suppose you've got an ISR that reads characters from a UART. When you see an end-of-line, you want to do some processing. You could do the processing while still in the interrupt context, but a better idea would be to set a flag saying an EOL has been seen and return from the ISR.

Elsewhere, you've got a loop of some sort that checks the status of the flag and does the processing when the flag is set. You'll need some synchronization primitives to make this work, but you'll end up with a more predictable system.
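A rough sketch of the pattern just described (buffer size and names are invented for illustration): the ISR only stores the byte and raises a flag on end-of-line; the main loop collects the finished line.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_MAX 64
static char line_buf[LINE_MAX];
static volatile uint8_t line_len = 0;
static volatile bool eol_seen = false;

/* Called from the UART receive interrupt with the incoming byte:
 * just record it and note the end-of-line event, then return. */
void uart_rx_isr(char c)
{
    if (c == '\n') {
        eol_seen = true;            /* record the event, nothing more */
    } else if (line_len < LINE_MAX - 1) {
        line_buf[line_len++] = c;
    }
}

/* Polled from the main loop; returns true once per complete line and
 * copies it (NUL-terminated) into dst. */
bool line_ready(char *dst)
{
    if (!eol_seen)
        return false;
    line_buf[line_len] = '\0';
    for (uint8_t i = 0; i <= line_len; i++)
        dst[i] = line_buf[i];
    line_len = 0;
    eol_seen = false;               /* re-arm for the next line */
    return true;
}
```

On real hardware the flag and length need the same synchronization care andyturk mentions (they are shared between interrupt and foreground context).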
 

Offline blewisjr

  • Frequent Contributor
  • **
  • Posts: 301
Re: How much code in an ISR is too much.
« Reply #9 on: May 19, 2013, 02:20:58 am »
Ultimately I could eliminate the slider with another button that sets a enabled flag.  De-bounce with a cap on all the buttons and then execute a pin change interrupt that filters the processing to when the pin is driven low.  This way the interrupts for the button presses are instant.  The flag for the alarm enabled button would allow me to prevent it from disabling when the button goes high.  I can do the same for changing the time / alarm time.  Thanks for all the tips I definitely feel I can pull it off without missing an all important timing interrupt.  After all it is a clock and it needs to stay in synch.
 

Offline Psi

  • Super Contributor
  • ***
  • Posts: 7619
  • Country: nz
Re: How much code in an ISR is too much.
« Reply #10 on: May 19, 2013, 04:36:47 am »
Yep, interrupts are for setting flags/counters and running commands that are time-sensitive down to near CPU-clock level.
If it can wait, then put it in the main loop.
Anything to do with buttons (that a human interacts with) is super slow from the MCU's perspective.

By using multiple flags in the ISR you can have many different things in void main which are executed at different times.
You can even use one flag for multiple events when dealing with things that aren't start-time critical like LED flashing.
De-bounce, however, is start-time critical: you need the count to start when the button is pressed/released, not whenever the next flag toggle happens to occur. So each button needs its own flag variable.

A common way to use the flag variable is to have
0 = Idle state (nothing happens, the ISR doesn't count).
1 = First count (this gets set manually to start the process; the ISR increments any value except 0 up to some predefined value and then sets it back to zero).
X = Any other value means timing is in progress.

So from your main code you set it to 1 and when it's back to 0 the predefined time has expired.
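The scheme above can be sketched in C; the names and the tick budget (DEBOUNCE_TICKS) are assumptions, chosen as 3 ticks of a 4 ms interrupt for roughly a 12 ms delay:

```c
#include <stdint.h>

#define DEBOUNCE_TICKS 3            /* e.g. 3 x 4 ms ISR ticks ~ 12 ms */
static volatile uint8_t debounce_cnt = 0;

/* Called once per timer interrupt. */
void debounce_isr_tick(void)
{
    if (debounce_cnt != 0) {        /* 0 = idle, the ISR doesn't count */
        if (++debounce_cnt > DEBOUNCE_TICKS)
            debounce_cnt = 0;       /* delay complete: back to 0 */
    }
}

/* From main: set to 1 to start, poll for 0 to know the time expired. */
void debounce_start(void) { debounce_cnt = 1; }
int  debounce_done(void)  { return debounce_cnt == 0; }
```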

Here's a similar example I wrote a while back for the wiki:
http://www.eevblog.com/wiki/index.php?title=Embedded_Programming_Tips_and_Tricks_for_Beginners#Repeating_Code_Using_Timer_Overflow_Interrupts

A slightly different approach is to keep a variable counting forever inside the ISR (it will eventually wrap around and start again).
You can use this to get timestamps for various events. The advantage is that one such variable can control as many events as you like in void main. The disadvantage is you need to handle the possible occurrence of a wrap-around in the middle of your timestamp.
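The wrap-around handling is easiest with unsigned arithmetic; a one-line sketch, assuming a 16-bit tick counter:

```c
#include <stdint.h>

/* Sketch: with an unsigned free-running counter, (now - then) gives the
 * correct elapsed ticks even when the counter has wrapped around once
 * between the two reads (modulo-65536 arithmetic). */
uint16_t ticks_elapsed(uint16_t then, uint16_t now)
{
    return (uint16_t)(now - then);
}
```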
« Last Edit: May 19, 2013, 05:03:00 am by Psi »
Greek letter 'Psi' (not Pounds per Square Inch)
 

Offline amyk

  • Super Contributor
  • ***
  • Posts: 6848
Re: How much code in an ISR is too much.
« Reply #11 on: May 19, 2013, 11:05:26 am »
I agree with the "it's too much when it takes too long" statement; if you can fit all the processing within the ISR's timing constraints then it makes sense to do so rather than contort the control flow and add complexity.
 

Offline Psi

  • Super Contributor
  • ***
  • Posts: 7619
  • Country: nz
Re: How much code in an ISR is too much.
« Reply #12 on: May 19, 2013, 11:16:01 am »
Quote from: amyk
I agree with the "it's too much when it takes too long" statement; if you can fit all the processing within the ISR's timing constraints then it makes sense to do so rather than contort the control flow and add complexity.

While one ISR is running, others are blocked. So other interrupts that are supposed to do time-critical tasks don't occur exactly when they should. That's why you really want them as short as possible.

Of course, things get more complicated if the MCU has an advanced interrupt controller.
But you still want to keep ISRs short so they don't delay others of the same or lower priority.
« Last Edit: May 19, 2013, 11:25:12 am by Psi »
Greek letter 'Psi' (not Pounds per Square Inch)
 

Offline Paul Price

  • Super Contributor
  • ***
  • Posts: 1419
Re: How much code in an ISR is too much.
« Reply #13 on: May 19, 2013, 12:04:43 pm »
The idea behind an MCU is to get it to monitor, respond to, and always watch all changing events in the program logic and circuitry, and to respond as quickly as possible. Only the ISR can do that, if it is always checking what needs attention and what needs to be done.

The answer: as much ISR code as possible. Most professional coders put all the code into the ISR and just a single while(A==A);
statement in main().


How much is too much?  You have too much code when the ISR takes longer to execute than the ISR calling interval and so never exits. This will cause programmed timed events to be delayed or not serviced at all.

Have the ISR set a pin high on entry and low on exit; you can see the exact duration of the ISR from the pulse width... if you have an oscilloscope. If you never see it overflow the interrupt calling interval, you can add more code.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 19830
  • Country: nl
    • NCT Developments
Re: How much code in an ISR is too much.
« Reply #14 on: May 19, 2013, 12:33:02 pm »
Quote
I don't agree with those recommending doing as much as possible in the ISR. I say handle the interrupt, set a flag to record the event if necessary or send/receive any data, and get the hell out.
Not true. It depends on what you are making. For signal-processing applications I do all the signal processing inside the ISR. In that case the interrupt routine becomes a parallel (higher-priority) process. The overhead of swapping buffers and aligning execution is just too much and prone to errors.

For things that are not time critical I use a timer interrupt which increments a global timer counter (remember to declare it volatile) and use that for delays or time-outs. My software always consists of separate modules. Each module has an init function which is called once at startup and a run function which is called continuously from main().

A run function should execute without any waiting. If there is need for a delay, I sample the global timer counter and later check whether it has been incremented by a certain amount (the amount of delay). For handling events this method requires the use of state machines and a variable which remembers the state. The big advantage is that a lot of modules can perform their tasks without interfering with each other.

In the case of detecting keys I check the state 5 to 10 times per second, and if the key has been pressed for more than 100 ms I declare it pressed. It's a simple counter which counts up when the key is pressed and is reset when the key is released.
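That key counter might be sketched as below; PRESS_SAMPLES and the names are assumptions, chosen so that two samples at roughly 100 ms spacing declare a press:

```c
#include <stdint.h>
#include <stdbool.h>

#define PRESS_SAMPLES 2             /* e.g. 2 samples ~100 ms apart */
static uint8_t key_cnt = 0;

/* Call at each sampling instant with the raw key reading; returns true
 * once the key has been down long enough to count as pressed. */
bool key_pressed(bool raw_down)
{
    if (!raw_down) {
        key_cnt = 0;                /* released: start over */
        return false;
    }
    if (key_cnt < PRESS_SAMPLES)
        key_cnt++;                  /* counts up while the key is down */
    return key_cnt >= PRESS_SAMPLES;
}
```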
« Last Edit: May 19, 2013, 12:35:58 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2094
Re: How much code in an ISR is too much.
« Reply #15 on: May 19, 2013, 02:15:59 pm »
Quote from: Paul Price
The answer, as much ISR code as possible. Most professional coders put all the code into the ISR and just a single While(A==A);
statement in main();

I have more than one commercial embedded system where main() has an endless for loop containing halt or sleep. I have more than a few that don't. It isn't a wrong thing to do, but it's not often the right thing.

For something like an alarm clock, which only needs one timed interrupt to keep time, pace hardware housekeeping tasks, and do pretty trivial processing in response to user switch inputs, it looks like the right thing to do.


 

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2038
  • Country: au
Re: How much code in an ISR is too much.
« Reply #16 on: May 19, 2013, 02:46:03 pm »
Quote
I don't agree with those recommending doing as much as possible in the ISR. I say handle the interrupt, set a flag to record the event if necessary or send/receive any data, and get the hell out.
I often strive for that paradigm when coding ISRs, but setting a flag is most often a waste of time, as the peripheral sets its interrupt flag regardless of whether you have interrupts enabled or not.

In short, if your main code is polling a flag set in your ISR, you are actually wasting the time spent saving context when jumping into it. It may be a different case, however, if your main code is a state machine.
 

Offline ecat

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: gb
Re: How much code in an ISR is too much.
« Reply #17 on: May 19, 2013, 03:18:27 pm »
First, the answer to the original question:
Q) How much code in an ISR is too much?
A) Exactly how much code you choose to put in the ISR is between you, your application, your hardware and your specification. The while( true ) Sleep(); implementation is as valid as forgoing interrupts altogether. As for some sort of limit: if it breaks your application, gets in the way of debugging, or makes the code unreadable, then it's too much code.

In general.
ISRs can be tricky to debug so keeping them simple is a good idea.

ISRs divert flow from the rest of your code so keeping the time spent in an ISR to a minimum is generally a good idea.

ISRs are inefficient when using a high-level language: there are quite a few instructions to be executed between the occurrence of the interrupt signal and the start of your code, and possibly a similar number to be executed when the ISR returns.

Let's look at some PIC16 timer code as an example.

Code: [Select]
void interrupt isr(void)
{
// *************** Your Code **************
    if( PIR1bits.TMR1IF ) {   
        TMR1H = 231;            // 50ms = 59286 @ 1:8 divider
        TMR1L = 150;            // 50ms @ 1:8
........


20:            void interrupt isr(void)
21:            {
0004  00FE     MOVWF 0x7E
0005  0E03     SWAPF STATUS, W
0006  1283     BCF STATUS, 0x5
0007  1303     BCF STATUS, 0x6
0008  00A1     MOVWF 0x21
0009  0804     MOVF FSR, W
000A  00A2     MOVWF 0x22
000B  080A     MOVF PCLATH, W
000C  00A3     MOVWF 0x23
000D  087F     MOVF 0x7F, W
000E  00A4     MOVWF 0x24
000F  118A     BCF PCLATH, 0x3
0010  2EDA     GOTO 0x6DA
0011  158A     BSF PCLATH, 0x3
0012  2C87     GOTO 0x487
// *************** Your Code **************
22:                if( PIR1bits.TMR1IF ) { 
06DA  1C0C     BTFSS PIR1, 0x0
06DB  2EDD     GOTO 0x6DD
06DC  2EDE     GOTO 0x6DE
06DD  2F41     GOTO 0x741
23:                    TMR1H = 231;             // 50ms = 59286 @ 1:8 divider
06DE  30E7     MOVLW 0xE7
06DF  008F     MOVWF TMR1H
24:                    TMR1L = 150;             // 50ms @ 1:8
06E0  3096     MOVLW 0x96
06E1  008E     MOVWF TMR1L


That is an additional fifteen instructions of preamble on every interrupt, plus whatever appears at the end of the ISR. This is not the end of the world, but if all you are doing is setting a flag, that flag = true statement balloons from two instructions to more than seventeen. Worth remembering.

Also note that in the above code the timer counter (TMR1H, TMR1L) reset appears as close to the start as possible; this simplifies the following maths. It most certainly appears before any additional conditional statements. Why?

The timer count begins when the timer counter is set, and the interrupt occurs when the count wraps around to zero; let's call this time Ti. There is also the interrupt latency Tl, the preamble shown above Tp, and the time it takes your code to reach the counter reset Tr.

You want an interrupt time of Ti but you actually get a time of Ti + Tl + Tp + Tr. If this discrepancy is important to you, then your reset counter value can be adjusted: ResetCounterValue = CalculatedResetCounterValue - Tl - Tp - Tr. (For PIC16 at least; your micro may count down instead of up, or reload the counter automatically, which invalidates most of this discussion as it is then immune to all code overhead, or do something entirely unique.)

Now consider the case where the timer counter reset code appears at the bottom of your interrupt code. If the code is free of all conditionals, Tr is a larger value but still constant. If, however, the code contains one or more conditional statements, Tr becomes variable, which is not good if your goal is a precise and repeatable timer interrupt. In the simple implementation of this latter case the best you can hope for is an interrupt time Ti between the end of one interrupt and the start of the next. This may be what you need, especially if the time to execute your code is greater than your interrupt time, but it is not the way to count out absolute time.
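For an up-counting 16-bit timer that interrupts on wrap, the compensated reload can be computed as below. The overhead figure (Tl + Tp + Tr in timer ticks) has to be measured or read from a disassembly, so it is left as a parameter here; with zero overhead and a 6250-tick period this reproduces the 59286 reload used in the listing above.

```c
#include <stdint.h>

/* Sketch: reload value for a 16-bit up-counting timer that interrupts on
 * wrap to zero.  period_ticks is the desired interval in timer ticks;
 * overhead_ticks is the measured Tl + Tp + Tr, added back in so the
 * effective period stays at period_ticks. */
uint16_t timer_reload(uint16_t period_ticks, uint16_t overhead_ticks)
{
    return (uint16_t)(65536UL - period_ticks + overhead_ticks);
}
```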


Quote from: Psi
Here's a similar example I wrote a while back for the wiki
http://www.eevblog.com/wiki/index.php?title=Embedded_Programming_Tips_and_Tricks_for_Beginners#Repeating_Code_Using_Timer_Overflow_Interrupts

Nice work :)

In light of the preceding discussion it's worth noting that there are 100 us between interrupts; the interrupts themselves occur at intervals of (100 + overhead) us, where overhead = the time for 0, 1, 2 or 3 conditional statements to execute plus the fixed code time.

Not a criticism but I would have gone for

Code: [Select]
#define complete 0

#define Restart1000ms  10000
#define Restart100ms    1000
#define Restart10ms      100

volatile uint16_t clk1000ms = Restart1000ms;   // initializing them to the restart value ensures there is an initial delay cycle when
volatile uint16_t clk100ms  = Restart100ms;    // the code first starts; otherwise they would all complete at
volatile uint8_t  clk10ms   = Restart10ms;     // once during MCU power-on, which may not be desirable.

ISR (TIMER0_OVF_vect)
{
// Timer clock is 1 MHz, timer is 8-bit.
// Set the timer register to 156 so it takes 100 timer clocks to overflow.
// This means the interrupt code executes at 1 MHz / 100 = 10000 Hz.
TCNT0 = 156;           // subtract ISR overhead from this value for more precise 100us interrupts

if ( clk1000ms != complete ) {
clk1000ms--;
}
if ( clk100ms != complete  ) {
clk100ms--;
}
if ( clk10ms != complete ) {
clk10ms--;
}
}


void main(void)
{
if ( clk1000ms == complete )  {
clk1000ms = Restart1000ms;

// put code here to run every second
}
........

... and yes, those reset counts may be out by 1.

If you want to get funky:


Code: [Select]
#define complete 0

#define clk1000ms_Restart 10000
#define clk100ms_Restart   1000
#define clk10ms_Restart     100

#define RestartClock( clk ) clk = clk##_Restart

volatile uint16_t clk1000ms = clk1000ms_Restart;   // initializing them to the restart value ensures there is an initial delay cycle when
volatile uint16_t clk100ms  = clk100ms_Restart;    // the code first starts; otherwise they would all complete at
volatile uint8_t  clk10ms   = clk10ms_Restart;     // once during MCU power-on, which may not be desirable.


void main(void)
{
if ( clk1000ms == complete )  {
RestartClock( clk1000ms );

// put code here to run every second
}
........

But fancy defines are often frowned upon these days and funky is not so good for tutorials ;)
« Last Edit: May 19, 2013, 03:26:01 pm by ecat »
 

Offline blewisjr

  • Frequent Contributor
  • **
  • Posts: 301
Re: How much code in an ISR is too much.
« Reply #18 on: May 19, 2013, 04:48:58 pm »
Really, wow, thanks for all the awesome discussion guys - lots of info, and I do agree it depends on what needs to be done when.  So far the project has been a lot of fun, and the best part is that I am learning lots of cool little debug tricks with the scope, which really helps in understanding what is going on.

I did some testing on my ISR and it actually executes very, very fast; until I saw it on the scope I really did not realize how much room there is to sleep.  The ISR is executing so fast there is lots of dead time.  Was really awesome to see that.  If I remember correctly my ISR was running under 100 us, but I forget, as I checked earlier today.  Just seeing the huge gap on the scope between executions was awesome in and of itself.

I will, however, probably be moving the clock over to a PIC.  Right now it is running on an ATmega328P, but I really want to make a few of these, and I only have 1 AVR chip on hand with enough pins to handle the various inputs and outputs; I don't want to add a decoder chip for the display, as that means buying another component.  Either way it should be relatively easy to move over, as it is coded in C, even though I really wanted to use this project to learn ASM simply because the language fascinates me.  Either way I am happy I picked this up as a hobby - have not had this much fun in quite a while.
 

Offline TheRevva

  • Regular Contributor
  • *
  • Posts: 87
Re: How much code in an ISR is too much.
« Reply #19 on: May 19, 2013, 05:21:56 pm »
This is really one of those "How long is a piece of string" type questions and there is no 100% correct or 100% incorrect answer!
It all boils down to YOUR application.
From what I can gather, it sounds like your application is an alarm clock application (You know the type.  The basic "wake me up tomorrow morning at 6AM")
A few years ago my nephew was getting rather interested in electronics and I.T., so I gave him a PIC and told him to go build a clock.
(I was expecting a clone of the traditional alarm clock - I was rather surprised he'd elected to include a GPS time reference, but that's another story)
Anyway, back to the point.
With ANY asynchronous interrupt (event) source, you want to make sure your system spends as little time as possible running the ISR WITH INTERRUPTS STILL DISABLED.
The (obvious?) reason is that you don't want to RISK missing the next interrupt.
In your 'alarm clock' example, I wouldn't think it likely that you'd ever 'run out of time' within an ISR, but once you start working on more complex projects, with significantly more asynchronous interrupt sources, you'll value the lessons learned in writing compact ISRs.
In some of the posts above references were made to 'setting a flag within the ISR'.
This is the classic 'bottom half' interrupt processing.  The ISR itself does the barest minimum to 'log' that the interrupt has occurred and control is returned from the ISR rapidly such that interrupts are re-enabled.  The 'bottom half' handler deals with the (proportionally) more time consuming aspects of processing, but by that time, interrupts are re-enabled in readiness.

You've mentioned that you're 'updating the display' with every timer tick and that these ticks occur every 4 ms (i.e. 250 Hz).
WHY?  Unless your display has resolution down to 0.01 seconds, the vast majority of these 'display updates' won't actually CHANGE anything on the display!
(The typical 'alarm clock' only has 4 digits, and traditionally 'flashes' the colon separator at 2 Hz, which implies an effective update rate 125 times lower than what you seem to be doing)

The next aspect of asynchronous events that's valuable to recognise is that they OFTEN need to be prioritised.
If your code were ever to 'miss' processing a 4 ms tick, it would create significant issues that become incrementally more significant.
But if it were to 'miss' the processing of one display update, or a button press?  (I assume you get the point?)

Soooooo...  Just for fun, let's _ASSUME_ you've chosen to take the display updating out of the ISR and put it into the mainline code instead.
(I'm not saying you SHOULD or SHOULD NOT do this.  It's just to explain another issue you might face...)
Your ISR is still processing its regular 4 ms tick and incrementing a master 'tick count'.
Since there are 86400 seconds in a day (24*60*60), this implies your tick counter will range from 0 [midnight] to 21,599,999 ticks [being 4 ms before the NEXT midnight].
Whether it's a good idea to keep your tick count at 4 ms resolution or not is an entirely different question.  For now, let's just assume you've chosen to do so.
Obviously, your 16-bit timer cannot hold a value that large, so you have to implement a memory-based 32-bit master tick counter.
However, let's say that your microcontroller is an 8-bit CPU and it requires several instructions to increment this master tick counter.  (Somewhere around 6-7 instructions would be normal.)
The problem is that the 'display update' code that you've 'pushed' out of the ISR will also require several instructions to 'read' this master tick counter.
If your display update code is running asynchronously (i.e. it's not directly 'tied' to the 4 ms timer ticks), then it's quite possible that significant processing errors can ensue.
For example, assume the master tick counter at the beginning of a display update held a value of 0x00FFFFFF (approx 18:38:28).
The display update reads in the first byte (0x00), and then a timer tick occurs which updates the master tick counter to 0x01000000.
The display update will then process the remaining three bytes as 0x00, thereby updating the display to midnight!
There are two commonly used methods to overcome that 'race condition':
1: During the processing of the 'display update', interrupts are disabled for a VERY short time while it takes a COPY of the master tick counter.
2: The display update routine is SCHEDULED to run only when it KNOWS that the master tick counter will not be incremented.  (This is also known as 'synchronising the bottom half'.)
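Method 1 (the brief critical section) might be sketched like this; irq_disable/irq_enable are placeholders for the real intrinsics (cli()/sei() on AVR, for instance), implemented here as a nesting counter only so the sketch stays testable off-target:

```c
#include <stdint.h>

/* Sketch: copy the multi-byte master tick counter with interrupts briefly
 * disabled so a tick can't tear the read mid-copy.  irq_disable/irq_enable
 * stand in for the real interrupt-control intrinsics. */

static volatile uint32_t master_ticks = 0;
static int irq_depth = 0;

static void irq_disable(void) { irq_depth++; }
static void irq_enable(void)  { irq_depth--; }

uint32_t ticks_snapshot(void)
{
    irq_disable();                  /* VERY short critical section */
    uint32_t copy = master_ticks;   /* all 4 bytes read consistently */
    irq_enable();
    return copy;
}
```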

Lastly, I read a post above that seemed to suggest you should push EVERYTHING into the ISR leaving you with a mainline such as:
while (1) sleep();
Perhaps I'm a bit limited with only 30 years experience including embedded development, but...
If ANY of my employees EVER proposed using such code on ANY of our complex systems, they would rapidly find themselves looking for a new job.
We always aim to do the BAREST minimum within an ISR.  The PRIMARY goal of our ISRs is to endeavour to re-enable interrupts ASAP.
IMO, leaving a system with interrupts disabled for excessive periods is a great way to write 'Micro$oft code'.
Would you accept your neighbourhood fireman not answering your emergency call because he's 'otherwise occupied' eating dinner?
Just because your 'clock' application can probably get away with being lazy doesn't give you the excuse to teach yourself bad habits that you will inevitably have to 'un-learn' in the future.

Hmmmm, perhaps this post is already too long...  I'm confident my internal 'virtual ISR' has missed processing HEAPS of ticks! <Grins>
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2094
Re: How much code in an ISR is too much.
« Reply #20 on: May 19, 2013, 05:34:40 pm »
Quote from: blewisjr
I did do some testing on my ISR and it actually executes very very fast and until I saw it on the scope I really did not realize how much room there is to sleep.  My ISR is executing so fast there is lots of dead time.  Was really awesome to see that.

So you took my advice to flip a pin. If you have a fancy scope with measurements and statistics, you can accurately determine min, max, and average run times. If you have persistence and the interrupt isn't too complicated, you will see separate lines for the different execution paths taken in the interrupt.  Remember that you are not seeing the entry and exit overhead of the interrupt.

If your code has a main loop, you can toggle a pin in it to see min, max and average loop times. If your code has nothing useful to do in the foreground, you can toggle a pin as fast as possible and see the interrupts making holes in the generated square wave.

Quote from: ecat
In light of the preceding discussion it's worth noting that there are 100us between interrupts,

Also worth noting that there is little point running an interrupt at 10kHz when you are not doing anything useful in it faster than 1kHz. Also that on an 8-bit processor the 16-bit 'clocks' may be read as torn, inconsistent values depending on the order in which the compiler reads the 2 bytes. Also that they are only 'clocks' if the foreground code notices and services the complete status within 100us, otherwise they will lose time.
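To illustrate the torn-read problem: on an 8-bit core a 16-bit variable is loaded one byte at a time, so a tick landing between the two loads can yield a value that never existed. One common fix (a sketch with hypothetical names, not code from this thread; the alternative is to briefly disable interrupts around the read) is to re-read until two consecutive reads agree:

```c
#include <stdint.h>

/* Hypothetical 16-bit seconds counter updated by the timer ISR.
   On an 8-bit MCU this is read as two separate byte loads, so a
   tick between the loads can produce a torn value. */
static volatile uint16_t seconds_count = 0;

/* Read repeatedly until two consecutive reads match; if a tick
   landed between the byte loads, the mismatch forces a retry. */
uint16_t read_seconds(void)
{
    uint16_t a, b;
    do {
        a = seconds_count;  /* two byte loads on an 8-bit core */
        b = seconds_count;  /* re-read to detect an interrupt in between */
    } while (a != b);
    return a;
}
```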
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2094
Re: How much code in an ISR is too much.
« Reply #21 on: May 19, 2013, 06:25:23 pm »
Just because your 'clock' application can probably get away with being lazy doesn't give you the excuse to teach yourself bad habits that you will inevitably have to 'un-learn' in the future.

The bad habit (and being lazy) would be to not bother understanding why something is good or bad, and to just follow whatever others tell you is usually good.
 

Offline blewisjr

  • Frequent Contributor
  • **
  • Posts: 301
Re: How much code in an ISR is too much.
« Reply #22 on: May 19, 2013, 06:29:45 pm »
This is really one of those "How long is a piece of string" type questions and there is no 100% correct or 100% incorrect answer!
It all boils down to YOUR application.
From what I can gather, it sounds like your application is an alarm clock application (You know the type.  The basic "wake me up tomorrow morning at 6AM")
A few years ago my nephew was getting rather interested in electronics and I.T., so I gave him a PIC and told him to go build a clock.
(I was expecting a clone of the traditional alarm clock - I was rather surprised he'd elected to include a GPS time reference, but that's another story)
Anyway, back to the point.
With ANY asynchronous interrupt (event) source, you want to make sure your system spends as little time as possible running the ISR WITH INTERRUPTS STILL DISABLED.
The (obvious?) reason is that you don't want to RISK missing the next interrupt.
In your 'alarm clock' example, I wouldn't think it likely that you'd ever 'run out of time' within an ISR, but once you start working on more complex projects, with significantly more asynchronous interrupt sources, you'll value the lessons learned in writing compact ISRs.
In some of the posts above references were made to 'setting a flag within the ISR'.
This is the classic 'bottom half' interrupt processing.  The ISR itself does the barest minimum to 'log' that the interrupt has occurred and control is returned from the ISR rapidly such that interrupts are re-enabled.  The 'bottom half' handler deals with the (proportionally) more time consuming aspects of processing, but by that time, interrupts are re-enabled in readiness.
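The 'bottom half' pattern described above might look something like this for the OP's clock. This is a minimal sketch with hypothetical names, assuming the 4ms tick / 250-ticks-per-second scheme from the opening post:

```c
#include <stdint.h>
#include <stdbool.h>

static volatile bool second_elapsed = false;  /* set by ISR, cleared by main */
static volatile uint8_t tick = 0;

/* Timer ISR body: runs every 4ms and does the bare minimum --
   count ticks and 'log' that a second has passed, then return. */
void timer_isr(void)
{
    if (++tick >= 250) {        /* 250 ticks * 4ms = 1 second */
        tick = 0;
        second_elapsed = true;  /* flag for the bottom half */
    }
}

/* Mainline 'bottom half': does the slower work with interrupts
   re-enabled the whole time. */
void main_loop_step(void)
{
    if (second_elapsed) {
        second_elapsed = false;
        /* advance_clock(); check_alarm(); -- the slow stuff goes here */
    }
}
```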

You've mentioned that you're 'updating the display' with every timer tick and that these ticks occur every 4mS (i.e. 250Hz).
WHY?  Unless your display has resolution down to 0.01 seconds, the vast majority of these 'display updates' won't actually CHANGE anything on the display!
(The atypical 'alarm clock' only has 4 digits, and traditionally 'flashes' the colon separator at 2Hz which implies an effective update rate 125 times less than what you seem to be doing)

The next aspect about asynchronous events that's valuable to recognise is that they OFTEN need to be prioritised.
If your code were ever to 'miss' processing a 4mS tick, it would create timing errors that accumulate and become increasingly significant.
But if it were to 'miss' processing of one display update, or a button press?  (I assume you get the point?)

Soooooo...  Just for fun, let's _ASSUME_ you've chosen to take the display updating out of the ISR and put it into the mainline code instead.
(I'm not saying you SHOULD or SHOULD NOT do this.  It's just to explain another issue you might face...)
Your ISR is still processing its regular 4mS tick and incrementing a master 'tick count'.
Since there are 86400 seconds in a day (24*60*60), this implies your tick counter will range from 0 [midnight] to 21,599,999 ticks [being 4mS before the NEXT midnight].
Whether it's a good idea to keep your tick count at 4mS resolution or not is an entirely different question.  For now, let's just assume you've chosen to do so.
Obviously, your 16-bit timer cannot hold a value that large, so you have to implement a memory-based 32-bit master-tick-counter.
However, let's say that your microcontroller is an 8 bit CPU, and it requires several instructions to increment this master-tick-counter.  (Somewhere around 6-7 instructions would be normal)
The problem is that the 'display-update' code that you've 'pushed' out of the ISR will also require several instructions to 'read' this master-tick-counter.
If your display update code is running asynchronously (i.e. it's not directly 'tied' to the 4mS timer ticks), then it's quite possible that significant processing errors can ensue.
For example, assume the master tick counter at the beginning of a display update held a value of 0x00FFFFFF (18:38:28 approx).
The display update reads in the first byte (0x00) and then a timer tick occurs, which updates the master tick counter to 0x01000000.
The display update will then process the remaining three bytes as being 0x00 (thereby updating the display to midnight!!!)
There are two commonly used methods to overcome that 'race condition':
1: During the processing of the 'display update' it disables interrupts for a VERY short time while it takes a COPY of the master tick counter
2: The display update routine is SCHEDULED to only occur when it KNOWS that the master tick counter will not be incremented.  (This is also known as 'synchronising the bottom half')
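Method 1 might be sketched like this (illustration only, with hypothetical names; the interrupt enable/disable calls are stubbed out as no-ops so it runs on a host -- on an AVR they would be `cli()`/`sei()`):

```c
#include <stdint.h>

static volatile uint32_t master_ticks = 0;  /* incremented in the 4ms ISR */

/* Stand-ins for the real interrupt disable/enable intrinsics
   (cli()/sei() on AVR); no-ops here so the sketch runs anywhere. */
static void disable_irq(void) {}
static void enable_irq(void)  {}

/* Take a snapshot of the counter with interrupts briefly disabled,
   so the ISR cannot increment it halfway through the multi-byte read.
   The critical section is only a handful of instructions long. */
uint32_t snapshot_ticks(void)
{
    uint32_t copy;
    disable_irq();
    copy = master_ticks;
    enable_irq();
    return copy;
}
```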

Lastly, I read a post above that seemed to suggest you should push EVERYTHING into the ISR leaving you with a mainline such as:
while (1) sleep();
Perhaps I'm a bit limited with only 30 years experience including embedded development, but...
If ANY of my employees EVER proposed using such code on ANY of our complex systems, they would rapidly find themselves looking for a new job.
We always aim to do the BAREST minimum within an ISR.  The PRIMARY goal of our ISRs is to endeavour to re-enable interrupts ASAP.
IMO, leaving a system with interrupts disabled for excessive periods is a great way to write 'Micro$oft code'.
Would you accept your neighbourhood fireman not answering your emergency call because he's 'otherwise occupied' eating dinner?
Just because your 'clock' application can probably get away with being lazy doesn't give you the excuse to teach yourself bad habits that you will inevitably have to 'un-learn' in the future.

Hmmmm, perhaps this post is already too long...  I'm confident my internal 'virtual ISR' has missed processing HEAPS of ticks! <Grins>

I would like to touch on my display update code.  The display is a multiplexed 4 digit 7 segment display.  From what I witnessed, if I do not update it at a certain rate there is a flicker in the display that my eyes can pick up, which causes me in particular to become irritated and get headaches.  When updating at a frequency of 2 Hz you can literally see the display move digit to digit.  If you update at 50 Hz the display looks like a strobe light.  In order for my eyes not to see any flicker I needed to update the display at 250 Hz.  There may be a way around this, for instance using a discrete part like a shift register, but with the display being driven directly by the MCU I had no choice but to update it at 250 Hz.  It may also be less noticeable if I brought the brightness down, but as it stands it is running at full brightness.
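The multiplexing scheme described might look like the sketch below (hypothetical names, not the OP's actual code): each 4ms tick drives one digit, so stepping digits at 250 Hz means the full 4-digit scan repeats every 16ms, i.e. each digit is refreshed at roughly 62 Hz.

```c
#include <stdint.h>

static uint8_t digits[4] = {0};     /* segment patterns, one per digit */
static uint8_t current_digit = 0;   /* which digit the next tick lights */

/* Called from the 4ms timer ISR: light one digit per tick, then
   advance to the next, wrapping around after digit 3. On real
   hardware the returned pattern would be written to the segment
   port along with the digit-select line. */
uint8_t display_isr_step(void)
{
    uint8_t segments = digits[current_digit];
    current_digit = (current_digit + 1) & 3;  /* wrap 0..3 */
    return segments;
}
```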
 

Online FrankBuss

  • Supporter
  • ****
  • Posts: 2321
  • Country: de
    • Frank Buss
Re: How much code in an ISR is too much.
« Reply #23 on: May 19, 2013, 08:14:17 pm »
Once I implemented a nested interrupt: the interrupt function was called at 1 kHz. It first reset the interrupt flag, so that the interrupt could trigger again. The interrupt function itself had two parts: a fast part, which updated and averaged sensor values, timestamps etc. at the full 1 kHz, and a slower part executed every 64th time, giving roughly 16 Hz (doing some floating point PID calculation, key debouncing, menu logic etc.). This slower part could be interrupted by the fast part. So it's a kind of prioritised task manager, but with deterministic update rates for the fast and the slow parts.

Maybe this could be useful for an alarm clock, too. Do the display update with a high frequency, and the rest with a low-frequency function which can be interrupted by the high-frequency interrupt. It need not be one interrupt if your microcontroller has more than one timer interrupt with different priorities, but it can be implemented in one interrupt if necessary.
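The divided interrupt described above might be sketched as follows. The run counters are added purely to make the fast/slow split visible; the actual nesting (re-enabling the interrupt so the fast part can preempt the slow part) is omitted from this host-runnable illustration:

```c
#include <stdint.h>

static uint8_t divider = 0;
static uint32_t fast_runs = 0;   /* counts 1 kHz fast-part executions */
static uint32_t slow_runs = 0;   /* counts every-64th slow-part executions */

/* One ISR, two rates: the fast part runs on every tick (sensor
   sampling, timestamps), the slow part only on every 64th tick
   (PID, debouncing, menu logic), i.e. 1 kHz / 64 ~= 16 Hz. */
void divided_isr(void)
{
    fast_runs++;                 /* fast part: always */
    if (++divider >= 64) {
        divider = 0;
        slow_runs++;             /* slow part: every 64th tick */
    }
}
```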
So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer
 

Offline miceuz

  • Frequent Contributor
  • **
  • Posts: 374
  • Country: lt
    • chirp - a soil moisture meter / plant watering alarm
Re: How much code in an ISR is too much.
« Reply #24 on: May 19, 2013, 09:49:18 pm »
I gather there is no real need to do user interface stuff in interrupts, unless you really want exact timing. The user is slow and will not notice a latency of 100ms. You can fly to the moon in 100ms on a 1 MHz CPU. I was fairly successful with one project that involves motor control, pulse counting and a bunch of buttons -- all the time-critical stuff (pulse counting) is done in the ISR, button servicing is done in the main loop, with no noticeable delay when pushing buttons.

Maybe this would not suit for something like a game, but for simple UI it's pretty ok.

You can check out my debouncing code: https://github.com/Miceuz/motoplugas/tree/master/src - it even has a nice small library for debouncing.
debounce() reads the button state and ensures it's stable;
serviceButton() calls the particular button's servicing code on debounced events.

I wanted to control the frequency at which button states are updated, so I used another timer for that. But all the button logic is implemented in the main loop. The idea is: try not to miss a button press, in a reliable fashion, but act on it when we have spare time.
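A generic counter-based debounce of the same flavour (NOT the linked library -- just a sketch of the idea, with hypothetical names and timings): a press is reported only after the raw input has read 'pressed' for several consecutive timer samples.

```c
#include <stdint.h>
#include <stdbool.h>

#define DEBOUNCE_TICKS 5   /* e.g. 5 samples * 2ms timer = 10ms of stability */

typedef struct {
    uint8_t count;   /* consecutive 'pressed' samples seen so far */
    bool pressed;    /* debounced state */
} button_t;

/* Call at a fixed rate from a timer tick (or the main loop paced by
   one); raw = current pin reading, true meaning pressed.
   Returns true exactly once, on the debounced press edge. */
bool debounce_step(button_t *b, bool raw)
{
    if (!raw) {                        /* released or bouncing low: reset */
        b->count = 0;
        b->pressed = false;
        return false;
    }
    if (b->pressed)
        return false;                  /* press already reported */
    if (++b->count >= DEBOUNCE_TICKS) {
        b->pressed = true;
        return true;                   /* clean press edge */
    }
    return false;                      /* still settling */
}
```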

