Author Topic: How are interrupts handlers implemented?  (Read 7719 times)


Online SimonTopic starter

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
How are interrupts handlers implemented?
« on: May 07, 2022, 09:22:31 pm »
I'm curious about what code is written in the back end (and beyond looking at how header files go down a wormhole of bits being defined in yet another file) for interrupts. I know that when an interrupt occurs the processor saves the current state and runs off to a specified memory location. To me, the user, this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5907
  • Country: es
Re: How are interrupts handlers implemented?
« Reply #1 on: May 07, 2022, 10:15:26 pm »
It depends on the architecture.
PICs have fixed ISR vectors: a UART interrupt kicks in and the program will jump to a fixed address, e.g. 0x20; you cannot change that.
At 0x20 you have a branch/goto to your real "uart_ISR_code", the one you declare in the compiler.

I think it was also fixed in older ARM cores; in newer ones it can be configured.
This adds complexity, but reduces interrupt delay, as the CPU jumps directly to the execution code.
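The fixed-vector-plus-branch scheme can be modelled on a desktop compiler as a rough sketch (a hedged illustration only; `uart_isr` and `vector_slot` are made-up names, not a real PIC API):

```c
/* Host-side sketch of a fixed interrupt vector: the "fixed address 0x20"
 * is modelled as a single function-pointer slot that always branches to
 * the real handler. Illustrative names, not a real PIC API. */

volatile int uart_byte_handled = 0;

static void uart_isr(void) {          /* the "real uart_ISR_code" */
    uart_byte_handled = 1;
}

/* The fixed vector location: the hardware always jumps here; the only
 * thing stored at the fixed address is a branch to the real handler. */
static void (*const vector_slot)(void) = uart_isr;

void simulate_uart_interrupt(void) {
    vector_slot();                    /* hardware "goto 0x20", then branch */
}
```

On a real PIC the slot is a goto instruction in program memory, but the indirection idea is the same.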
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Online Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3359
  • Country: nl
Re: How are interrupts handlers implemented?
« Reply #2 on: May 07, 2022, 10:45:44 pm »
There are also more architecture dependent implementations.

For some architectures, you have to add a previously #define'd "ISR" attribute (or similar) to the ISR function itself to trigger some special behavior in the compiler.
Sometimes a reti (RETurn from Interrupt) instruction has to be added at the end of the ISR, while other architectures use a normal RET at the end of the ISR.

In C it has always been "undefined / processor specific".
But C++ is getting more standardized in the last 20 years or so and lots of things have been deprecated, changed or improved.  There may be standardized recommendations for compiler source code now.
 

Offline Benta

  • Super Contributor
  • ***
  • Posts: 5872
  • Country: de
Re: How are interrupts handlers implemented?
« Reply #3 on: May 07, 2022, 11:38:31 pm »
You're all talking "C"-level handling here, I think.
The manufacturer hasn't written anything into the CPU; it's pure hardware handling and subsequent firmware/software handling.

For all CPU architectures that I know, an interrupt causes the CPU to push its current status onto the stack. It will then either indirectly (interrupt vector) or directly jump to a specific address and execute from there.
That's the interrupt handler or ISR subprogram. Very often written in assembler.
The ISR has to find out where the interrupt came from, why and what to do with it, and present its result to the main program somehow for further processing.
Returning from the interrupt, the CPU status is pulled from the stack, and normal program operation resumes.


 

Online jpanhalt

  • Super Contributor
  • ***
  • Posts: 3478
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #4 on: May 08, 2022, 01:29:26 am »
To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
I agree, an interrupt is very much like a call.  The main difference is that a call is initiated by code and an interrupt is initiated by an event.* (I am only familiar with 8-bit PIC's, but the principles are the same in other PIC's and probably most chips.)  As pointed out, it is hardware.

So, just write your service routine as you would any other subroutine, except, as noted, you may need to add code to save context if that is important.  In the 16F1xxx and later chips, there is automatic context saving.  If you enable more than one event, you may also need to identify which event caused the interrupt.

With 16F PICs there are three return instructions: return, retlw (return with a literal in WREG), and retfie (return from interrupt and enable interrupts, i.e., set the GIE bit in INTCON).  You can actually use any of those three instructions to return from an interrupt or a call.  With a call, one usually uses "return" to go back to its origin.  "retlw" is also used with calls, particularly when one is reading a table of values.  With an interrupt, retfie is more convenient.  Of course, if you just use "return" or "retlw" from an interrupt, you will not automatically restore context, nor will GIE be set.

*You can initiate an interrupt with code, but that is probably done rarely.
« Last Edit: May 08, 2022, 01:31:16 am by jpanhalt »
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11888
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #5 on: May 08, 2022, 01:50:35 am »
To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
I agree, an interrupt is very much like a call.  The main difference is that a call is initiated by code and an interrupt is initiated by an event.* (I am only familiar with 8-bit PIC's, but the principles are the same in other PIC's and probably most chips.)  As pointed out, it is hardware.

At the most primitive level, yes it is hardware, but not always.

For example, the OpenVMS operating system relied heavily on software interrupts, called AST's (Asynchronous System Traps). In common terms you might think of them as callbacks, but callbacks triggered by events. For example, you might set a timer, or wait for some I/O to complete, and register an AST to be called when the timer expired, or the I/O completed.

In a low level microcontroller context this would typically be a hardware interrupt, but in a high level operating system like OpenVMS you don't want ordinary users to mess with the hardware. The operating system is supposed to protect against that. So OpenVMS provides a carefully controlled way of achieving the same thing, safeguarded by the operating system.

But at the end of the day, an interrupt service routine is simply a subroutine called by the system in response to some event, rather than when decided by your program. Such an interrupt will cause your program to stop processing while the interrupt is handled. The usual work of an interrupt handler is to set some flags, so that your program can later check the flags and decide what to do about them at its own convenience.
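That flag pattern can be sketched in a few lines of C (a hedged illustration; `tick_flag` and the function names are made up):

```c
#include <stdbool.h>

/* The common "ISR sets a flag, main loop polls it" pattern.
 * Illustrative names; in a real program timer_isr would be registered
 * as the actual interrupt handler. */
static volatile bool tick_flag = false;

void timer_isr(void) {        /* runs in interrupt context */
    tick_flag = true;         /* do the minimum; defer the real work */
}

bool poll_tick(void) {        /* called from the main loop at its leisure */
    if (tick_flag) {
        tick_flag = false;
        return true;          /* caller now does the deferred work */
    }
    return false;
}
```

The volatile qualifier matters: without it the compiler may cache the flag in a register and the main loop would never see the ISR's write.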

http://www.rlgsc.com/decus/usf96/ad039.pdf
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14474
  • Country: fr
Re: How are interrupts handlers implemented?
« Reply #6 on: May 08, 2022, 01:54:17 am »
And the context is not necessarily saved automatically by the CPU. It depends on the target, but for many, it just doesn't happen. Compilers (or yourself if using assembly) are responsible for this. And if using GCC, there are some attributes that enable you to control how context is saved (or not) and to what extent. (Unless by "context" you mean the return address - address of the instruction being interrupted, in which case, yes, that's usually the only thing that is pretty much universally done automatically, otherwise execution would be kinda screwed.)
« Last Edit: May 08, 2022, 01:56:46 am by SiliconWizard »
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11888
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #7 on: May 08, 2022, 01:57:26 am »
A footnote related to hardware access.

As a user of an OpenVMS system you could not get close to the hardware at all without elevated privileges. And if you did have elevated privileges and got close to the hardware, for example if you were trying to write and debug a device driver, the probability of you crashing the system became quite high.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #8 on: May 08, 2022, 02:28:47 am »
MIPS and RISC-V have a dedicated control register to store the return address when handling an interrupt.  SPARC shifts the register window to free up general purpose registers to store the PC.  This means they need no memory access to start an ISR, but the ISR needs to handle saving state as necessary.

ARM cortex M cores not only store the PC but all of the caller saved registers to the stack.  This allows interrupt handlers to be written in C since they follow the standard ABI.
 

Offline ledtester

  • Super Contributor
  • ***
  • Posts: 3036
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #9 on: May 08, 2022, 04:58:03 am »
... But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?

The generation of the interrupt vector table is done by the compiler.

For instance, consider the ATmega328P:

https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-7810-Automotive-Microcontrollers-ATmega328P_Datasheet.pdf

Pages 49-51 show the typical assembly code that is placed at the beginning of flash memory.

"avr-gcc" is a derivative of "gcc" which has been modified to support AVR interrupt attributes; the "ISR" macro expands to a vector function carrying those attributes, and the toolchain generates something like the code you see on page 50.

So, for example, when you write in your program:

Code: [Select]
ISR(TIMER1_OVF_vect) {
    ...some code...
}

avr-gcc will place a JMP instruction at address 0x001A which will jump to ...some code....
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #10 on: May 08, 2022, 06:41:58 am »
Depending on architecture, the C compiler (or you, if you write assembly) may need to do some tricks, and as a result, the ISR handler is not a "normal" function. These tricks are usually small and simple, like push/pop of some registers, or using a different instruction to return from the function than you usually do. Nevertheless, in such architectures, you need to tell the C compiler "this function is an ISR".

ARM Cortex CPUs have been designed so that interrupt handlers can be completely normal functions. The CPU internally (in hardware, not software) saves the state of whatever was running, by pushing registers in stack and popping them back after the function returns. Also because the vector table is just a list of function addresses (and usually relocatable in RAM), this can't get any easier for the programmer.
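A hedged host-side model of that "vector table is just a list of function addresses" idea (made-up names throughout; real Cortex-M hardware reads the address from the table and jumps, none of this is vendor API):

```c
/* Model of a Cortex-M style vector table: an array of handler addresses,
 * indexed by exception number. No jump instructions are stored in the
 * table, only addresses. Illustrative names. */
typedef void (*handler_t)(void);

volatile int systick_count = 0;
static void systick_handler(void) { systick_count++; }
static void default_handler(void) { /* spin or log in a real system */ }

#define NUM_VECTORS 16
static handler_t vector_table[NUM_VECTORS] = {
    [15] = systick_handler,   /* SysTick is exception 15 on Cortex-M */
};

void dispatch(int exception_number) {   /* what the NVIC does in hardware */
    handler_t h = vector_table[exception_number];
    if (h) h(); else default_handler(); /* unset slots fall to the default */
}
```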
 

Online SimonTopic starter

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: How are interrupts handlers implemented?
« Reply #11 on: May 08, 2022, 07:09:46 am »
Right, so what I am getting at is how these interrupt handlers are defined. The manufacturer/whoever provided the IDE/compiler/toolchain/"whatever you don't write yourself" gives you a function name to call. But what have they put behind that? It sounds like each architecture is different, but let's take ARM, which sounds like the simplest.

If you had a chip and a compiler, what would you write to tell the compiler where to go when an interrupt triggers? As far as I am aware the hardware will go to a memory location; how does the compiler know to put that function name at that memory location?
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #12 on: May 08, 2022, 07:17:21 am »
I'm curious about what code is written in the back end (and beyond looking at how headers files go down a wormhole of bit's being defined in yet another file) for interrupts. So I know that when an interrupt occurs the processor saves the current state and runs off to a specified memory location. To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?

I'll describe what RISC-V does, in a reasonably minimal but standard-compliant implementation that you might find in a microcontroller or a soft core in an FPGA. There are more sophisticated options, but what I describe here should work on any chip.

First you have to know what a CSR (Control and Status Register) is. It's a group of 32 bits (on RV32) that directly controls the operation of some part of the CPU core, or that shows the status of some part of the CPU core. Each individual bit in a CSR might be read-only, and be connected directly to some part of the CPU's circuits. Or it might be read-only and always 0 or always 1. Or it might be a flip-flop that can be set by software, and then it directly feeds into controlling something in the logic. Or it might be a flip-flop that both software and hardware can change.

CSRs have numbers, a bit like memory addresses. On RISC-V the numbers go from 0 to 4095. The flip-flops for the CSRs might be scattered all around the chip, where they are convenient to monitor/control particular hardware. There is some kind of bus or buses to access them using special CSR read/write/update instructions. CSRs are used infrequently, so unlike RAM there is no guarantee that access to them is fast -- it could be over something like I2C or SPI. But on most practical cores it is not super slow.

Every RISC-V core should provide CSRs 0xC00 (cycle), 0xC01 (time), and 0xC02 (instret). These are all 64 bit counters and on 32 bit cores you can read the upper halves at 0xC80, 0xC81, 0xC82.  CSRs from 0xC03 to 0xC1F (and the corresponding upper halves) are for performance monitoring counters.

The most important Machine mode CSRs are:

Code: [Select]
0x300 mstatus  The keys to the machine!
0x301 misa     The instruction set and extensions supported
0x304 mie      Interrupt Enable
0x305 mtvec    trap handler base address (vector)
0x340 mscratch store anything you like here
0x341 mepc     the PC executing when an exception occurred
0x342 mcause   what happened e.g. interrupt, illegal instruction, memory protection
0x343 mtval    the bad address or opcode
0x344 mip      bits indicating pending interrupts, if any

The mstatus CSR has bitfields for (among others)...

Code: [Select]
MIE  global interrupt enable. Both this and bits in the mie CSR need to be enabled
MPIE the MIE field before the current trap. (illegal instruction etc can happen with MIE=0)
MPP  the privilege level before the current trap (User, Supervisor, Hypervisor, Machine)

Note: there is deliberately no field for the current privilege level. It is stored elsewhere. The machine knows but it won't tell you.

The mtvec CSR controls where execution will go to on an interrupt or exception. The simplest use is to simply store the address of your interrupt handler, which must be a multiple of 4 bytes. All interrupts and exceptions will jump to this address. You can also set the LSB to 1 in which case program exceptions jump to the address but interrupts jump to the address plus 4x the value in mcause.
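The base-plus-4×cause rule can be written out as a small hedged sketch (`trap_target` is an illustrative name, not a real CSR API; the low two bits of mtvec select the mode, 0 = direct, 1 = vectored):

```c
#include <stdint.h>

/* Sketch of the mtvec target-address calculation described above.
 * mode 0: everything jumps to base. mode 1: exceptions jump to base,
 * interrupts jump to base + 4 * cause. Illustrative, simplified. */
uint32_t trap_target(uint32_t mtvec, uint32_t mcause, int is_interrupt) {
    uint32_t base = mtvec & ~3u;             /* base is 4-byte aligned */
    uint32_t mode = mtvec & 3u;
    if (mode == 1 && is_interrupt) {
        uint32_t cause = mcause & 0x7FFFFFFFu; /* strip the interrupt bit */
        return base + 4u * cause;            /* vectored entry */
    }
    return base;                             /* direct mode, and exceptions */
}
```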

On a very low end core mtvec might be read-only, in which case you need to put your handler where it says instead of telling it where you put your handler. You can tell by trying to write a setting into it, reading it back, and checking whether it's what you tried to write. This is a general principle on many RISC-V CSRs, called WARL (Write Any, Read Legal).  Worst case, just write a jump instruction to your handler at the fixed trap vector address it tells you.


When an interrupt is signalled the following happens:

- if mstatus.MIE is set and the mie bit for that interrupt is set and the interrupt level is >= the current interrupt level, the interrupt will be processed. Otherwise just set the pending bit for that interrupt.

- mcause and mtval are set. Interrupts set the hi bit in mcause, exceptions don't.

- mstatus.MIE is copied to mstatus.MPIE. mstatus.MIE is set to 0.

- the current privilege level is copied to mstatus.MPP and the current privilege level is set to M

- the PC is copied to the mepc CSR. mtvec (possibly plus 4x mcause) is copied to the PC

... and execution continues ...

That's it. That's all.

In particular, all user registers remain untouched. Nothing has happened to RAM -- nothing is pushed, nothing is written anywhere.
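The entry/exit steps above can be sketched as a host-side simulation (heavily simplified: the mstatus fields are plain struct members rather than real bit positions, and all names are illustrative):

```c
#include <stdint.h>

/* Model of the CSR shuffling on RISC-V interrupt entry and MRET.
 * Simplified for illustration; not real bit layouts. */
struct hart {
    int mie;          /* mstatus.MIE: global interrupt enable */
    int mpie;         /* mstatus.MPIE: MIE before the trap */
    int mpp;          /* mstatus.MPP: privilege before the trap */
    int priv;         /* current privilege: 0 = U, 3 = M */
    uint32_t pc, mepc, mcause, mtvec;
};

void take_interrupt(struct hart *h, uint32_t cause) {
    h->mcause = cause | 0x80000000u; /* interrupts set the high bit */
    h->mpie   = h->mie;              /* MIE copied to MPIE */
    h->mie    = 0;                   /* further interrupts masked */
    h->mpp    = h->priv;             /* remember previous privilege */
    h->priv   = 3;                   /* enter M mode */
    h->mepc   = h->pc;               /* save interrupted PC */
    h->pc     = h->mtvec & ~3u;      /* jump to handler base (direct mode) */
}

void mret(struct hart *h) {          /* what MRET undoes */
    h->mie  = h->mpie;
    h->priv = h->mpp;
    h->pc   = h->mepc;
}
```

Note that, as the post says, no general-purpose register and no RAM location is touched in either direction.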

What happens next is up to you. Probably you want to free up some registers to work in. How many is up to you. Where you put them is up to you. If you never have nested interrupts then you could store some registers to absolute addresses -- but they better be in memory locations 0..2047 or in the top 2k of the address space because that's all you can access without getting a pointer into some register.

You can free up one register by writing it to the mscratch CSR. Then you can load a pointer into it and start storing other registers relative to that pointer.

You can keep a pointer to your register save area permanently in mscratch and just swap it with a register. You can use that register save area as a stack to support nested interrupts. Hey -- maybe the register you swap with mscratch is SP...

Or maybe you just trust that the running program always keeps SP valid, so you can just decrement it and save your registers there. The standard ABI says this should be ok. Compilers do make sure this is always true, and assembly language programmers should too. But careless or malicious code might not practice stack hygiene. Do you feel lucky?

Many embedded systems will just use a single stack and back themselves to get it right. But if you're paranoid you can keep another stack just for interrupt handling. And if you have U mode available you can prevent the normal code from messing with it.

How fast is this?

All the CSR shuffling happens in parallel. You should be up and running with a new PC value in 1 clock cycle. Then it's just a question of how long it takes to load the instruction from that address and start executing it. That should be the same as any unpredicted or mis-predicted branch/jump. Probably 2-3 clock cycles on a machine with SRAM and a short pipeline.

When you're done, restore any of the registers you touched and then the MRET instruction will take you back to the original program in 1 clock cycle (plus instruction fetch time)

This is real minimalist RISC stuff.

Current ARM Cortex-M CPUs do all kinds of fancy interrupt handling tricks. The CPU saves registers for you so you can jump right into C code. When you do a return from interrupt the CPU checks if other interrupts are pending and jumps right to them without pointlessly restoring and again saving registers. And a few other tricks. For example the CPU might start responding to a low priority interrupt and while the registers are being saved a higher priority interrupt comes in. The hardware can switch tracks and immediately go to the higher priority handler instead.

RISC-V hardware doesn't do any of that stuff. But because it does almost nothing at all, it is possible to write a standard interrupt entry handler that implements the same features in software -- and that runs just as quickly as the fancy ARM interrupt handling.

The interested can find details on various ways to do this here..

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#interrupt-handling-software
 

Online SimonTopic starter

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: How are interrupts handlers implemented?
« Reply #13 on: May 08, 2022, 08:09:35 am »
So if I understand correctly, taking the M0/M0+ as an example: it has 32 interrupt lines. The compiler is responsible for providing the code functionality to deal with the interrupt. The chip maker creates the function name, and generally a dummy function, defined for a certain interrupt line number; so the work is actually being done by the compiler that knows the architecture, and the chip maker just assigns a name to a line number.
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How are interrupts handlers implemented?
« Reply #14 on: May 08, 2022, 08:24:53 am »
When it involves fixed memory locations, in GCC it's defined by linker script, and specs files.

Which I don't know the exact syntax of (I couldn't write one for ya), but you'll see the sections and locations in there.

Example: for AVR, you can see where .vectors is in your program:
1. Get a listing of your program.  I have this in my build script so it runs automatically every time:
Code: [Select]
avr-objdump -h -S $(TARGET_OUTPUT_FILE) > $(TARGET_OUTPUT_DIR)$(TARGET_OUTPUT_BASENAME).lss
The $() are replacers for (automatic) project variables in Code::Blocks, output_file being e.g. /Release/output.elf, basename being output.elf, etc.  See __vectors at the start, just where they should be.

2. Where is __vectors defined?

objdump gives locations by whatever symbol(s) map to that location, maybe just the first (alphabetically??) if multiple map to the same location, not sure.  So, at least __vectors maps to 0x0.  Of course, it doesn't say what object (*.o) that came from.  (Can ld or g++ tell this, I wonder?)  Well, probably you don't have this symbol in your code, so it's elsewhere.

Linker scripts are in $(GCC_ROOT)/$(PLATFORM)/lib/ldscripts.  In my case, avr/lib/ldscripts.  I... don't know what the difference is between the *.x* files honestly, but they're fairly self-explanatory at least(?).

Important part:
Code: [Select]
  /* Internal text space or external memory.  */
  .text   :
  {
    *(.vectors)
    KEEP(*(.vectors))

So the section .text starts with a (sub)section .vectors, which presumably has the __vectors symbol inside it.  OK.

Runtimes and specs, are in $(GCC_ROOT)/lib/gcc/$(PLATFORM)/$(GCC_VERSION)/.  In my case, lib/gcc/avr/8.1.0.  (There's also some runtimes under avr/lib/[CPU core], not sure which ones GCC chooses or why..)

C rarely runs completely bare metal, in the sense that everything in the program is in your source.  A run-time library supports it.  This is not the same as libc, which has all your memcpy, strlen and such.  If you need an #include, that's libc; but if you don't have hardware float support, or division, or any way to handle certain datatypes -- it's all gotta go in here!  Most of which, I think, gets included at the end of .text, as subroutines.  Possibly a few things get inlined, not sure.  It also provides, as you can guess, the interrupt vectors, initialization, etc. Everything main() needs to run.

So, like I'm currently working on a AVR64DA64 project, so let's see what's in that.  It's an xmega2 type so go under lib/gcc/avr/8.1.0/avrxmega2 and run,

Code: [Select]
>avr-nm crtavr64da64.o
00000000 T __bad_interrupt
00002000 W __DATA_REGION_LENGTH__
00806000 W __DATA_REGION_ORIGIN__
00000200 W __EEPROM_REGION_LENGTH__
00000010 W __FUSE_REGION_LENGTH__
00000000 W __heap_end
00000000 W __init
00007fff W __stack
00010000 W __TEXT_REGION_LENGTH__
00000000 W __TEXT_REGION_ORIGIN__
00000000 W __vector_1
00000000 W __vector_10
00000000 W __vector_11
00000000 W __vector_12
[...]
00000000 W __vector_7
00000000 W __vector_8
00000000 W __vector_9
00000000 W __vector_default
00000000 T __vectors
         U exit
         U main

(Or if we use avr-objdump -t crtavr64da64.o we get a bit more info.  Not really sure honestly what the point of all the different binutils is, they overlap a lot...)

Aha, there's __vectors.  And all the vectors in it (though not where).  (The 0's are, I think, default values, i.e., saying they're all pointing to 0 (__bad_interrupt, the reset vector), which is indeed the default.  When you ISR(TCA0_OVF_vect) {} it's declaring a new [duplicate] symbol over one of the __vector_s; the defaults are [W]eak so get overridden by your program code.)

All the arithmetic and other fill-ins, by the way, are in libgcc.a.  Disassemble if you dare (avr-objdump -d libgcc.a).  There are a lot of these, including one in the version root, and avr/lib... again, not sure how it chooses.

*.a files by the way are archives, so, packs of *.o's.  I'm not aware of any way to inspect one individual .o inside the .a using objdump(!??), but you can certainly unzip it (it's a standard Unix Archive) and inspect individual pieces.

And finally, specs files, I think are used to tweak things on top of whatever built-in defaults there are?

And then, for other platforms, similar stuff.  arm-no-eabi-gcc comes with the basic libraries for instruction sets, and you need device-specific CMSIS or (mfg-specific as well) HAL on top, plus ld script to get everything in the right places.  And so on.

Tim

(Actually, is that misrepresenting a bit, what all goes into libc?  All of it, actually?  There's quite a lot of symbols in all those libs, GCC has to know which ones to call to construct a given expression.  They must be tightly integrated.  Which, I suppose how else is it gonna be?  All the more reason these projects move slowly indeed (GCC, libc, etc.), besides the obvious complexity of a whole-ass compiler..)
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #15 on: May 08, 2022, 08:45:55 am »
Quote
For all CPU architectures that I know, an interrupt causes the CPU to push its current status onto the stack.
Note that the AMOUNT of "current status" that is saved by the hardware is pretty variable.  PIC8 and AVR pretty much save only the PC, just like a "call" instruction, and the code (compiler-generated code or user code) has to save all the registers that it touches, including the "processor status register" (carry flag/etc).  An ARM chip does that, plus perhaps switches stack pointer, saves the previous interrupt level, plus the "privilege level", plus all the registers that the normal C ABI would save.  This means that the ARM ISR code can be simpler, but also that latency is greater.

Quote
taking the M0/M0+ as an example. It has 32 interrupt lines. The compiler is what is responsible for providing the code functionality to deal with the interrupt.
Actually, it's sort-of the linker.  Included in an ARM build is usually a file startup_xxx.c (or .S) that contains the vector table, usually something like (this is for SAMD21):
Code: [Select]
__attribute__ ((section(".vectors")))
const DeviceVectors exception_table = {

        /* Configure Initial Stack Pointer, using linker-generated symbols */
        (void*) (&_estack),

        (void*) Reset_Handler,
        (void*) NMI_Handler,
         :
        (void*) PendSV_Handler,
        (void*) SysTick_Handler,

        /* Configurable interrupts */
        (void*) PM_Handler,             /*  0 Power Manager */
        (void*) SYSCTRL_Handler,        /*  1 System Control */
        (void*) WDT_Handler,            /*  2 Watchdog Timer */
        (void*) RTC_Handler,            /*  3 Real-Time Counter */
        (void*) EIC_Handler,            /*  4 External Interrupt Controller */
        (void*) NVMCTRL_Handler,        /*  5 Non-Volatile Memory Controller */
It will also contain "weak" default handlers:
Code: [Select]
/* Peripherals handlers */
void PM_Handler              ( void ) __attribute__ ((weak, alias("Dummy_Handler")));
void SYSCTRL_Handler         ( void ) __attribute__ ((weak, alias("Dummy_Handler")));
void WDT_Handler             ( void ) __attribute__ ((weak, alias("Dummy_Handler")));
void RTC_Handler             ( void ) __attribute__ ((weak, alias("Dummy_Handler")));
Between the two of these, you get behavior where a WDT interrupt will call the Dummy_Handler() function, UNLESS there is a function called WDT_Handler() defined elsewhere in the project's code.  Because of the ARM's extensive context saving I mentioned earlier, WDT_Handler() is just a normal C function - the hardware does all the needed bits "beyond" normal function handling.
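That weak-alias mechanism also works with plain GCC/Clang on ELF hosts, so it can be demonstrated outside an MCU toolchain. A hedged minimal demo (Dummy_Handler/WDT_Handler mirror the names in the startup file above; dummy_calls is made up):

```c
/* Demo of the weak-alias default-handler pattern described above.
 * WDT_Handler is a weak alias for Dummy_Handler; since no strong
 * WDT_Handler is defined anywhere else, calls land in Dummy_Handler.
 * Defining a strong WDT_Handler in another file would replace it.
 * Requires GCC/Clang on an ELF target. */
int dummy_calls = 0;

void Dummy_Handler(void) { dummy_calls++; }

void WDT_Handler(void) __attribute__((weak, alias("Dummy_Handler")));
void RTC_Handler(void) __attribute__((weak, alias("Dummy_Handler")));
```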
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #16 on: May 08, 2022, 09:09:23 am »
Quote
For all CPU architectures that I know, an interrupt causes the CPU to push its current status onto the stack.
Note that the AMOUNT of "current status" that is saved by the hardware is pretty variable.  PIC8 and AVR pretty much save only the PC, just like a "call" instruction, and the code (compiler-generated code or user code) has to save all the registers that it touches, including the "processor status register" (carry flag/etc).  An ARM chip does that, plus perhaps switches stack pointer, saves the previous interrupt level, plus the "privilege level", plus all the registers that the normal C ABI would save.  This means that the ARM ISR code can be simpler, but also that latency is greater.

As can be extracted from my long post above, on an interrupt RISC-V doesn't touch RAM or save any of the normal integer (or FP) registers. Just the PC, privilege level, and interrupt level are saved to internal CSRs, and execution jumps to the address stored in the mtvec CSR (plus 4x the interrupt number, if that mode is enabled).

Ultra-short latency -- as little as 2 or 3 cycles just like a mispredicted branch, but you have to clean up after yourself.
 

Offline emece67

  • Frequent Contributor
  • **
  • !
  • Posts: 614
  • Country: 00
Re: How are interrupts handlers implemented?
« Reply #17 on: May 08, 2022, 09:13:39 am »
.
« Last Edit: August 19, 2022, 05:24:27 pm by emece67 »
 

Online SimonTopic starter

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: How are interrupts handlers implemented?
« Reply #18 on: May 08, 2022, 09:17:13 am »
So basically it's the compiler plus other code, bundled up as a toolchain, that handles the interrupts, and this is of course independent of the IDE. What I suppose I am sort of thinking is: say I wanted to use a particular IDE that is not the manufacturer's one, what do I have to do to get up and running with a particular microcontroller? Manufacturer's header files for register definitions, toolchain (compiler + device-specific handling). That only leaves something to do the programming with?
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: How are interrupts handlers implemented?
« Reply #19 on: May 08, 2022, 09:41:21 am »
So basically it's the compiler plus other code, bundled up as a toolchain, that handles the interrupts, and this is of course independent of the IDE. What I suppose I am sort of thinking is: say I wanted to use a particular IDE that is not the manufacturer's one, what do I have to do to get up and running with a particular microcontroller? Manufacturer's header files for register definitions, toolchain (compiler + device-specific handling). That only leaves something to do the programming with?
Interrupt handling is usually architecture-specific rather than manufacturer specific, so a compiler for a given processor architecture will usually come with the necessary stuff to deal with interrupts.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #20 on: May 08, 2022, 11:50:23 am »
Right, so what I am getting at is how these interrupt handlers are defined. The manufacturer/whoever provided the IDE/compiler/toolchain/"whatever you don't write yourself" gives you a function name to call. But what have they put behind that?

It's done either by the compiler folks or by the MCU manufacturer. What they do, exactly, is this:
* Provide a linker script
* Provide startup code

Linker script tells what goes where:
* Place interrupt vectors starting at address x
* Place code starting at address y
* Place uninitialized variables at address z, and initialized variables somewhere else again.

Startup code (can be in C, but sometimes in asm, for tradition) defines the symbols for those interrupt vectors (the function names, so you can use them in your code), and it also contains the first code to execute, which takes care of initializing variables and then calling main().

You totally can write them all by yourself, this is what I always do for ARM projects (basically copy-paste from earlier projects, of course). Call me a control freak if you like, but I think this is also simplest; I see exactly what happens.

In some architectures, interrupt handler vectors are actually jump instructions, but in ARM, they are the actual function addresses. Like this:

Code: [Select]
#define VECTOR_TBL_LEN 166

// Vector table on page 730 on the Reference Manual RM0433
unsigned int * the_nvic_vector[VECTOR_TBL_LEN] __attribute__ ((section(".nvic_vector"))) =
{
/* 0x0000                    */ (unsigned int *) &_STACKTOP,
/* 0x0004 RESET              */ (unsigned int *) stm32init,
/* 0x0008 NMI                */ (unsigned int *) nmi_handler,
/* 0x000C HARDFAULT          */ (unsigned int *) hardfault_handler,
/* 0x0010 MemManage          */ (unsigned int *) memmanage_handler,
/* 0x0014 BusFault           */ (unsigned int *) invalid_handler,
/* 0x0018 UsageFault         */ (unsigned int *) invalid_handler,
/* 0x001C                    */ (unsigned int *) invalid_handler,
/* 0x0020                    */ (unsigned int *) invalid_handler,
/* 0x0024                    */ (unsigned int *) invalid_handler,
/* 0x0028                    */ (unsigned int *) invalid_handler,
/* 0x002C SVcall             */ (unsigned int *) invalid_handler,
/* 0x0030 DebugMonitor       */ (unsigned int *) invalid_handler,
/* 0x0034                    */ (unsigned int *) invalid_handler,
/* 0x0038 PendSV             */ (unsigned int *) invalid_handler,
/* 0x003C SysTick            */ (unsigned int *) invalid_handler,
/* 0x0040 WWDG1              */ (unsigned int *) invalid_handler,
/* 0x0044 PVD (volt detector)*/ (unsigned int *) shutdown_handler,
. . .

The "__attribute__ ((section(".nvic_vector")))" is important because this allows the use of linker script to tell where this table exactly needs to go:

Code: [Select]
MEMORY
{
  ram_axi    (rwx)      : ORIGIN = 0x24000000, LENGTH = 512K
  ram_sram12 (rwx)      : ORIGIN = 0x30000000, LENGTH = 256K
  ram_dtcm   (rwx)      : ORIGIN = 0x20000000, LENGTH = 112K /* Leave 16k for stack*/
  ram_itcm   (rwx)      : ORIGIN = 0x00000000, LENGTH = 63K
  ram_vectors(rwx)      : ORIGIN = 0x0000FC00, LENGTH = 1K  /* last 1K of ITCM dedicated for relocated vector table*/
  stack(rwx)            : ORIGIN = 0x2001fff8, LENGTH = 0K /* In DTCM*/

  rom_b1s0 (rx)         : ORIGIN = 0x08000000, LENGTH = 128K
  rom_b1s1 (rx)         : ORIGIN = 0x08020000, LENGTH = 128K
  rom_b1s234567 (rx)    : ORIGIN = 0x08040000, LENGTH = 768K
}

SECTIONS
{
    .nvic_vector :
    {
        *(.nvic_vector)  /* THIS refers to the section name in the C code*/
    } >ram_vectors AT>rom_b1s0

. . .

So at the end of the day, it's a little bit of code, and understanding the tools, to make the function addresses go in the right place in the memory.
« Last Edit: May 08, 2022, 11:53:03 am by Siwastaja »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #21 on: May 08, 2022, 12:29:20 pm »
My old Atlas MIPS board comes with a very old C toolchain, and you cannot define interrupt sections like you can with GCC, so what I am doing is a pretty wild hack:

crt0.s is written in assembly, its first instruction disables interrupts.

The main C-function contains a call to ISR_init(), which is where the interrupt table is defined.
ISR_init() does nothing but copy the address of every ISR function into the proper slot of the ISR table.

Then it enables interrupts.

It wastes a bit of code-space, but not so much. I use it to make my code more portable among different toolchains.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #22 on: May 08, 2022, 02:11:27 pm »
Of course, at the end of the day, all that matters is that the memory addresses of (or jump instructions to) the ISR functions go to the right place (usually near the beginning) of the flash memory. This part would be utterly trivial if you were programming by writing the output binary directly with a hex editor, without any tools (compilers or linkers) at all!

So the challenge is entirely figuring out how to make the tools do this for you. Nowadays, that very often means gcc and ld. Linker scripts in particular seem weird even to quite seasoned people, because the built-in scripts are usually good enough that few ever touch them; but you can pick up the basics by looking at examples and modifying them to your needs, without completely understanding how ld works.
 
The following users thanked this post: DiTBho

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How are interrupts handlers implemented?
« Reply #23 on: May 08, 2022, 07:14:14 pm »
My old Atlas MIPS board comes with a very old C toolchain, and you cannot define interrupt sections like you can with GCC, so what I am doing is a pretty wild hack:

crt0.s is written in assembly, its first instruction disables interrupts.

The main C-function contains a call to ISR_init(), which is where the interrupt table is defined.
ISR_init() does nothing but copy the address of every ISR function into the proper slot of the ISR table.

Then it enables interrupts.

It wastes a bit of code-space, but not so much. I use it to make my code more portable among different toolchains.

Mind, shouldn't be necessary on most any MCU that's got IVT in Flash -- but anything that has it in RAM, you bet.  For example, this is one of the first things the IBM-PC BIOS did -- 8086 boots to ffff0h (reset vector) with IVT hard wired at 0h and BIOS hard wired at f000:0-ffffh.  For basic operation (as in, RAM being usable at all), several peripherals need to be initialized first (PIT, DMA), and then interrupts can be installed (CPU faults, timers, DMA, keyboard..).  In the mean time, RAM is useless!  (It's DRAM: state leaks away over time.  Peripherals (PIT, DMA) have to be initialized to enable automatic refresh.)

Or likewise, anything that has IVT remappable into RAM -- but this is a simple matter of memcpy-ing the IVT and making whatever changes as needed, ahead of the remap.

Also, to further clarify this diversion --

In some architectures, interrupt handler vectors are actually jump instructions, but in ARM, they are the actual function addresses. Like this:

On 8086, they're DWORDs which give the "long" address, as is the usual pointer format for the CPU. (It's a 20-bit address space, but segmented, the top 4 bits only being accessible via segment registers (CS, DS, ES, SS). The middle 12 bits overlap -- the physical address is ((segment) << 4) + (index). Very redundant, but flexible in a backwards-compatible way I guess, which was the intent AFAIK.) So, "long" pointers are both 16-bit values together. (It's different in 386+ protected mode, but you're almost always going to be using an OS to handle that for you, so unless you're writing the OS yourself, the protected-mode IVT isn't something to worry about. Or the various other tables that PM uses.)

Also to be clear, AVR literally jumps to the interrupt address -- you could write the whole ISR right there in the IVT, if you guarantee nothing ever uses the intervening vectors and jumps into the middle of that ISR!  Neat, but not very useful. :D  So, 99.99% of use cases, you jump out into normal PROGMEM space and finish things up there.

Tim
« Last Edit: May 08, 2022, 07:22:30 pm by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: DiTBho

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How are interrupts handlers implemented?
« Reply #24 on: May 08, 2022, 07:53:50 pm »
I'm curious about what code is written in the back end (and beyond looking at how headers files go down a wormhole of bit's being defined in yet another file) for interrupts. So I know that when an interrupt occurs the processor saves the current state and runs off to a specified memory location. To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
The only right answer to this question is: it depends. It depends on which microcontroller / processor is used; there is no universal way. So please specify the microcontroller / processor you are interested in.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11888
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #25 on: May 08, 2022, 08:33:31 pm »
I'm curious about what code is written in the back end (and beyond looking at how headers files go down a wormhole of bit's being defined in yet another file) for interrupts. So I know that when an interrupt occurs the processor saves the current state and runs off to a specified memory location. To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
The only right answer to this question is: it depends. It depends on which microcontroller / processor is used; there is no universal way. So please specify the microcontroller / processor you are interested in.

Furthermore, the technical answer to this question with any particular microcontroller lies in the datasheet. Processors do not execute the C language, they execute machine code. The datasheet will specify exactly how the hardware handles interrupts and what machine code you have to write to process those interrupts.

C compilers present a high level abstraction that eventually is translated to machine code. Any given C compiler and toolchain generating code for specific hardware will have a particular way to abstract the interrupt handling and make it available to your program.

It is sometimes suggested that to get a good understanding of microcontrollers you should write some simple programs in machine language/assembly language. When dealing with interrupts, I/O and peripheral interfacing, this is an especially good idea. Once you follow the datasheet and see what the hardware is doing, you can get a much better idea of what your development environment is doing behind the scenes.
 

Offline cv007

  • Frequent Contributor
  • **
  • Posts: 826
Re: How are interrupts handlers implemented?
« Reply #26 on: May 08, 2022, 08:52:41 pm »
Here is the smallest generated code that will function for a cortex-m, which it seems you are focused on-

https://godbolt.org/z/K6Pfqq5oW

Built with no default libraries and no other startup code -- just a single source file and a linker script. There are no includes involved, there is no manufacturer code; the compiler is doing as it's told but adding nothing. This will only run an infinite loop, but it will run. It just shows that for a Cortex-M0 or similar, it's not a difficult job to get the vectors set up, and you can do it yourself if you are so inclined. Not necessarily an easy thing initially, but once the idea hits home it's certainly doable.

Using the manufacturer's code means you get a linker script and startup files, so you are only left with the task of creating an interrupt function with the same name as the one you want to use, which then 'overrides' the weak function the startup code created under that name. Certainly easier to use, until you want to do something unusual.

Here is a stm32/cortex-m0plus startup/linker example (which is in C++, makes no difference)-
https://github.com/cv007/NUCLEO32_G031K8_B/blob/main/startup.cpp
in this case the ram is used for the vector table, and the example starts to look more complicated than the simple example above, but not difficult once you understand the vector table is just a collection of function addresses, plus the initial stack top value.


Getting a general answer to the question without a specific mcu in mind does not work.
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How are interrupts handlers implemented?
« Reply #27 on: May 08, 2022, 09:12:50 pm »
Does that actually work, inline linker script?!  Or is that just an example (godbolt still uses whatever it uses stock)?

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #28 on: May 08, 2022, 09:24:44 pm »
Code: [Select]
#if 0 //linker script
   :
#endif

oooh...  That's sort-of cute!  Does anyone put their linker scripts in their C source and extract it with the C preprocessor for doing builds?
 

Offline cv007

  • Frequent Contributor
  • **
  • Posts: 826
Re: How are interrupts handlers implemented?
« Reply #29 on: May 08, 2022, 11:45:17 pm »
Quote
Or is that just an example
You cannot put a linker script in the online compiler, so it's just a listing of the linker script as it would appear in a linker script file. If you actually use that linker script code (in a linker script) and compile the startup code (in a source file), you get what is in the comments at the end of the online example (objdump).
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #30 on: May 09, 2022, 02:18:25 am »
Heh.  It's one of those contradictions of modern programming:
  • "To really understand how this works, you should write some code in bare assembly language with no vendor-provided code."
  • "This is a modern CPU; there is no reason for you to ever try to program it in assembly language!"
----
Creating and/or copying a vector table from Flash to RAM is pretty common (when possible.)  You need to do that, or something like that, to get maximum performance out of run-time changeable ISRs.  Or other reasons.

The CPUs that treat RESET as a type of exception somewhat complicate matters.  When that was less common, a CPU on reset would start at some location, and the vectors might be somewhere else (someone mentioned x86 with "start" in high memory and vectors in low memory...)  This means that "initial" vectors need to be in ROM/Flash somehow, which is ... so-so.
 

Offline cv007

  • Frequent Contributor
  • **
  • Posts: 826
Re: How are interrupts handlers implemented?
« Reply #31 on: May 09, 2022, 03:56:06 am »
Quote
"This is a modern CPU; there is no reason for you to ever try to program it in assembly language!"
So how does that idea fail when using a cortex-m? Except for some things like mrs/dsb/nop instructions, what else requires one to get into assembly if they would rather not?

Quote
This means that "initial" vectors need to be in ROM/Flash somehow, which is ... so-so.
Not all of them. In the link for the stm32 startup, only the stack, reset/nmi/hardfault addresses are in flash. Could probably get by with just the first two, but if a hardfault takes place in the code before the vectors are setup, then you get to a known location so can probably figure out what you did wrong instead of ending up who knows where. Once working, could probably eliminate the latter two, but makes little difference so they stay in place.
 

Offline HwAoRrDk

  • Super Contributor
  • ***
  • Posts: 1477
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #32 on: May 09, 2022, 03:59:59 am »
Also to be clear, AVR literally jumps to the interrupt address -- you could write the whole ISR right there in the IVT, if you guarantee nothing ever uses the intervening vectors and jumps into the middle of that ISR!  Neat, but not very useful. :D

I remember reading some blog post where the author did just this - put the whole ISR in the IVT. Don't remember what the overall purpose of the code was, but the author wanted super-minimal latency on the ISR, and it was small enough to put in the IVT. I think also it was the only interrupt to be handled (apart from reset vector, obviously), so the entire remaining table could be used.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: How are interrupts handlers implemented?
« Reply #33 on: May 09, 2022, 08:09:02 am »
Quote
ARM Cortex CPUs have been designed so that interrupt handlers can be completely normal functions. The CPU internally (in hardware, not software) saves the state of whatever was running, by pushing registers in stack and popping them back after the function returns. Also because the vector table is just a list of function addresses (and usually relocatable in RAM), this can't get any easier for the programmer.

Coming from an assembler background, and Z80 etc where you have to save everything yourself, it took me a while to realise this :)

However, the ISR still has to clear the interrupt source (the IP - interrupt pending - or whatever bit). And that in turn enables lower priority interrupts to get serviced, so you can choose the point at which you clear that bit. I often wrote ISRs where I cleared the IP right away, which I suspect few people do. It was often necessary because the old CPUs were relatively slow.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #34 on: May 09, 2022, 08:43:34 am »
Quote
ARM Cortex CPUs have been designed so that interrupt handlers can be completely normal functions
However, the ISR still has to clear the interrupt source (the IP - interrupt pending - or whatever bit). And that in turn enables lower priority interrupts to get serviced, so you can choose the point at which you clear that bit.

This is incorrect; you don't need to clear anything. Lower priority interrupts get served as soon as the higher priority ISR function returns.

However, some peripherals may require clearing an interrupt status bit in a peripheral register, but this is completely manufacturer-specific and not related to the ARM core. Often no such clear is needed; for example, a data register read access often also clears the peripheral interrupt signal.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: How are interrupts handlers implemented?
« Reply #35 on: May 09, 2022, 09:15:16 am »
Quote
. Lower priority interrupts get served as soon as the higher priority ISR function returns.

The CPU must then contain an up/down counter which counts calls and returns of nested function calls within the ISR, and enables lower priority interrupts when the counter returns to zero. Or maybe they save the SP and look for when it matches again. I looked through the ST HAL code ISRs and it is extremely convoluted but they seem to be clearing the IP bits when appropriate, but do nothing else regarding interrupts. Their ISRs are huge...

On the Z80 etc families, you have an IRET/RETI instruction which re-enabled the lower priority ints.

I did a google to try to find out how the ARM32 "RETI" (which doesn't exist as such) is implemented but found nothing. And obviously an ISR can call functions...
« Last Edit: May 09, 2022, 12:38:17 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #36 on: May 09, 2022, 09:54:30 am »
Does anyone put their linker scripts in their C source and extract it with the C preprocessor for doing builds?

No, because it's evil and more prone to fail  :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #37 on: May 09, 2022, 03:27:47 pm »
Quote
. Lower priority interrupts get served as soon as the higher priority ISR function returns.

The CPU must then contain an up/down counter which counts calls and returns of nested function calls within the ISR, and enables lower priority interrupts when the counter returns to zero.

They use a magic value in the return address register (LR) that tells the core how to restore the state. Attempting to load that value into the PC, e.g. via a conventional function return, triggers the interrupt-return behavior.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #38 on: May 09, 2022, 04:23:18 pm »
Does anyone put their linker scripts in their C source and extract it with the C preprocessor for doing builds?

No, because it's evil and more prone to fail  :D

Then I'm convinced someone has not only done it but also mandated it as the only correct style within their tiny fiefdom :>
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #39 on: May 09, 2022, 06:55:21 pm »
Then I'm convinced someone has not only done it but also mandated it as the only correct style within their tiny fiefdom :>

yup, like what Infineon did with their RAD  :o :o :o
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How are interrupts handlers implemented?
« Reply #40 on: May 09, 2022, 09:48:47 pm »
I'm curious about what code is written in the back end (and beyond looking at how headers files go down a wormhole of bit's being defined in yet another file) for interrupts. So I know that when an interrupt occurs the processor saves the current state and runs off to a specified memory location. To me the user this translates into the automatic calling of a function that the chip manufacturer has predefined. But what code has the manufacturer written in order to have that function be placed in a certain physical location in memory?
The only right answer to this question is: it depends. It depends on which microcontroller / processor is used; there is no universal way. So please specify the microcontroller / processor you are interested in.

Furthermore, the technical answer to this question with any particular microcontroller lies in the datasheet. Processors do not execute the C language, they execute machine code. The datasheet will specify exactly how the hardware handles interrupts and what machine code you have to write to process those interrupts.

C compilers present a high level abstraction that eventually is translated to machine code. Any given C compiler and toolchain generating code for specific hardware will have a particular way to abstract the interrupt handling and make it available to your program.
And that isn't even true in all cases. On ARM Cortex-M microcontrollers you do not need assembly at all to get the microcontroller going. The CPU core is designed to call C functions directly from an interrupt vector (including main()). Like I wrote: how interrupts are handled depends entirely on how the CPU core and interrupt handling are implemented. On some controllers the interrupts are handled by a separate peripheral!
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14474
  • Country: fr
Re: How are interrupts handlers implemented?
« Reply #41 on: May 10, 2022, 02:17:45 am »
If you're specifically considering ARM Cortex-M targets, and have questions about priorities, the following may help: https://community.arm.com/arm-community-blogs/b/embedded-blog/posts/cutting-through-the-confusion-with-arm-cortex-m-interrupt-priorities
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #42 on: May 10, 2022, 02:35:51 am »
Quote
. Lower priority interrupts get served as soon as the higher priority ISR function returns.

The CPU must then contain an up/down counter which counts calls and returns of nested function calls within the ISR, and enables lower priority interrupts when the counter returns to zero.

They use a magic value in the return address register (LR) that tells the core how to restore the state. Attempting to load that value into the PC, e.g. via a conventional function return, triggers the interrupt-return behavior.

What does this magic value look like?
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #44 on: May 10, 2022, 04:51:05 am »
Quote
What does this magic value look like?
From ARMv6m (CM0, CM0+) Architecture reference manual (section B.1.5.6 "Exception Entry Behavior")
Code: [Select]
If CONTROL.SPSEL == '0' then
    LR = 0xFFFFFFF9;
else
    LR = 0xFFFFFFFD;
SPSEL says which stack pointer is used (there are two.)


CM3/CM4 (ARMv7m) is slightly more complex, with 6 different magic values (From FFFFFFE1 to FFFFFFFD) depending on mode (Thread/Handler), Stack (Main/Process), and whether it's saving floating point context or not.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #45 on: May 10, 2022, 05:36:43 am »
Quote
What does this magic value look like?
From ARMv6m (CM0, CM0+) Architecture reference manual (section B.1.5.6 "Exception Entry Behavior")
Code: [Select]
If CONTROL.SPSEL == '0' then
    LR = 0xFFFFFFF9;
else
    LR = 0xFFFFFFFD;
SPSEL says which stack pointer is used (there are two.)


CM3/CM4 (ARMv7m) is slightly more complex, with 6 different magic values (From FFFFFFE1 to FFFFFFFD) depending on mode (Thread/Handler), Stack (Main/Process), and whether it's saving floating point context or not.

That's very interesting. That sounds to me like some ROM with 4 bytes of Thumb code at each entry point.

I don't think I have any boards with any of the above cores (my ARM stuff is all Cortex A). I have a Teensy with a CM7. Might be interesting to poke around.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #46 on: May 10, 2022, 07:28:39 am »
I don't like it. ARM was simpler years ago.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #47 on: May 10, 2022, 08:48:27 am »
I think I'll continue this here rather than in the more specific thread about how the magic return value works.

I'm looking at an NXP document: https://www.nxp.com/docs/en/application-note/AN12078.pdf

It lists interrupt latency for various cores as:

CPU core       Cycles
Cortex-M0      16
Cortex-M0+     15
Cortex-M3/M4   12
Cortex-M7      10~12

This document shows toggling a GPIO pin on and off after a timer interrupt (which also sends a signal to an output pin) using the following code on an i.MX RT1050 (Cortex-M7) with zero wait state memory:

Code: [Select]
LDR.N R0, [PC, #0x78] ; GPIO2_DR
MOV.W R1, #8388608 ; 0x800000
STR R1, [R0]
MOVS R1, #0
STR R1, [R0]
BX LR ; not shown but I assume

With an oscilloscope they get figures of 10 cycles to enter the interrupt handler, 34 cycles to toggle the pin on, 32 cycles to toggle the pin off. (STR to IO space is much slower than the core speed)


Cortex-M is easy to use, and that's cool, but very "one size fits all". There WILL have been 8 words of stuff stacked by the time you get to the first instruction in your own handler code.

RISC-V instead puts you in the handler with only a pipeline flush of delay (typically 2-3 cycles), but nothing at all has been saved. But it does give you flexibility.

There are some examples in:

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#interrupt-handling-software

Here's a simple non-preemptable interrupt handler that just increments a counter in RAM.

Code: [Select]
      addi sp, sp, -8                # Create a frame on stack.
      sw a0, 0(sp)                   # Save working register.

      sw a1, 4(sp)                   # Save working register.
      lui a0, %hi(INTERRUPT_FLAG)

      sw x0, %lo(INTERRUPT_FLAG)(a0) # Clear interrupt flag.
      lui a1, %hi(COUNTER)

      addi a1, a1, %lo(COUNTER)      # Get counter address.
      li a0, 1

      amoadd.w x0, (a1), a0          # Increment counter in memory.

      lw a1, 4(sp)                   # Restore registers.
      lw a0, 0(sp)

      addi sp, sp, 8                 # Free stack frame.
      mret                           # Return from handler using saved mepc.

I've rearranged that slightly from the code at the link, expanding two pseudo-instructions, assigning concrete frame size, and scheduling and grouping instructions for a hypothetical simple in-order dual-issue core that can do two stores (into a store buffer) or two ALU ops in the same clock cycle, and the 2nd ALU op can depend on the first one (skewed pipes). If I understand the materials I found properly, this is right for the Cortex-M7, so I'm assuming similar µarch for a RISC-V.

What we see is that we're already into the first instruction of the actual useful interrupt code with two working registers available on the 6th clock cycle (3rd for dual-issue), or probably 9 and 6 cycles respectively once you add the pipeline refill.

This same example needs only the amoadd modified to instead set or clear a GPIO pin. Something like reading a character from a UART buffer and writing it into a software buffer could be done with the same two working registers and a handful more instructions.


There is example code at ...

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#c-abi-trampoline-code

... for enabling interrupt handlers to be written as standard ABI C functions, with support for interrupt chaining and late-arrival of high priority interrupts. There is extensive commentary there of which parts are run with interrupts disabled and which with interrupts enabled, and also how it all works in general.

The code there is for the standard RISC-V ABI, which requires 16 registers to be saved, vs 8 (including PSW) on Cortex-M.

There are proposals to define an "embedded ABI" with fewer argument registers (perhaps 4 like ARM, vs 8 normally) and fewer temporary registers (perhaps 2 instead of 7) so that only maybe 7 registers need to be saved. While this would certainly make interrupt latency for C handlers much lower, experiments with modifying the compiler for this ABI show slowdown and code expansion of normal mainline (background) code of up to 30% because of all the extra register spills required.

So unless the interrupt rate is extremely high or the background processing undemanding, it's probably better to stick with the standard ABI! And if there is some particular interrupt that needs very low latency, it can always be written in assembly language. Or in C using __attribute__((interrupt)), which saves only the registers the function actually uses -- calling a normal ABI function from the interrupt function results in a full register save.
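A minimal sketch of that attribute, assuming GCC on a RISC-V target (the name timer_isr and the tick counter are mine): with the attribute, the compiler saves only the registers the function touches and returns with mret; the guard lets the same file build as ordinary code on a host.

```c
#include <stdint.h>

volatile uint32_t ticks;

/* On RISC-V GCC, __attribute__((interrupt)) makes the compiler emit an
   interrupt-safe prologue/epilogue that preserves only the registers this
   function actually uses, and return via mret.  Calling a normal-ABI
   function from inside would force a full caller-saved register save, as
   noted above.  On a host build the attribute is simply omitted and this
   is an ordinary function. */
#if defined(__riscv)
__attribute__((interrupt))
#endif
void timer_isr(void)
{
    ticks++;   /* touches only a register or two, so the saved set is tiny */
}
```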
« Last Edit: May 10, 2022, 08:52:25 am by brucehoult »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #48 on: May 10, 2022, 08:53:44 am »
I don't like it. ARM was simpler years ago.

Simpler internally, or simpler to use?
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #49 on: May 10, 2022, 10:04:14 am »
Simpler internally, or simpler to use?

Internally. I am a RISC-purist, MIPS-addicted.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: How are interrupts handlers implemented?
« Reply #50 on: May 10, 2022, 11:03:35 am »
Something I missed when they went to the Cortex architecture is the FIQ, with its dedicated register bank and ability to put the ISR directly at the vector address. This could achieve extremely low latency, allowing tricks like reading low-res data directly from cameras.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 
The following users thanked this post: hans

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #51 on: May 10, 2022, 11:24:59 am »
Simpler internally, or simpler to use?

Internally. I am a RISC-purist, MIPS-addicted.

Yup. MIPS and RISC-V are very similar in this. But always reserving $k0 and $k1 for interrupts seems a bit bodgy. The RISC-V solution to "how do I get a register to work with?" is the mscratch CSR ... in MIPS terms, an unused register in CP0 that the interrupt handler can just write a GPR into (or swap with).
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #52 on: May 10, 2022, 12:23:18 pm »
PowerPC and POWER have a similar trick. Not tricky to implement in a Cop0, hence it's welcome for me  :D

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #53 on: May 10, 2022, 10:19:02 pm »
Quote
I don't like it. ARM was simpler years ago.
I'm inclined to agree.
I tried writing a disassembler for CM4, and those thumb2 instruction encodings are just AWFUL, in ways that I thought RISC intentionally avoided.  Plain thumb (CM0) isn't too bad, but it has a lot of non-orthogonality and special casing that I again thought would have been foreign to "principles."
I guess the increase in code density is considered worthwhile (at a time when most of a microcontroller die is occupied by code memory), but it's pretty ugly.

Similarly, the NVIC is neat, but ... removes choices from the programmer.  I prefer the sort of "has vectors, but how much context to save is all up to you" of some of the simpler architectures.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: How are interrupts handlers implemented?
« Reply #54 on: May 10, 2022, 10:31:39 pm »
I don't like it. ARM was simpler years ago.

it was also slower, used more memory and required you to jump through assembly hoops to get things done
 
The following users thanked this post: Siwastaja

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #55 on: May 11, 2022, 02:14:11 am »
I tried writing a disassembler for CM4, and those thumb2 instruction encodings are just AWFUL, in ways that I thought RISC intentionally avoided.  Plain thumb (CM0) isn't too bad, but it has a lot of non-orthogonality and special casing that I again thought would have been foreign to "principles."

Right.

T16 has a lot more complex encoding than A32, with 19 different instruction formats.

RISC-V C extension also has more complex encoding than the base ISA, with 8 instruction formats vs 4. At least it maintains the property of the base ISA of the bottom three bits of rs1 and rs2 always being in the same place (if they exist) and the MSB (sign bit) of immediate/offset always being in the same place. Which of the two possible register fields is rd (if it exists) does vary though.
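That register-field property can be shown concretely. In the compressed CS/CA formats, the 3-bit rs1'/rd' field sits in bits 9:7 and rs2' in bits 4:2, both mapping to registers x8..x15; the helper names below are mine, for illustration only.

```c
#include <stdint.h>

/* Extract the compressed register fields from a 16-bit RVC instruction in
   the CS/CA formats: rs1'/rd' in bits 9:7, rs2' in bits 4:2, each encoding
   one of x8..x15. */
static unsigned rvc_rs1_prime(uint16_t insn) { return 8u + ((insn >> 7) & 0x7u); }
static unsigned rvc_rs2_prime(uint16_t insn) { return 8u + ((insn >> 2) & 0x7u); }
```

For example, if I've assembled it correctly, 0x8D6D encodes c.and a0, a1, and the helpers recover x10 (a0) and x11 (a1) from it.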

T32 encoding seems just random and ugly. It has the excuse of having to fit in around the two actually independent instructions that make up each of the T16 BL/BLX instructions. In T16 you can separate the two instructions and they still work (though assemblers and compilers never do), but in T32 they are actual 32 bit instructions.

A64 encoding seems equally ugly, for no reason apparent to me.

Quote
I guess the increase in code density is considered worthwhile (at a time when most of a microcontroller die is occupied by code memory), but it's pretty ugly.

T16 was also constrained by having to be something close to a complete and efficient ISA in itself, at least for code that a C compiler would generate. The original CPUs could always switch back to A32 mode if you needed a weird thing (such as the hi half of a multiply, or some system function) but you couldn't just randomly and efficiently throw a 32 bit opcode in the middle of 16 bit code. The CM0 has a handful of T32 instructions for those purposes, and you can intermix them.

RVC was designed with the knowledge that it didn't have to be complete, because using a full size instruction instead is always possible at any point.

Quote
Similarly, the NVIC is neat, but ... removes choices from the programmer.  I prefer the sort of "has vectors, but how much context to save is all up to you" of some of the simpler architectures.

NVIC is easy, but one size fits all. Short simple functions that need only one or two working registers can get lower latency with a simpler mechanism.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #56 on: May 11, 2022, 02:18:23 am »
I don't like it. ARM was simpler years ago.

it was also slower, used more memory and required you to jump through assembly hoops to get things done

True, but the necessary assembly language to provide all NVIC features is very nearly as fast (maybe faster on a dual-issue or wider core), only a couple of dozen instructions long, and can literally be printed in the manual, or supplied in platform or compiler libraries so 99% of programmers don't have to write or understand it themselves.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #57 on: May 11, 2022, 04:10:33 am »
I don't like it. ARM was simpler years ago.

it was also slower, used more memory and required you to jump through assembly hoops to get things done

I'm not any kind of CPU architecture expert, but the argument that "ARM Cortex-M interrupts follow the C calling convention so you can just write C handlers" always seemed a bit pointless.  Plenty of other platforms let you write C ISRs with just a keyword or attribute to specify the calling convention.  It's trivial for the compiler to add a slightly different prologue / epilogue which can then be partially optimized away based on the registers actually used by the ISR.  Saving and restoring half the register file plus checking for magic values on branch instructions seems like an awful lot of baggage to avoid adding __attribute__((interrupt)) to at most a couple dozen functions in your project.  Again, I'm not an expert and there may be other reasons why the ARM approach is desirable, but that particular argument doesn't seem particularly compelling.
 
The following users thanked this post: JPortici

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How are interrupts handlers implemented?
« Reply #58 on: May 11, 2022, 08:24:22 am »
I don't like it. ARM was simpler years ago.

it was also slower, used more memory and required you to jump through assembly hoops to get things done

I'm not any kind of CPU architecture expert, but the argument that "ARM Cortex-M interrupts follow the C calling convention so you can just write C handlers" always seemed a bit pointless.  Plenty of other platforms let you write C ISRs with just a keyword or attribute to specify the calling convention.  It's trivial for the compiler to add a slightly different prologue / epilogue which can then be partially optimized away based on the registers actually used by the ISR.  Saving and restoring half the register file plus checking for magic values on branch instructions seems like an awful lot of baggage to avoid adding __attribute__((interrupt)) to at most a couple dozen functions in your project.  Again, I'm not an expert and there may be other reasons why the ARM approach is desirable, but that particular argument doesn't seem particularly compelling.
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: newbrain

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #59 on: May 11, 2022, 10:38:40 am »
Also actual complex systems benefit from tail-chaining, which is realistically only possible if the hardware does the stacking (because hardware can trivially check if another interrupt is pending).

With software push/pop, you save a few cycles on some simple handlers by not stacking everything, but then if another IRQ gets pending during the first, the sooner you get there the better, but having the software stupidly pop all the registers just to push them all again in the next handler is wasted time and this happens in the worst case, making long wait even longer.
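The benefit of chaining can be modelled in a few lines of C. This is only a toy model (all names are mine): stacking and unstacking are represented by a counter, and a chaining dispatcher saves registers once on entry, keeps servicing handlers while anything is still pending, and restores once at the end, instead of popping everything just to push it all again for the next back-to-back interrupt.

```c
#include <stdint.h>

static uint32_t stack_ops;        /* count of full register save/restores */
static uint32_t pending;          /* bitmask of pending interrupt sources */

static void stack_registers(void)   { stack_ops++; }   /* model: one save    */
static void unstack_registers(void) { stack_ops++; }   /* model: one restore */
static void run_handler(int irq)    { pending &= ~(1u << irq); }

/* Tail-chaining dispatch: one stacking on entry, one unstacking on exit,
   regardless of how many interrupts become pending in between. */
void dispatch_with_chaining(void)
{
    stack_registers();
    while (pending) {
        int irq = __builtin_ctz(pending);   /* lowest pending source       */
        run_handler(irq);                   /* chains directly to the next */
    }
    unstack_registers();
}
```

With three sources pending, this performs 2 save/restore operations rather than the 6 a naive pop-then-repush scheme would do.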
 
The following users thanked this post: nctnico, newbrain

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #60 on: May 11, 2022, 11:38:03 am »
Also actual complex systems benefit from tail-chaining, which is realistically only possible if the hardware does the stacking (because hardware can trivially check if another interrupt is pending).

With software push/pop, you save a few cycles on some simple handlers by not stacking everything, but then if another IRQ gets pending during the first, the sooner you get there the better, but having the software stupidly pop all the registers just to push them all again in the next handler is wasted time and this happens in the worst case, making long wait even longer.

That's easy to avoid with suitable hardware design.

Here, once again, is standard (i.e. published for all to use) RISC-V interrupt handler code that implements interrupt chaining and late arrival of high priority interrupts.

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#c-abi-trampoline-code

If an interrupt comes in while another handler is executing, there are 5 instructions from one handler returning to the new one being called.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: How are interrupts handlers implemented?
« Reply #61 on: May 11, 2022, 02:30:50 pm »
Also actual complex systems benefit from tail-chaining, which is realistically only possible if the hardware does the stacking (because hardware can trivially check if another interrupt is pending).

With software push/pop, you save a few cycles on some simple handlers by not stacking everything, but then if another IRQ gets pending during the first, the sooner you get there the better, but having the software stupidly pop all the registers just to push them all again in the next handler is wasted time and this happens in the worst case, making long wait even longer.

That's easy to avoid with suitable hardware design.

Here, once again, is standard (i.e. published for all to use) RISC-V interrupt handler code that implements interrupt chaining and late arrival of high priority interrupts.

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#c-abi-trampoline-code

If an interrupt comes in while another handler is executing, there are 5 instructions from one handler returning to the new one being called.

so a bunch of carefully handcrafted assembly taking up code memory and probably with waitstates

 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: How are interrupts handlers implemented?
« Reply #62 on: May 11, 2022, 03:37:00 pm »
so a bunch of carefully handcrafted assembly taking up code memory and probably with waitstates

Indeed, a hardware stacker can run in parallel with flash controller prefetching the vector address, and then, prefetching the first instructions of the ISR. With software solution, you just wait for the flash, doing nothing, and then start stacking.

But as always, the devil is in the details, and I'm 100% sure there are many cases where the software solution ends up being faster. But I like the ARM Cortex way, really. It gives consistently small (albeit not always the absolute minimum) latency, minimizes code size, and enables standard functions to be used as handlers, although as ejeffrey says, the last one isn't practically that important.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #63 on: May 11, 2022, 03:46:10 pm »
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.

I've never used ARM7TDMI, but I wasn't really comparing a vectored vs. non-vectored controller, I was more comparing it to something like x86 or (to my understanding) 68k where there is an interrupt vector table but the CPU only saves minimal state and ISRs use a dedicated iret instruction to restore processor state and return.  On these systems you can still write ISRs in C if you just mark them as such to the compiler.  Like I said, I'm not an expert on the performance and complexity tradeoffs but "the CPU implements the platform C ABI and uses a magic return value so you can write handlers in C" is a bit of a silly argument on ARM's part because it's been possible to write ISRs in C for ages.

That said, an interrupt demultiplexer (which could be, but in no way needs to be, written in assembly, as long as you have macros/intrinsics for accessing the interrupt source) is pretty simple and low-overhead and only needs to be written once.  Maybe that makes ISR latency worse or is in some other way less desirable, but simply avoiding it is not a very compelling argument to me.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: How are interrupts handlers implemented?
« Reply #64 on: May 11, 2022, 04:09:40 pm »
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.

I've never used ARM7TDMI, but I wasn't really comparing a vectored vs. non-vectored controller, I was more comparing it to something like x86 or (to my understanding) 68k where there is an interrupt vector table but the CPU only saves minimal state and ISRs use a dedicated iret instruction to restore processor state and return.  On these systems you can still write ISRs in C if you just mark them as such to the compiler. 

but then you'll have multiple copies of the stacking/restoring code taking up flash, and probably with waitstates so it is slow

 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14474
  • Country: fr
Re: How are interrupts handlers implemented?
« Reply #65 on: May 11, 2022, 05:16:15 pm »
Also actual complex systems benefit from tail-chaining, which is realistically only possible if the hardware does the stacking (because hardware can trivially check if another interrupt is pending).

With software push/pop, you save a few cycles on some simple handlers by not stacking everything, but then if another IRQ gets pending during the first, the sooner you get there the better, but having the software stupidly pop all the registers just to push them all again in the next handler is wasted time and this happens in the worst case, making long wait even longer.

That's easy to avoid with suitable hardware design.

Here, once again, is standard (i.e. published for all to use) RISC-V interrupt handler code that implements interrupt chaining and late arrival of high priority interrupts.

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#c-abi-trampoline-code

If an interrupt comes in while another handler is executing, there are 5 instructions from one handler returning to the new one being called.

It's hard to beat a completely hardware solution here though. Now granted it may just be a matter of a couple cycles, and the software approach is more flexible.
The beauty of RISC-V apart from its simplicity is that you can easily extend it. With ARM, you get what they give (uh, sell) you.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #66 on: May 11, 2022, 05:50:47 pm »
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.

I've never used ARM7TDMI, but I wasn't really comparing a vectored vs. non-vectored controller, I was more comparing it to something like x86 or (to my understanding) 68k where there is an interrupt vector table but the CPU only saves minimal state and ISRs use a dedicated iret instruction to restore processor state and return.  On these systems you can still write ISRs in C if you just mark them as such to the compiler. 

but then you'll have multiple copies of the stacking/restoring code taking up flash, and probably with waitstates so it is slow

Yes, it seems pretty likely that if your typical ISR requires all 8 caller saved registers it's probably more efficient to have the CPU do it, especially on a microcontroller that is executing from flash with wait states but can access the stack in SRAM in a single cycle.  That said ARM has STM/LDM that could do the register stacking/unstacking with a single instruction each so it wouldn't really be saving much space or time even executing from flash.  The tradeoff is that if your ISR only requires 2-3 registers -- which would be typical of an ISR that reads a word from an IO register and stores it in a buffer -- you are doing unnecessary save/restores.

I definitely don't know enough to know how often these situations come up in real applications.  My point was just that the argument "this approach is great because you can use C functions as ISRs" is both unimportant and disingenuous.  Interrupt latency and performance matter, the ability to use standard calling convention C functions as ISRs matter very little.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: How are interrupts handlers implemented?
« Reply #67 on: May 11, 2022, 08:27:37 pm »
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.

I've never used ARM7TDMI, but I wasn't really comparing a vectored vs. non-vectored controller, I was more comparing it to something like x86 or (to my understanding) 68k where there is an interrupt vector table but the CPU only saves minimal state and ISRs use a dedicated iret instruction to restore processor state and return.  On these systems you can still write ISRs in C if you just mark them as such to the compiler. 

but then you'll have multiple copies of the stacking/restoring code taking up flash, and probably with waitstates so it is slow

Yes, it seems pretty likely that if your typical ISR requires all 8 caller saved registers it's probably more efficient to have the CPU do it, especially on a microcontroller that is executing from flash with wait states but can access the stack in SRAM in a single cycle.  That said ARM has STM/LDM that could do the register stacking/unstacking with a single instruction each so it wouldn't really be saving much space or time even executing from flash.  The tradeoff is that if your ISR only requires 2-3 registers -- which would be typical of an ISR that reads a word from an IO register and stores it in a buffer -- you are doing unnecessary save/restores.

but the hardware is clever enough to fetch instructions in parallel with the stacking (and unstacking) so there probably isn't much to gain


I definitely don't know enough to know how often these situations come up in real applications.  My point was just that the argument "this approach is great because you can use C functions as ISRs" is both unimportant and disingenuous.  Interrupt latency and performance matter, the ability to use standard calling convention C functions as ISRs matter very little.

maybe, but not requiring the compiler to support some special decoration of ISRs is convenient, and the code generation should be very well optimized and debugged for using registers according to the normal calling convention
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #68 on: May 11, 2022, 08:49:53 pm »
Also actual complex systems benefit from tail-chaining, which is realistically only possible if the hardware does the stacking (because hardware can trivially check if another interrupt is pending).

With software push/pop, you save a few cycles on some simple handlers by not stacking everything, but then if another IRQ gets pending during the first, the sooner you get there the better, but having the software stupidly pop all the registers just to push them all again in the next handler is wasted time and this happens in the worst case, making long wait even longer.

That's easy to avoid with suitable hardware design.

Here, once again, is standard (i.e. published for all to use) RISC-V interrupt handler code that implements interrupt chaining and late arrival of high priority interrupts.

https://github.com/riscv/riscv-fast-interrupt/blob/master/clic.adoc#c-abi-trampoline-code

If an interrupt comes in while another handler is executing, there are 5 instructions from one handler returning to the new one being called.

so a bunch of carefully handcrafted assembly taking up code memory and probably with waitstates

Yes, carefully crafted once by the experts who designed the interrupt hardware, in parallel with designing that hardware. For an ABI similar to the Cortex-M one I make it 94 bytes of code. Wait states are up to the implementation. A chip manufacturer could put the code in on-chip ROM. The linker script could put it in SRAM -- ITCM in ARM terminology, ITIM in the RISC-V world (or at least at SiFive).
« Last Edit: May 11, 2022, 10:34:56 pm by brucehoult »
 

Offline MadScientist

  • Frequent Contributor
  • **
  • Posts: 439
  • Country: 00
Re: How are interrupts handlers implemented?
« Reply #69 on: May 11, 2022, 09:45:01 pm »
To get back to the core question: interrupt programming doesn't require any "manufacturer's" code or predetermined setup. If you are writing in assembler, you just write your interrupt function following the constraints of the processor architecture.
If writing in C, then the startup code, which you can equally write from scratch yourself, will typically place dummy null vectors, and you code your C function, instructing the linker to place the file appropriately.

Again, there is no "magic" code. Hence any IDE or toolchain can be used.

Of course C compilers are provided with "typical" startup code, or manufacturers will provide "canned" sample code, etc. But this isn't needed; you can write your own quite easily. (And yes, it can all be in C, no assembly required!)

At the end of the day, all an interrupt is, essentially, is an abrupt change of the program counter; the processor then expects instructions or code to be at that location. All languages designed for embedded use have facilities to place code at specific memory locations.
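The "no magic" point can be sketched in plain C. This is only an illustration with names of my choosing: a vector table is just an array of function pointers that the hardware (or startup code) indexes. On a real Cortex-M part you would additionally mark the array with something like __attribute__((section(".isr_vector"))) and let the linker script pin it to the address the core fetches vectors from; here the dispatch is modelled by indexing the table directly.

```c
#include <stdint.h>

typedef void (*isr_t)(void);

static volatile uint32_t events;

static void default_handler(void) { }            /* dummy null vector */
static void uart_handler(void)    { events++; }  /* a real handler    */

/* The vector table: just data placed where the hardware expects it. */
static const isr_t vector_table[] = {
    default_handler,    /* slot 0: unused source */
    uart_handler,       /* slot 1: UART          */
    default_handler,    /* slot 2: unused source */
};

/* Host-side model of the hardware dispatch: look up and call the handler. */
void raise_irq(unsigned n) { vector_table[n](); }
```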
« Last Edit: May 11, 2022, 09:52:14 pm by MadScientist »
EE's: We use silicon to make things  smaller!
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: How are interrupts handlers implemented?
« Reply #70 on: May 12, 2022, 05:18:00 am »
Quote
yes it can all be in C , no assembly required !
That depends on the architecture.  You can't do a PIC8 or AVR8 ISR without either ASM or hooks built into the compiler (the hypothetical "ISR" attribute/tag/macro/pragma/whatever).  I don't think you could do an ARM32 (pre-NVIC) one, either.  C does not have explicit access to the stack, nor to some of the context that needs to be saved.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: How are interrupts handlers implemented?
« Reply #71 on: May 12, 2022, 11:57:02 am »
R16K was the last MIPS-IV released to the public before its dead end; MIPS had then moved on to MIPS32 and MIPS64.

My Atlas-YellowKnife accepts CPU modules, and I received a MIPS-IV R18200 prototype as a sample from a little company.

It's an FPGA CPU board, but adapted for the Atlas motherboard released years ago by MIPS Inc. Same connectors, etc.

After the R16K there was a plan to add a Nested Vectored Interrupt Controller, but it was removed, and cop0 has no hardware support for nested interrupts. According to the user manual, nested interrupts also sound banned on the software side.

MIPS is dead like the walking dead; if you listen carefully, you can hear what it's saying from its tomb: "nested interrupts are eeevvviiilll"!

Harsh words, not to be underestimated :o :o :o
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: How are interrupts handlers implemented?
« Reply #72 on: May 12, 2022, 10:46:56 pm »
MIPS is dead like a walking-dead, if you listen to carefully, you can hear from its tomb what it's saying - "nested interrupts are eeevvviiilll"!

They're making RISC-V stuff now (first chips were announced this week, shipping in September) so they don't have a choice :-)
 
The following users thanked this post: DiTBho

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How are interrupts handlers implemented?
« Reply #73 on: May 13, 2022, 08:26:43 am »
It is not that simple. On the older ARM architectures (like ARM7TDMI) you'll need a wrapper (written in assembler) to demultiplex the interrupts from a vectored interrupt handler peripheral. This more or less requires you to save all the registers on the stack anyway.  Also, many ARM Cortex controllers have DMA nowadays which takes away the need for interrupts with a high repetition rate.

I've never used ARM7TDMI, but I wasn't really comparing a vectored vs. non-vectored controller, I was more comparing it to something like x86 or (to my understanding) 68k where there is an interrupt vector table but the CPU only saves minimal state and ISRs use a dedicated iret instruction to restore processor state and return.  On these systems you can still write ISRs in C if you just mark them as such to the compiler. 

but then you'll have multiple copies of the stacking/restoring code taking up flash, and probably with waitstates so it is slow

Yes, it seems pretty likely that if your typical ISR requires all 8 caller saved registers it's probably more efficient to have the CPU do it, especially on a microcontroller that is executing from flash with wait states but can access the stack in SRAM in a single cycle.  That said ARM has STM/LDM that could do the register stacking/unstacking with a single instruction each so it wouldn't really be saving much space or time even executing from flash.  The tradeoff is that if your ISR only requires 2-3 registers -- which would be typical of an ISR that reads a word from an IO register and stores it in a buffer -- you are doing unnecessary save/restores.

I definitely don't know enough to know how often these situations come up in real applications.  My point was just that the argument "this approach is great because you can use C functions as ISRs" is both unimportant and disingenuous.  Interrupt latency and performance matter, the ability to use standard calling convention C functions as ISRs matter very little.
It actually makes life a whole lot easier. In many cases the compiler-provided way (on older microcontrollers) depends on having separate vectors for each interrupt, which typically requires a jump (increasing interrupt latency) to the actual routine as well. Add nested interrupts to that and things get complicated quickly. When you start adding in stuff like naked C functions (which can be error-prone because the next software engineer may not understand what is going on), things can get really messy in terms of maintainability. OTOH the NVIC found in ARM Cortex-Mx microcontrollers solves all this in hardware and offers a very clean interface to the software developer. What is not to like about that?

On top of that, interrupt latency is highly overrated when it comes to modern microcontrollers running at tens of MHz. If your application depends on interrupt latency on a modern microcontroller, then there is something seriously wrong with how the system (hardware + software) has been designed. There are better ways to achieve the same goal (DMA, for example).
« Last Edit: May 13, 2022, 09:28:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: hans, cv007, Siwastaja

