Software interrupts are... I dunno, rather passé nowadays, anyway?
Like, in the x86 days, there were 256 interrupt vectors, and the BIOS and system only used the first couple dozen, leaving everything else open for software use. No one would build a system that actually uses all of them in hardware -- though you could if you wanted to -- and anything left over is usable as a software interrupt. This was analogous to earlier (8080-era) systems, where zero-page memory ($00 to $FF) might've been used to store important data like BIOS entry points and system variables. For example, MS-DOS loaded executables such that a
JMP 0 in the default segment (initial CS value) would terminate the program, because offset 0 of the Program Segment Prefix holds an INT 20h instruction. See:
https://en.wikipedia.org/wiki/Program_Segment_Prefix
I'm not familiar with the 6502 side, but I expect they used similar tricks, so this will all be familiar; in which case, this serves as a reminder of where we're coming from, and a more concrete example of how things might've been done.
AFAIK, modern systems largely link API calls into the executable -- the EXE contains an import table of symbols, and the locations they're called from, that the OS loader patches with its call addresses. (Which may be stubs in user space -- 32-bit Windows, for example, only gives the application 2GB of address space by default, 3GB with the /3GB switch. What the rest is mapped to, you're, I think, not supposed to know? Or they may be call gates or whatever, but I think those were mostly frowned upon for performance reasons?) There is still SYSCALL on *nix, which differs on x64 because the dedicated instruction replaced the old INT 0x80 mechanism and just kinda works better. (And I believe Windows uses it too, internally.) But the loader supports all the usual load-time and dynamic linking that's been used over its supported history, so you can still do whatever, give or take x86 and x64 conventions. Anyway, I'm hardly an expert on modern systems, even 386 era, so, just for flavor, more examples.
But if you're linking a binary blob, and that's all that's ever loaded onto the chip, it really doesn't matter how you jump to a particular piece of code, just call it from wherever: the linker has the address, ready to go, no indirection necessary.
And it's so easy to compile/link and program a new blob onto a chip of this size that it's hardly worth doing anything fancier than, say, a basic bootloader.
Then, if you are actually going to the trouble of crafting dynamic code updates, and need to load a symbol/jump table and entry points and all that, or want to write a full-blown operating system of sorts -- yeah, you may want to craft something like that.
If nothing else, you could override and extend the IVT, appending your own dummy entries, and then a
((void (*)(void))(interrupt_number * 2))();
will simply jump to and execute any one of them. (On ATmega, the IVT is traditionally populated with JMP $(ISR) instructions, which occupy two words each; AVR function pointers are word addresses, hence the *2, which will show as *4 byte addresses in the disassembler.) But beware: the
RETI at the end of the ISR re-enables interrupts, which may let a pending interrupt fire at an unexpected point, causing inconsistent behavior.
You also can't declare your own ISRs beyond what's defined for the platform, so you're kind of on your own there: you're just making more named functions -- in this case not overriding the default stubs but creating new entries -- and you'll want to review the header files, and a bit of the C runtime library, to see how it's done.
So, I would strongly recommend just using interrupts the way they're intended, and not emulating them in software. At worst, use something like a spare timer set to a single-count timeout, or a pin-change interrupt triggered by toggling a port pin, if you really must.
Anyway, you can craft your own jump table, and even assign it a fixed address across projects, if you really like. (It can be forced to a fixed base address in the same way the built-in IVT is -- via a linker section.) There's no standard here, nor any expectation for where one should be placed; like, you won't find libraries that rely on this (well, probably not?).
This is of course very different on ARM; for example, the vector table is relocatable on some platforms (via VTOR on most Cortex-M cores, I believe). I forget which cores (Cortex-M0? M4? A-anything? etc.) the various Arduinos are on, or most details of any of these platforms really, but suffice it to say there are various ways one could approach this. There is more/better support for dynamic objects on ARM, for example because Flash is often shadowed into RAM for performance reasons, and the program and linker need to be aware of both spaces. Or you can go so far as loading Linux entirely, on supported devices (usually Cortex-A parts -- full Linux needs an MMU), and work from there.
Anyway, as for stuff like keyboards: if it's polled, you control the timing. Simple as that. Run a heartbeat timer interrupt, scan the key matrix, run a debounce algorithm, and update key state in a global array. This can be turned into a proper BIOS-like function if you like (translating scan codes to key codes, ASCII or ANSI, buffering inputs), or left as is. Or sent through a callback table for event-driven logic (e.g.
onpress(KEY_NUM0), etc.) if you like.
Or if it's asynchronous, simply use the USART receive interrupt to fill a buffer, and poll for bytes from time to time in the
main() loop. Or parse it during the interrupt and translate into the above functionality; also fine.
A heartbeat interrupt is useful in general, for a wide variety of applications. If you want to implement a task system, it's a good way to trigger context switching. Most embedded applications that you'd use one of these MCUs for in the first place will have some manner of housekeeping activity in
main(), which can be triggered by waiting (whether
sleep() or spin-looping) until the heartbeat sets a flag, which
main() then clears -- perhaps the simplest realization of the producer-consumer design pattern -- before making state updates (e.g. polling high-latency devices, filtering variables, updating the display, queuing serial data, etc.).
Tim