Multiple devices can share one interrupt line, with software demultiplexing the interrupt to the various interrupt handlers.
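As a rough sketch of that demultiplexing in C (register and handler names here are illustrative, not from any real Atlas code; a variable stands in for the pending-status register):

```c
#include <stdint.h>

#define NUM_SOURCES 16

typedef void (*irq_handler_t)(int src);

static irq_handler_t handlers[NUM_SOURCES];
static volatile uint32_t pending_reg;   /* stand-in for the interrupt-status register */

static int dev_count[NUM_SOURCES];      /* demo bookkeeping for this sketch */
static void demo_isr(int src) { dev_count[src]++; }

/* Shared-line handler: read the pending bits once, dispatch every set
   bit to its registered handler, then acknowledge by clearing the bit. */
static void shared_irq(void)
{
    uint32_t pending = pending_reg;
    for (int i = 0; i < NUM_SOURCES; i++) {
        if ((pending & (1u << i)) && handlers[i]) {
            handlers[i](i);
            pending_reg &= ~(1u << i);  /* "event got processed" */
        }
    }
}
```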
The Atlas board is a good old-school example. When an interrupt arises, a signal into the Cop0 goes to "0" (it's negative logic), the Cop0 decodes which line asserted it (a 16-line-to-4-bit encoder), and then it raises a machine exception on the RISC-ish CPU.
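The 16-line-to-4-bit step the Cop0 does in hardware is essentially a priority encoder. A minimal C model of it, assuming (purely for illustration) that line 15 is the highest priority:

```c
#include <stdint.h>

/* Priority-encode 16 active-low request lines into a line number.
   Negative logic: a line at 0 means "requesting". Returns -1 when
   nothing is pending. Priority order is an assumption of this sketch. */
static int encode_irq(uint16_t lines_n)
{
    for (int i = 15; i >= 0; i--)
        if (!(lines_n & (1u << i)))   /* 0 = asserted */
            return i;
    return -1;                        /* all lines idle (all 1s) */
}
```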
The pipeline is stalled, the BU is flushed, and the PC is saved into EPC and reloaded with the proper exception address, whose first instruction is usually a "disable interrupts". At this point the CPU checks which line the exception came from and which subtype of exception it is. There can be many pending events here, waiting to be served, and you need to clear a bit to say "event got processed". Once the CPU has understood and decided which one to process (priorities here are flexible), a simple function call enters the proper sub-exception routine that handles the whole task; once completed, it returns to the handler, which clears the proper event bit and brings the CPU out of exception mode. Exceptions are then re-enabled by the last instruction, the PC is reloaded from EPC, and the CPU is again ready to go on as if nothing had ever happened (from the user-mode point of view).
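The save/restore bracket around the whole sequence can be modelled in a few lines of C. This is only a toy model of the entry and exit steps (the struct and vector constant are illustrative; 0x80000180 is the classic MIPS general exception vector):

```c
#include <stdint.h>

#define EXC_VECTOR 0x80000180u  /* classic MIPS general exception vector */

struct cpu {
    uint32_t pc;
    uint32_t epc;
    int ie;        /* interrupts enabled? */
};

/* Entry, done by hardware: save PC into EPC, disable interrupts,
   jump to the exception vector. */
static void take_exception(struct cpu *c)
{
    c->epc = c->pc;
    c->ie  = 0;
    c->pc  = EXC_VECTOR;
}

/* Exit, the handler's last step: re-enable interrupts and restore
   the PC from EPC, so user code resumes as if nothing happened. */
static void exception_return(struct cpu *c)
{
    c->ie = 1;
    c->pc = c->epc;
}
```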
It's a simple, efficient, and deterministic way to process things. And note that nothing is interruptible during exception time (it could be, the hardware supports it, but it's discouraged since it also confuses the debugger). One event triggers the machine exception, with a priority of course, and nothing can interrupt it. That makes it the best way to debug complex stuff happening in kernel space.
My MIPS manual for the Atlas board carries the same warning that is also written in the Microchip/Atmel manual for their XMega.
I think there must be a reason for that.