Author Topic: Why does ARM mbed rely on C++? Is C++ the future?  (Read 29909 times)


Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #75 on: March 27, 2018, 11:29:40 pm »
There is always a problem with queue code if one end is in an ISR.  You can't compare pointers if one is changing out from under the evaluation.  Yes, multiple evaluations is one way to handle it but not the only way.

It's very easy to implement a thread-safe FIFO (provided that one thread only writes and the other only reads). And it will work fine between two ISRs (despite statements here about the horrors of sharing variables between interrupts). The only thing you need is atomic read and atomic write with the variables which represent sizes and offsets (say 8-bit variables would be ok on 8-bit MCU, but larger sizes wouldn't work).

You can do it in assembler. You can do it in C. You can do it in C++. You could do it in Pascal if you had a compiler. So, all these languages are good for me for writing FIFO code - they let me do things that I want to do.

You couldn't do it with Java, Python, C# or anything of that sort. Instead of writing simple code, I would need to write something complicated, and even then I would get something which works worse than my simple code would. Thus, these languages are bad for me. Instead of helping me, they stand in my way. Why would I ever need them?
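As a minimal sketch of the idea (names and sizes are illustrative): one side only writes `head`, the other only writes `tail`, and the only assumption is that 8-bit loads and stores of those indices are atomic on the target. On a compiler or CPU that reorders memory accesses you would also need barriers, as discussed later in this thread.

```c
#include <stdint.h>

#define FIFO_SIZE 64u  /* power of two, so the index wrap is a cheap modulo */

/* Single-producer/single-consumer ring buffer. One side (say, an ISR)
   only calls fifo_put(); the other side only calls fifo_get(). */
static volatile uint8_t fifo_buf[FIFO_SIZE];
static volatile uint8_t head;  /* free-running; written only by the producer */
static volatile uint8_t tail;  /* free-running; written only by the consumer */

/* Producer side: returns 1 on success, 0 if the FIFO is full. */
int fifo_put(uint8_t v)
{
    uint8_t h = head;
    if ((uint8_t)(h - tail) == FIFO_SIZE)
        return 0;                      /* full */
    fifo_buf[h % FIFO_SIZE] = v;
    head = (uint8_t)(h + 1u);          /* one atomic store publishes the item */
    return 1;
}

/* Consumer side: returns 1 on success, 0 if the FIFO is empty. */
int fifo_get(uint8_t *v)
{
    uint8_t t = tail;
    if (t == head)
        return 0;                      /* empty */
    *v = fifo_buf[t % FIFO_SIZE];
    tail = (uint8_t)(t + 1u);          /* one atomic store frees the slot */
    return 1;
}
```

Because the indices are free-running 8-bit counters, "full" and "empty" are distinguishable without a separate count variable, and neither side ever writes a variable the other side writes.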

 

Online rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #76 on: March 27, 2018, 11:46:21 pm »

It's very easy to implement a thread-safe FIFO (provided that one thread only writes and the other only reads). And it will work fine between two ISRs (despite statements here about the horrors of sharing variables between interrupts). The only thing you need is atomic read and atomic write with the variables which represent sizes and offsets (say 8-bit variables would be ok on 8-bit MCU, but larger sizes wouldn't work).

I don't see how a language, any language, creates an atomic read-modify-write (or replace-add-one) if the underlying hardware doesn't have such a feature.  Some hardware can increment a memory location in a single instruction, some can not.  If the hardware can't provide the atomic increment/decrement, there is every probability that an interrupt will occur during the instruction sequence.

I took a quick look at the ARM Instruction Set and I didn't find such an instruction.  Maybe someone with more experience can point it out.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #77 on: March 27, 2018, 11:57:36 pm »
There is always a problem with queue code if one end is in an ISR.  You can't compare pointers if one is changing out from under the evaluation.  Yes, multiple evaluations is one way to handle it but not the only way.

It's very easy to implement a thread-safe FIFO (provided that one thread only writes and the other only reads). And it will work fine between two ISRs (despite statements here about the horrors of sharing variables between interrupts). The only thing you need is atomic read and atomic write with the variables which represent sizes and offsets (say 8-bit variables would be ok on 8-bit MCU, but larger sizes wouldn't work).

You can do it in assembler. You can do it in C. You can do it in C++. You could do it in Pascal if you had a compiler. So, all these languages are good for me for writing FIFO code - they let me do things that I want to do.

You couldn't do it with Java, Python, C# or anything of that sort. Instead of writing simple code, I would need to write something complicated, and even then I would get something which works worse than my simple code would. Thus, these languages are bad for me. Instead of helping me, they stand in my way. Why would I ever need them?

I'm afraid you've got that the wrong way round w.r.t. C/C++/Java and, I presume, C#. I make no comment about Python.

Until very recently you could not implement any threading code in C (including FIFOs) unless you relied on implementation specifics outside the C specification. See the Hans Boehm paper I've referred to twice in this thread for the subtle reasons why not. Until you have understood what he is saying, you are living in blissful ignorance. You should realise that Boehm knows more about C/C++ than most people: google him!

OTOH, Java was specifically designed to allow threading and context switches: it has the necessary primitive instructions and memory model. Note: to a thread on a multiprocessor machine, there is no difference between a context switch due to an interrupt and a context switch due to another processor's activity. Think of an interrupt as being a context switch from a very, very simple non-homogeneous processor.

C/C++ did not, by design, have any concept of a memory model. I'm told C/C++ has now caught up with Java in this respect - two decades later, and five decades after the problems were first understood (in the '60s).

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #78 on: March 28, 2018, 12:03:59 am »
To get atomicity you need a good base.

For example, the Z80 can read and write 8-bit or 16-bit values without interruption; DMA can only happen after the complete instruction.
So one side modifies a variable and the other side just reads it.

The problem is that the Z80 has no read-modify-write as an atomic operation.
Many documents describe ways around the lack of atomic operations; care must be taken with all of them.
A Z80 could modify 8 bits and check 16 bits.
Remember that time passes between the read and the write when there is no atomic operation.
When you add a language that reorders code, you can get problems that would not exist at the assembly level.

For larger processors, look for something like test-and-set:
the CPU reads memory and changes it, and in the process CPU flags are often set and/or registers are changed.

Google "arm atomic operations".
A PDF: ARM Synchronization Primitives Development Article - ARM Infocenter
http://infocenter.arm.com/help/topic/com.arm.doc.dht0008a/DHT0008A_arm_synchronization_primitives.pdf


« Last Edit: March 28, 2018, 12:12:26 am by C »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #79 on: March 28, 2018, 12:04:59 am »
I don't see how a language, any language, creates an atomic read-modify-write (or replace-add-one) if the underlying hardware doesn't have such a feature.

You do not need atomic read-modify-write for FIFO. You need two separate things:

- atomic read
- atomic write

All the hardware I've worked with has had this.

The language doesn't create anything for you. It either lets you use the things which are already in the hardware, or it doesn't. No C compiler I've worked with has had any problem with that.

If you wanted atomic read-modify-write and your CPU had this, then the C compiler might give you access to it, or it might not (thus forcing you into assembler).
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #80 on: March 28, 2018, 12:20:03 am »

NorthGuy

You should be aware that I came across a problem, discussed in a Linux thread, where an atomic-read / atomic-write pair failed because code reordering put the operations in a different order.

In some places you have to have an atomic test-and-set, or something like it, when the compiler can reorder things.
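For what it's worth, the C11 memory model gives a portable way to express the ordering that plain reads and writes don't guarantee. A sketch, with illustrative names: the release store keeps the payload write from being reordered past the flag update, and the acquire load on the other side prevents the matching reorder of the reads.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static uint8_t payload;   /* plain data, published via the flag below */
static atomic_bool ready; /* zero-initialised: "not ready" */

/* Producer: the release store stops the compiler and CPU from moving
   the payload write after the flag update. */
void publish(uint8_t v)
{
    payload = v;
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Consumer: the acquire load stops the payload read from being hoisted
   above the flag check, so a successful check always sees the payload. */
int try_consume(uint8_t *out)
{
    if (!atomic_load_explicit(&ready, memory_order_acquire))
        return 0;
    *out = payload;
    return 1;
}
```

With plain (even volatile) variables, nothing forbids the compiler or a weakly-ordered CPU from making the flag visible before the payload, which is exactly the failure mode described above.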

 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #81 on: March 28, 2018, 12:22:07 am »
If the hardware can't provide the atomic increment/decrement, there is every probability that an interrupt will occur during the instruction sequence.

I took a quick look at the ARM Instruction Set and I didn't find such an instruction.  Maybe someone with more experience can point it out.
See LDREX and STREX.
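You normally reach those through the compiler rather than hand-written asm: with GCC/Clang, the standard atomic builtins lower to an LDREX/STREX retry loop on ARMv6K/v7 and later (on ARMv6-M parts without exclusives, the compiler falls back to a library call or interrupt masking). The counter name here is just illustrative:

```c
#include <stdint.h>

static volatile uint32_t counter;

/* On ARM cores with exclusive monitors, GCC/Clang compile this builtin
   into an LDREX / add / STREX loop that retries until the exclusive
   store succeeds: an atomic increment without disabling interrupts. */
uint32_t counter_increment(void)
{
    return __atomic_add_fetch(&counter, 1, __ATOMIC_SEQ_CST);
}
```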
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #82 on: March 28, 2018, 03:00:40 am »
Quote
It's more likely that the STL implementation is correct than any personal equivalent we might write. I'm not knocking your code specifically, but STL has gotten a great deal of exposure and most of the kinks have (hopefully) been worked out.
But not for embedded...  A lot of "taught" C++ is about the STL and various other libraries, and they're swell until you try to use them on a chip with severely limited resources, no "exception" OS support (or no OS at all), requirements that prohibit dynamic allocation, etc.  Then you're back to implementing things yourself.
(I suppose that arguably, "severely limited environments" will just disappear.  (hah!))


Quote
you could not implement any threading code in C
AFAIK, you can't "implement threading" in ANY language other than asm.  You get the choice of putting in a bit of assembler, or using a threading implementation that has been put into the language for you.  That's swell if it happens to meet your needs.  In general, you get a choice between simple languages (C, asm) that make it easier for you to do things yourself, and complex languages that try to do everything for you, and end up with "bloated" implementations.  (it's EXACTLY the same problem that vendor-provided "low-level libraries" have - by the time you get it to do everything possible, you have an implementation that no one wants to use.)

Last I heard, "Java Embedded Micro Edition" ran on "as little as 1MB/128k", and Embedded C# implementations were similar.  And aren't getting much traction, despite people wanting standard-ish GUI interfaces (for example) on their embedded projects (for which these ought to be ideal.)   (OTOH, micro-Python is catching on, and interpreted tokenized BASIC still has a significant following.  So "performance" doesn't necessarily seem to be the limiting factor...)


Quote
"Seems like every firmware engineer has their own FIFO implementation. Why is that?"
because the usual embedded system doesn't need or want a full general FIFO implementation that can be used on any data structure, allocates dynamic memory as needed, throws exceptions on errors, catches exceptions from lower levels, is fully thread-safe for any number of scheduling algorithms, etc., etc.   And the "simple" FIFO code that they do need is pretty easy to write.  And read.  (I'm sure the STL is wonderful in its way.   But the few times I've looked at an STL implementation of some algorithm, to see how it did something, I was faced with a relatively unintelligible mass of generalization, standardized but ugly internal names, and pretty advanced C++ features.  I'm sure it all makes sense to someone who works ON the STL, but I don't think the beginner or average C++ programmer has much of a chance.  Sigh.)
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #83 on: March 28, 2018, 04:14:44 am »
technix
Quote
I doubt there exists a processor architecture that allows an interrupt to interrupt itself.
You can do this easily with most CPUs.
The old Z80 makes this easy if it's wanted.
You can have a section of high-speed interrupt code and slower code hanging off the same interrupt.
For the Z80 you just push the slower code's start point.
The fast code returns to the slower code, which returns to main.

The 68000 allowed interrupts to interrupt interrupt code.
With 68000 you have interrupt levels.
What I mean is an interrupt interrupting itself, not two interrupts interrupting each other: for example, IRQ 5 firing inside the IRQ 5 handler.
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #84 on: March 28, 2018, 06:43:00 am »
But not for embedded...  A lot of "taught" C++ is about the STL and various other libraries, and they're swell until you try to use them on a chip with severely limited resources, no "exception" OS support (or no OS at all), requirements that prohibit dynamic allocation, and etc.  Then you're back to implementing things yourself.
True dat.

I really like the STL for use on big iron. It's handy to be able to whip out a std::map for unit tests that run on unix. But the memory management of the STL has always seemed opaque/scary (a reflection on me, not the STL), so I've never used a std::map or std::vector in a bare-metal build. I have compiled code written by others for a Cortex-M that did use std::map, and all I can say is, "it seemed to work". Given time, I probably would have ripped that stuff out.

Interestingly, I've had good luck using Eigen on bare metal. On its face, Eigen is at least as scary as the STL, but it turns out to be straightforward (as I remember) in terms of memory management. And boy, is it fast. With optimization, those templated loop expansions turn into amazingly efficient code.

YMMV, of course.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #85 on: March 28, 2018, 06:56:11 am »
If you wanted atomic read-modify-write and your CPU had this, then the C compiler might give you access to it, or it might not (thus forcing you into assembler).

SGI used #pragma to have it on their big-irons with a lot of CPUs per rack.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #86 on: March 28, 2018, 06:58:31 am »
What I mean is an interrupt interrupting itself, not two interrupts interrupting each other: for example, IRQ 5 firing inside the IRQ 5 handler.

what you really mean is you'd best avoid it.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #87 on: March 28, 2018, 09:46:19 am »
What I mean is an interrupt interrupting itself, not two interrupts interrupting each other: for example, IRQ 5 firing inside the IRQ 5 handler.

what you really mean is you'd best avoid it.
If an interrupt can nest with itself, the ISR would have to implement locking or a mutex; otherwise, as long as each interrupt has its own RAM, they should not conflict.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #88 on: March 28, 2018, 10:21:24 am »
Quote
you could not implement any threading code in C
AFAIK, you can't "implement threading" in ANY language other than asm.  You get the choice of putting in a bit of assembler, or using a threading implementation that has been put into the language for you. 
(You are replying to my point, but that is difficult to see since you combined many posts into yours.)

That entirely depends on what you think of a "language" and "asm".

For example, the Java JVM has the necessary memory model and bytecode instructions that are created directly by the Java compiler. Yes, those aren't "primitive instructions" at the hardware level and there is some C/asm underneath, but...

The 80x86 asm instructions aren't "primitive instructions" either: they are internally interpreted by the hardware into a sequence of micro-operations.

OTOH, have a look at the XMOS xCORE processors and xC. They have direct hardware support for multiprocessing - a put or get on a FIFO is a single blocking instruction. Plus they have direct hardware support enabling threads to sleep until one of several conditions is met (e.g. a timeout). Thus a "wait until input or timeout" is a single instruction, and when the input/timeout occurs the core continues executing the code with 10ns latency. Eat your heart out, ARM :)

Quote
Last I heard, "Java Embedded Micro Edition" ran on "as little as 1MB/128k", and Embedded C# implementations were similar.  And aren't getting much traction, despite people wanting standard-ish GUI interfaces (for example) on their embedded projects (for which these ought to be ideal.)   (OTOH, micro-Python is catching on, and interpreted tokenized BASIC still has a significant following.  So "performance" doesn't necessarily seem to be the limiting factor...)

I've been completely underwhelmed by the Java Micro, Real Time and other variants, and wouldn't touch them.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #89 on: March 28, 2018, 01:58:00 pm »
If an interrupt can nest with itself, the ISR would have to implement locking or a mutex; otherwise, as long as each interrupt has its own RAM, they should not conflict.

:blah: :blah: :blah: :blah:

have you ever done it? for a paid job? when you're in a rush? under deadlines?
and have you ever considered HOW MUCH TIME it takes to debug this?

if so, it's more than clear that normal people won't do that.
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #90 on: March 28, 2018, 02:24:57 pm »

legacy
Sometimes it helps to have a real-world example to copy from when writing code.

A Gatling gun:
bullets in, lead out at one spot, shells out at a second spot, and the used resources go back to waiting for a new bullet.

The bullet in is the interrupt. Where does the interrupt stop?
Is the lead the main program, or is the shell the main program?
Do you have two main programs, or something else?

Not a bad model to follow for a one-in, many-out chunk of code.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #91 on: March 28, 2018, 02:46:17 pm »
If an interrupt can nest with itself, the ISR would have to implement locking or a mutex; otherwise, as long as each interrupt has its own RAM, they should not conflict.

:blah: :blah: :blah: :blah:

have you ever done it? for a paid job? when you're in a rush? under deadlines?
and have you ever considered HOW MUCH TIME it takes to debug this?

if so, it's more than clear that normal people won't do that.

While I tend to agree, it isn't quite that clearcut - particularly in multicore systems.

I'll start by presuming that one thread does not mutate the memory of other threads, except via well-defined and well-implemented RTOS features, e.g. put/get "messages" with FIFOs, or fork/join operations. If that isn't the case, then I'm not interested in debugging the mess :)

At the system design level the principal operations are the threads and the messages between threads. On multicore systems you will, of course, have multiple threads on different cores sending messages at unpredictable times, sometimes simultaneously. That leads to the possibilities of deadlock and livelock, unless there are clear system design principles that avoid it in a specific application.

Now add interrupts to the equation. There is vanishingly little difference between a message from another core and a message from a peripheral (often called an interrupt). Think of the peripheral as being another core, albeit a very simple and asymmetric core.

Hence there is a good argument that if your system design is sufficient for multiple threads in a multicore system, then multiple simultaneous interrupts are not a major extra problem.

A good commercial example of that philosophy is in the xCORE processors (up to 32 cores, 4000MIPs), which manage hard realtime operation - unlike ARMs etc.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14470
  • Country: fr
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #92 on: March 28, 2018, 02:53:56 pm »
Now add interrupts to the equation. There is vanishingly little difference between a message from another core and a message from a peripheral (often called an interrupt). Think of the peripheral as being another core, albeit a very simple and asymmetric core.

Hence there is a good argument that if your system design is sufficient for multiple threads in a multicore system, then multiple simultaneous interrupts are not a major extra problem.

Agree with that.
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #93 on: March 28, 2018, 03:04:16 pm »
One of the problems with low level is the lack of understanding of a higher level.

For example, if the foundation of a language does not implement type checking properly, then any type-checking errors will be hit or miss. That miss is a killer, and puts the burden of type checking on the programmer.

With a language missing something needed, you might be able to get what is needed with careful use of the tool in a different way.
With a language not understanding the need for separation, forced separation can make sense if you can get access to the proper hooks.
One way to force separation is to use separate programs, with a low-level separation below what the language uses.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #94 on: March 28, 2018, 04:00:24 pm »
One of the problems with low level is the lack of understanding of a higher level.

For example, if the foundation of a language does not implement type checking properly, then any type-checking errors will be hit or miss. That miss is a killer, and puts the burden of type checking on the programmer.

With a language missing something needed, you might be able to get what is needed with careful use of the tool in a different way.
With a language not understanding the need for separation, forced separation can make sense if you can get access to the proper hooks.
One way to force separation is to use separate programs, with a low-level separation below what the language uses.

Unfortunately many of the younger and more naive C/C++ advocates believe the stories that C/C++ is necessary and sufficient for low level code. In reality it is neither necessary nor sufficient.

While not being necessary isn't a problem because it implies there are alternatives, not being sufficient is a problem. As you allude to, there is the concept of building castles on sand.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #95 on: March 28, 2018, 04:42:28 pm »
While I tend to agree, it isn't quite that clearcut - particularly in multicore systems

ummm, multi-core is a different matter, but for microcontrollers and mono-core stuff, even the documentation from Microchip/Atmel warns the user against using nesting in their XMEGA line.

I have to say I agree with them! Well written, warning people about the risk of wasting time on extra complexity during the debug phase.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #96 on: March 28, 2018, 04:50:33 pm »
If an interrupt can nest with itself, the ISR would have to implement locking or a mutex; otherwise, as long as each interrupt has its own RAM, they should not conflict.

:blah: :blah: :blah: :blah:

have you ever done it? for a paid job? when you're in a rush? under deadlines?
and have you ever considered HOW MUCH TIME it takes to debug this?

if so, it's more than clear that normal people won't do that.
Yes. In two of my jobs. Do keep in mind that this is China, and all the boss wants is to rush a product to market as soon as possible, before anyone else can. The rule is to rush absolutely everything or get fired, even without a pressing deadline, so his sales team can take their sweet time preparing their sales pitches. I am forbidden from considering how much time things will take unless the problems actually arise (and then I rush through those too); once I did make some preemptive consideration, I got fired for it.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #97 on: March 28, 2018, 04:52:33 pm »
While I tend to agree, it isn't quite that clearcut - particularly in multicore systems

ummm, multi-core is a different matter, but for microcontrollers and mono-core stuff, even the documentation from Microchip/Atmel warns the user against using nesting in their XMEGA line.

I have to say I agree with them! Well written, warning people about the risk of wasting time on extra complexity during the debug phase.

Atmel may have their own reasons for cautioning against it; I don't know. It wouldn't be the first time that hardware problems are avoided by "doing or not doing X" in the software :)

In any case, it still isn't perfectly clearcut in single core systems.

It is well to realise that multicore systems are becoming ever more prevalent (e.g. BeagleBone, xCORE, many ARMs, and outliers), and are the way of the future. Best to get ready for the future now.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #98 on: March 28, 2018, 05:00:08 pm »
I don't see how a language, any language, creates an atomic read-modify-write (or replace-add-one) if the underlying hardware doesn't have such a feature.  Some hardware can increment a memory location in a single instruction, some can not.  If the hardware can't provide the atomic increment/decrement, there is every probability that an interrupt will occur during the instruction sequence.

I took a quick look at the ARM Instruction Set and I didn't find such an instruction.  Maybe someone with more experience can point it out.
The only requirement is an atomic write for the size of the FIFO index counters (which every CPU has). When exactly an index counter is incremented doesn't matter; only the update of the variable in memory does.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Why does ARM mbed rely on C++? Is C++ the future?
« Reply #99 on: March 28, 2018, 05:05:11 pm »
What I mean is an interrupt interrupting itself, not two interrupts interrupting each other: for example, IRQ 5 firing inside the IRQ 5 handler.
what you really mean is you'd best avoid it.
Not necessarily. Think about multiple devices sharing one interrupt, where software demultiplexes the interrupt to the various interrupt handlers. IIRC this happens on a PC with many PCI devices.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

