Author Topic: FPGA Vs µC  (Read 20614 times)


Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #25 on: October 21, 2015, 07:00:50 pm »
Interrupts can be tricky little beasts. I've never worked on critical systems, but I have had to solve interrupt-related problems; everything goes well until your device decides to keep generating interrupts continuously (for several reasons, including perfectly valid situations). These days I ditch interrupts in favor of the better-behaved and more predictable polling method, especially when they would be driven by signals coming from the outside world. Reading simple hardware buttons on a GPIO pin is a good example: people tend to think interrupts are the right tool for that.
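For illustration, a minimal sketch of that polling approach, assuming a 1 ms tick from the main loop; the pin helper and the application hook are hypothetical names, not from any real library:
Code: [Select]
/* Hypothetical board-support helpers. */
extern unsigned char read_button_pin(void);  /* raw GPIO level, 1 = pressed */
extern void on_button_press(void);           /* application hook */

#define DEBOUNCE_TICKS 20   /* ~20 ms of stable level before accepting a change */

static unsigned char stable_state;  /* debounced state (1 = pressed) */
static unsigned char counter;

/* Called from the main loop on every 1 ms tick; no interrupts needed. */
void poll_button(void)
{
    unsigned char raw = read_button_pin();

    if (raw == stable_state) {
        counter = 0;                    /* pin agrees with debounced state */
    } else if (++counter >= DEBOUNCE_TICKS) {
        stable_state = raw;             /* level held long enough: accept it */
        counter = 0;
        if (stable_state)
            on_button_press();
    }
}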
« Last Edit: October 21, 2015, 07:03:08 pm by nuno »
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #26 on: October 21, 2015, 07:48:49 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course, being damn quick at detecting a button press is not really their purpose; it's case dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use everything the on-chip hardware has to offer.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #27 on: October 21, 2015, 08:03:25 pm »
Remember that interrupts (and threads) were invented BEFORE modern compiler theory was born. The whole theory of automatic code optimization and a large part of lambda calculus were not known at that time.

As said, it is OK to use anything you want as long as it does not obstruct your proof of correctness. If you have an idea of how you could prove the correctness of the specific code in the presence of a specific interrupt handler, go on. The main reason many people avoid interrupts in critical code is that the proof is in many cases too complex for any practical use, while the code without interrupts can easily be proven. Just avoid anything you couldn't prove; interrupts, threads and (sometimes) dynamic memory allocation are the usual candidates.

If, for some reason, you have multiple concurrent processes running (it does not matter whether they are on the same MCU, on different MCUs, in different parts of an FPGA or in pure hardware), be very careful with the communication between them. This is something that gets out of control too easily. 90% of the errors I've seen in source code in the past 10 years are more or less connected to that.
The difficult we do today; the impossible takes a little longer.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #28 on: October 21, 2015, 10:10:34 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course, being damn quick at detecting a button press is not really their purpose; it's case dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use everything the on-chip hardware has to offer.
An interrupt must be 100% predictable or your device won't work. Using interrupts for GPIO is, generally speaking, a big NO! Many years ago I worked at a company which developed a product to be installed in homes. I wasn't involved in the firmware & hardware, but this product had major stability problems. One of its functions was a door-bell input, directly connected to a GPIO pin on the processor. No filtering or protection whatsoever  :wtf:  and many meters of unshielded wiring attached, so it needed a series resistor and a big capacitor to work reasonably. When testing a different circuit attached to this input, the device would halt under certain circumstances. It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing while the pin sat halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and that GPIO interrupts were meant for buttons :palm: :palm: :palm:
« Last Edit: October 21, 2015, 10:12:14 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: FPGA Vs µC
« Reply #29 on: October 21, 2015, 10:56:26 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course, being damn quick at detecting a button press is not really their purpose; it's case dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use everything the on-chip hardware has to offer.
An interrupt must be 100% predictable or your device won't work. Using interrupts for GPIO is, generally speaking, a big NO! Many years ago I worked at a company which developed a product to be installed in homes. I wasn't involved in the firmware & hardware, but this product had major stability problems. One of its functions was a door-bell input, directly connected to a GPIO pin on the processor. No filtering or protection whatsoever  :wtf:  and many meters of unshielded wiring attached, so it needed a series resistor and a big capacitor to work reasonably. When testing a different circuit attached to this input, the device would halt under certain circumstances. It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing while the pin sat halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and that GPIO interrupts were meant for buttons :palm: :palm: :palm:

Interrupts can be a problem. And so can the ISRs!

At a new job nearly 20 years ago, I had to get a temperature controller working. It was the usual sort of thing: a temperature sensor, an ADC for it, a resistive heater under DAC control. The processor was a DS5000T (an 8051 variant which had battery-backed SRAM as its program store), programmed in C (the Avocet compiler). The guy who designed the system and wrote the code had a PhD in control systems.

The complaint? "The serial communications link isn't working." The micro ran off the usual 11-ish MHz oscillator with the MAX232 level translators. The serial line between the temperature controller and the computer to which it talked was only a couple of feet (both were in the same VME rack). They had tried shielding the serial line, they tried changing the baud rate, they tried all sorts of stuff.

One thing they pointed out was that when the temperature-control loop was disabled, the communication was flawless. They thought that the processing was somehow causing EMI or whatever which was making the communications fail.

I started looking through the source code, initially looking at how the serial port was managed, and it seemed fairly textbook. No printf() but instead a buffer was loaded with a string and off it went. It was interrupt driven, so on receive the ISR would read SBUF and store the character read into a FIFO, and on transmit complete the ISR would look to see if anything was left in the transmit buffer and send it if so.

Then I started looking at the larger program, which was basically the PID loop. And then the alarm bells started going off. First, the control loop was written with all of the variables as floats. And it gets better: he would read the ADC and convert from ADU into floating-point volts (volts = counts × VREF / full-scale). The gain constants and all of the intermediate results were floats. The result of the loop was a floating-point heater current that got converted to DAC counts.

So he basically took a Matlab model and implemented it in C.

So when did the loop update? Well, there was a timer set to interrupt at some reasonable interval. And when the interrupt fired, the ISR got called, and he did all of those floating-point loop update calculations in the ISR.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #30 on: October 21, 2015, 11:32:42 pm »
(...) And when the interrupt fired, the ISR got called, and he did all of those floating-point loop update calculations in the ISR.
He should at least have made it re-entrant :). Actually, that can be quite tricky too; I remember having used a reentrant interrupt only once, and it was a very, very special case (I had no choice but to squeeeeeze the very last bit of performance out of an AVR to get the low-latency realtime behavior needed). My motto for avoiding problems is pretty simple: KISS.
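For reference, on an AVR with avr-gcc the mechanism looks something like this - just a sketch of the general technique, not the code I used; the vectors and the housekeeping function are illustrative:
Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>

/* The slow housekeeping ISR declares itself interruptible
 * (ISR_NOBLOCK re-enables global interrupts on entry), so the
 * time-critical INT0 handler below can preempt it. */
extern void do_slow_housekeeping(void);

ISR(TIMER1_COMPA_vect, ISR_NOBLOCK)   /* may itself be interrupted */
{
    do_slow_housekeeping();
}

ISR(INT0_vect)                        /* short, runs to completion */
{
    PORTB ^= _BV(PB0);                /* minimal, deterministic response */
}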
« Last Edit: October 21, 2015, 11:35:01 pm by nuno »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19517
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #31 on: October 22, 2015, 12:23:43 am »
That is why one should always determine the real-time requirements (soft and hard) and also determine the maximum loop processing time analytically and/or using an instruction simulator.

Not forgetting to include the effects of the L1, L2 and L3 caches :(

The XMOS devices are particularly good in this respect; the IDE states the precise loop or block times.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #32 on: October 22, 2015, 10:23:50 am »
When system safety matters, I mean for life-threatening-failure kinds of applications: which technology is better for easier software implementation and safer execution?

MPU and DSP, definitely!
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #33 on: October 22, 2015, 10:27:06 am »
MPU and DSP ...

..FPGAs are not a particularly non-volatile hardware implementation - the internal structure is fixed and adjusted by loads of multiplexers and switches that are controlled by a configuration loaded on power-up. As such, if you get a really nasty power spike (are you assuming EMP?) you will get a reset state (if not worse), just as you would with a microcontroller.

… for that reason!
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #34 on: October 22, 2015, 01:23:27 pm »
It is the wrong question: MPU/DSP/FPGA - none of the above is a solution, but any of them may form part of a solution in a life-critical application.

A big part of the art is to design out any single (and sometimes dual, much harder) fault causing, or allowing to be maintained, a dangerous condition; FMEA is the term you are looking for.

A fun example of safety design is microwave oven door switches, which have one NC and one NO contact: the NC contact is wired to hard-short the primary of the transformer while the NO contact isolates the power....
By design the NC contact opens before the NO contact closes, but in the event of a welded power contact, the NC contact will short the supply when the door is opened, blowing the fuse and rendering the oven safe.

Some of the things I have seen:
An additional monitored safety relay held in by a retriggerable monostable triggered by edges on a GPIO pin: on startup you make sure the relay is NOT engaged, then, when all safety conditions are met, you start toggling the pin, wait a hundred ms and check that the relay has pulled in. Once all is good you can proceed to command the main power contactor on (via the now-closed contact on the safety relay). In the event of a processor crash the monostable does not get retriggered, and the opening of the safety relay causes the main power contactor to open.
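The firmware side of that heartbeat can be sketched roughly like this, assuming a 1 ms tick; the pin helpers and names are hypothetical, and the monostable and relay monitoring live in hardware:
Code: [Select]
/* Heartbeat feeding the external retriggerable monostable: the relay
 * only stays held in while edges keep arriving.  If the processor
 * crashes or a safety condition drops out, the toggling stops, the
 * monostable times out and the relay releases the main contactor. */
#define HEARTBEAT_PIN 3

extern void gpio_write(int pin, int level);
extern int all_safety_conditions_met(void);

void safety_heartbeat_1ms_tick(void)     /* called from a 1 ms tick */
{
    static int phase;

    if (all_safety_conditions_met())
        gpio_write(HEARTBEAT_PIN, phase ^= 1);   /* keep the edges coming */
    /* else: deliberately stop toggling and let the hardware fail safe */
}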

Often the startup routine is significantly complicated by also checking expected default states: for example, you read the state of the pressure switch (and error out if it is reporting high pressure) before you start the pump, then you check that it is NOW reporting high pressure, then you check that the coolant temperature is low, then you switch on the arc and check that the coolant temperature climbs at a satisfactory rate, then....
You do not just start the pump and check the pressure switch to confirm pump operation, because the switch may be stuck.

Traffic light controls with relays fitted to short-circuit the green lamp when any other lamp is green (a blown fuse, hence OFF, is much safer than conflicting greens).

Safety-critical GPIs are often really analogue and are biased to somewhere around mid-rail, so that a broken connection can be detected (as can a short circuit, if the switch is a changeover type with lowish-value resistors pulling up and down); the automotive crowd love this one.
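As a sketch: with the pull-up/pull-down values chosen so each state lands in its own band, a plain ADC read can separate the switch states from the wiring faults. The thresholds below are made-up counts for a 10-bit ADC, purely illustrative:
Code: [Select]
typedef enum {
    INPUT_SHORT_GND,   /* shorted to ground          */
    INPUT_ON,          /* switch in ON position      */
    INPUT_OPEN_WIRE,   /* only the bias: broken wire */
    INPUT_OFF,         /* switch in OFF position     */
    INPUT_SHORT_SUPPLY /* shorted to the supply      */
} gpi_state_t;

/* Classify a mid-rail-biased safety input from its ADC reading.
 * Band limits depend entirely on the divider values chosen. */
gpi_state_t classify_safety_input(unsigned int adc)   /* 10-bit: 0..1023 */
{
    if (adc <  60)  return INPUT_SHORT_GND;
    if (adc < 350)  return INPUT_ON;
    if (adc < 700)  return INPUT_OPEN_WIRE;
    if (adc < 960)  return INPUT_OFF;
    return INPUT_SHORT_SUPPLY;
}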

The really tricky stuff is when you have a situation where there is NOT an obvious safe condition to default to, which is often the case in automotive. If your engine management system throws a watchdog reboot, it is quite possibly NOT safe to just shut down and light 'check engine', particularly if you are trying to overtake that semi on a country road at the time (worse, it may take a SECOND to get, from the slower CAN nodes, some of the status you really need to decide what to do). Not easy.

Regards, Dan.
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #35 on: October 22, 2015, 05:13:13 pm »
An additional monitored safety relay held in by a retriggerable monostable triggered by edges on a GPIO pin: on startup you make sure the relay is NOT engaged, then, when all safety conditions are met, you start toggling the pin, wait a hundred ms and check that the relay has pulled in.

I did the same with a relay powered between a negative rail and ground; the negative voltage is generated by a capacitor charge pump driven from a toggling GPIO. If the MCU fails it stops generating pulses and the relay is no longer powered.

It is the wrong question: MPU/DSP/FPGA - none of the above is a solution, but any of them may form part of a solution in a life-critical application.

The question is more about how much the hardware's capabilities help to favour one technology over the others.
But as I understand it, most participants don't see the hardware as the problem: whether it's an MCU, DSP or FPGA, none is safer than the others if the software is not well written.
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #36 on: October 22, 2015, 08:03:58 pm »
The gotcha is always complexity, and neither the FPGA nor the CPU has really good tools to manage that complexity. Worse, even absent toolchain bugs, there is usually not enough isolation to let you reason about parts of these systems small enough to be verified on their own.

Software is often actually slightly more problematic this way than FPGA firmware, if only because the FPGA stuff usually ends up being strictly synchronous with constrained timing; there is no such thing as an interrupt between arbitrary machine instructions on an FPGA.

It is one thing to test that "if a and b then x within 10 ms"; it is orders of magnitude harder to verify that x ONLY ever occurs within 10 ms of a and b becoming true. For many safety cases that is the requirement, yet all too many tests only check the first part (and I have a nasty feeling that the Church-Turing thesis has a few things to say about testing the second).

73 Dan.
 

Online KL27x

  • Super Contributor
  • ***
  • Posts: 4104
  • Country: us
Re: FPGA Vs µC
« Reply #37 on: October 22, 2015, 08:31:09 pm »
Quote
It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing while the pin sat halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and that GPIO interrupts were meant for buttons :palm: :palm: :palm:
Well, it seems like there are three issues here.

1. Signal integrity / circuit design
2. Debouncing
3. Interrupt handling

1. This is an issue either way. It could potentially be alleviated, or even fixed despite the shoddy signal, with debouncing. This holds equally true for either method; only with an ISR you can do it in the background with timers instead of stalling the main code loop.

2. Debouncing: this can be achieved with either method.

3. Interrupt handling: you could set up the GPIO interrupt to time out after a switch event is detected, in addition to or as part of the debouncing, using a timer interrupt, effectively giving it a minimum period similar to polling. It could go something like this: the GPIO interrupt triggers; the ISR turns off the GPIO interrupt and turns on the timer0 interrupt, which checks the state of the input pin after X time has passed, a number of times. After a press is confirmed, set a flag and/or perform the immediate task, and keep timer-interrupt polling until the input returns to its resting state; then use the timer interrupt to turn the GPIO interrupt back on after X time has passed. If debouncing fails, likewise use the timer to turn the GPIO interrupt back on after X time has passed. See the sketch below.
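A rough sketch of that scheme, with all helper names hypothetical; the point is that the edge interrupt only arms a timer, so a noisy pin cannot cause an interrupt storm:
Code: [Select]
extern void gpio_int_enable(void);
extern void gpio_int_disable(void);
extern void timer0_int_enable(void);
extern void timer0_int_disable(void);
extern int  pin_is_active(void);

static volatile unsigned char button_event_flag;   /* read by main loop */
static enum { IDLE, DEBOUNCING, WAIT_RELEASE, LOCKOUT } state = IDLE;
static unsigned char samples;

void gpio_isr(void)                  /* fires once on the button edge */
{
    gpio_int_disable();              /* no more edge interrupts for now */
    samples = 0;
    state = DEBOUNCING;
    timer0_int_enable();             /* sample at a fixed rate from here */
}

void timer0_isr(void)                /* fires every X ms while armed */
{
    switch (state) {
    case DEBOUNCING:                 /* confirm the press over N samples */
        if (!pin_is_active())    { state = LOCKOUT; samples = 0; }
        else if (++samples >= 5) { button_event_flag = 1; state = WAIT_RELEASE; }
        break;
    case WAIT_RELEASE:               /* poll until the pin is at rest again */
        if (!pin_is_active())    { state = LOCKOUT; samples = 0; }
        break;
    case LOCKOUT:                    /* enforce a minimum period, then re-arm */
        if (++samples >= 10) { timer0_int_disable(); gpio_int_enable(); state = IDLE; }
        break;
    default:
        break;
    }
}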

If you can trust a WDT to keep the MCU running in mission-critical situations, it seems like other interrupts can be relied upon if implemented correctly, complexity of proving them notwithstanding.

If you told me that, I wouldn't believe it. A button interrupt is very useful for waking a micro from sleep on user input, and is quite commonly used for that task. I would fix the code (and the circuit). But of course this is a doorbell, not a manned-vehicle control system. It seems like problem 1 is the main issue. The other example, of randomly performing a super-long calculation within the ISR (and not turning off that particular interrupt during a timing-sensitive task), is super noob. He could perhaps have just set a flag in the ISR and done the calculation somewhere in the main program loop, so that it could itself be interrupted.

I would have thought that timer interrupts, in particular, would be quite useful in mission-critical applications, the WDT being one perfectly good and valid example, IMO. A WDT is barely more, really, than a timer interrupt with a RESET instruction. It might run off a different clock source, of course, which is the "barely more" part. It might also set a flag that can be read after RESET, but that's still nothing you couldn't do with a regular timer interrupt.

Quote
there is no such thing as an interrupt between arbitrary machine instructions on an FPGA.
You can turn interrupts on/off in the software before starting code blocks that cannot be interrupted. I suppose this doesn't help as much in C, where finding a safe place to enable interrupts might be more challenging without a very deep understanding of (or trust in) the compiler and libraries.
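The classic pattern looks something like the sketch below; the intrinsic names are illustrative, since every toolchain spells them differently:
Code: [Select]
/* Critical section: briefly mask interrupts around code that must
 * not be preempted.  Intrinsic names are hypothetical stand-ins. */
extern unsigned char interrupts_save_and_disable(void);
extern void interrupts_restore(unsigned char saved);

extern volatile unsigned long uptime_ticks;   /* incremented by a timer ISR */

unsigned long read_uptime(void)
{
    unsigned char saved = interrupts_save_and_disable();
    unsigned long copy = uptime_ticks;   /* multi-byte read on a small core:
                                            must not be torn by the ISR */
    interrupts_restore(saved);           /* re-enable only if enabled before */
    return copy;
}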

So far, the main reason not to use interrupts in mission-critical apps seems to be human error. Surely that is a good enough reason. But it also seems like interrupts can greatly simplify code in certain situations, perhaps sometimes enough to make things easier to prove. Heck, some programs could be 99% ISR and 1% code loop; that could be the easiest way to write a given program.

There's my 2 cents. Now I'm really broke.
« Last Edit: October 22, 2015, 09:58:49 pm by KL27x »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #38 on: October 22, 2015, 10:49:00 pm »
Quote
It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing while the pin sat halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and that GPIO interrupts were meant for buttons :palm: :palm: :palm:
A button interrupt is very useful for waking a micro from sleep on user input, and is quite commonly used for that task.
A wake-up event usually fires once and isn't really an interrupt in many cases.
Quote
I would fix the code (and the circuit).
Fixing the circuit usually isn't an option, so you have to fix the software.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online KL27x

  • Super Contributor
  • ***
  • Posts: 4104
  • Country: us
Re: FPGA Vs µC
« Reply #39 on: October 23, 2015, 01:05:27 am »
Quote
A wake-up event usually fires once and isn't really an interrupt in many cases.
Agree to disagree on both counts. A wake-up event is a true interrupt on some devices, and it may be expected to occur frequently.

Quote
Fixing the circuit usually isn't an option, so you have to fix the software.
In this particular example, you might use a timer interrupt to PWM the I/O pin in question: output high at a 1% duty cycle, with a period short enough not to induce a brownout when the button is pressed. With the RC circuitry on it, this should fix things nicely. Of course you could also do that in your code loop, but that might not be practical. Anyhow, the primary problem is obviously with the circuit.
« Last Edit: October 23, 2015, 01:22:10 am by KL27x »
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: FPGA Vs µC
« Reply #40 on: October 24, 2015, 04:08:25 am »
The other example, of randomly performing a super-long calculation within the ISR (and not turning off that particular interrupt during a timing-sensitive task), is super noob. He could perhaps have just set a flag in the ISR and done the calculation somewhere in the main program loop, so that it could itself be interrupted.

It was total super noob. Like I said, the guy who did it was a freshly-minted PhD in controls, with a lot of Matlab/Simulink experience and zilch with anything embedded. Characters to and from the UART were simply dropped on the floor because the serial interrupt wasn't being serviced in time, blocked as it was by the timer ISR with its ridiculous non-re-entrant floating-point calculations.

My solution was of course exactly what you suggested: the timer interrupt sets a flag and exits, then the main loop looks for the set flag and does the control-loop update. I also rewrote the control-loop update code to use integers instead of floats, for reasons completely obvious to everyone reading here. The loop update always completed before the timer expired and signaled time for the next update.
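In outline the fix looked something like this - a minimal sketch rather than the original code, with hypothetical helper names and illustrative Q8 gain scaling:
Code: [Select]
#define KP_Q8 512                 /* proportional gain = 2.0 in Q8 */
#define KI_Q8 13                  /* integral gain ~ 0.05 in Q8    */

extern int  read_adc_counts(void);          /* hypothetical helpers */
extern void write_dac_counts(int counts);
extern void serve_uart(void);

static volatile unsigned char update_due;   /* set by ISR, cleared in main */
static long integral;
static int  setpoint_counts = 2048;         /* illustrative target */

void timer_isr(void)              /* short: just signal the main loop */
{
    update_due = 1;
}

void main_loop(void)
{
    for (;;) {
        serve_uart();             /* no longer starved by a long ISR */
        if (update_due) {
            update_due = 0;
            /* integer PI update: gains pre-scaled by 256 so the
             * arithmetic stays in integers, no floats anywhere */
            long error = (long)setpoint_counts - read_adc_counts();
            integral += error;
            long out = (KP_Q8 * error + KI_Q8 * integral) >> 8;
            write_dac_counts((int)out);     /* clamping omitted for brevity */
        }
    }
}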
 

Offline obiwanjacobi

  • Frequent Contributor
  • **
  • Posts: 988
  • Country: nl
  • What's this yippee-yayoh pin you talk about!?
    • Marctronix Blog
Re: FPGA Vs µC
« Reply #41 on: October 24, 2015, 04:48:43 am »
Fascinating, what is being discussed here. I am but a hobbyist, but I love the best-practices information being passed around.

I always thought that interrupts were the best way to utilize an MCU's capabilities to their fullest extent..? I love how things just happen in the background. And I understand that putting an interrupt on external signals (like buttons) can backfire, but what about peripherals? I recently wrote some Usart classes (yes, C++) that use the interrupt for reading and writing to and from a ring buffer. Is there any signal you could apply that would make that go haywire? I thought it would be pretty safe because of the hardware support - surely they made it robust, didn't they?  :-//
Arduino Template Library | Zalt Z80 Computer
Wrong code should not compile!
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #42 on: October 24, 2015, 11:02:42 am »
I don't know what the guys doing critical systems do, but I suppose they use interrupts too for the internal peripherals, as long as it's not something that can go "out of control". For a UART? I use them. Now, as I think someone already mentioned, what I consider a (very) good practice is to do the least possible inside an interrupt handler: just raise a flag or something and let the main loop deal with the bulk of it. Except, of course, when it's something you absolutely need to do "at interrupt time" (low-latency response), in which case it is even more critical to keep the other interrupts as short as possible (if there are any other interrupts...). Beyond that, KISS KISS KISS...
« Last Edit: October 24, 2015, 11:04:21 am by nuno »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #43 on: October 24, 2015, 12:20:50 pm »
Now, as I think someone already mentioned, what I consider a (very) good practice is to do the least possible inside an interrupt handler: just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to make an analysis of how much time is needed for each task and how quickly each interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often makes the main thread itself timing-critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal-processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications the controller spends over 90% of its time in interrupts.
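A sketch of that style, with hypothetical filter and DAC helpers; everything happens at sample time and nothing is handed off to the main loop:
Code: [Select]
extern int  adc_read(void);            /* reads result, clears the flag */
extern int  biquad_step(int sample);   /* fixed-point IIR filter stage  */
extern void dac_write(int value);

void adc_isr(void)                     /* fires at the sample rate */
{
    /* Each sample is filtered and written out inside the ISR itself;
     * lower-priority interrupts run in the gaps between samples. */
    int filtered = biquad_step(adc_read());
    dac_write(filtered);
    /* no flags, no hand-off: the processing is already complete */
}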
« Last Edit: October 24, 2015, 12:22:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7799
  • Country: us
  • adieu
Re: FPGA Vs µC
« Reply #44 on: October 24, 2015, 12:40:07 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.
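On a Cortex-M with CMSIS, for example, it looks something like this; the device header and IRQ names are device-specific and purely illustrative:
Code: [Select]
#include "stm32f1xx.h"   /* illustrative part; any CMSIS device header works */

/* Give the control-loop timer a higher (numerically lower) preemption
 * priority than the UART, so it can safely interrupt the UART handler. */
void configure_interrupt_levels(void)
{
    NVIC_SetPriority(TIM1_UP_IRQn, 1);   /* hard deadline: preempts UART */
    NVIC_SetPriority(USART1_IRQn,  3);   /* tolerates some latency/jitter */
    NVIC_EnableIRQ(TIM1_UP_IRQn);
    NVIC_EnableIRQ(USART1_IRQn);
}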
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19517
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #45 on: October 24, 2015, 01:34:08 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.

It is a fallacy to think that, in hard-realtime critical systems, multiple priority levels solve problems - and that's true for thread/task priority levels as well as for interrupt priority levels. Of course, if the system is neither hard realtime nor critical, then they may help.

It should be noted that multiple priority levels can introduce their own problems and/or make problems rare, transitory and virtually impossible to debug.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7799
  • Country: us
  • adieu
Re: FPGA Vs µC
« Reply #46 on: October 24, 2015, 01:43:27 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.

It is a fallacy to think that, in hard-realtime critical systems, multiple priority levels solve problems - and that's true for thread/task priority levels as well as for interrupt priority levels. Of course, if the system is neither hard realtime nor critical, then they may help.

Yes, realtime systems are a unique problem wrt interrupts, and I imagine they could be miserable with a multilevel interrupt controller.

Quote
It should be noted that multiple priority levels can introduce their own problems and/or make problems rare, transitory and virtually impossible to debug.

Interrupts in general can be miserable to debug.
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #47 on: October 24, 2015, 02:46:40 pm »
Interrupts in general can be miserable to debug.
True, but real misery is running out of stack ONLY when four async interrupts occur in precisely the wrong order, close enough together that at each step the higher-priority one interrupts the lower.....

Even something as simple as a UART ISR putting bytes into a ring buffer (which is a very standard sort of thing to do) makes reasoning about the ring buffer **HARD**, particularly when the core does out-of-order execution or load/store reordering; memory barriers and volatile are your friends, and it is still hard to be sure you have not left a race somewhere.
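For reference, the standard single-producer/single-consumer form is something like the sketch below, safe on a single core that does not reorder loads/stores; anything fancier needs barriers, and UART_DATA_REG is a stand-in for the real data register:
Code: [Select]
/* SPSC ring buffer: the ISR only ever writes head, the main loop
 * only ever writes tail, so neither index is contended. */
#define RB_SIZE 64                       /* power of two: cheap wrap mask */

extern volatile unsigned char UART_DATA_REG;   /* hypothetical register */

static volatile unsigned char buf[RB_SIZE];
static volatile unsigned char head;      /* written only by the ISR      */
static volatile unsigned char tail;      /* written only by the main loop */

void uart_rx_isr(void)                   /* producer */
{
    unsigned char next = (head + 1) & (RB_SIZE - 1);
    if (next != tail) {                  /* otherwise drop: buffer full */
        buf[head] = UART_DATA_REG;
        head = next;
    }
}

int rb_get(void)                         /* consumer: -1 when empty */
{
    unsigned char c;
    if (tail == head)
        return -1;
    c = buf[tail];
    tail = (tail + 1) & (RB_SIZE - 1);
    return c;
}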

Regards, Dan.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #48 on: October 24, 2015, 02:55:39 pm »
Now, as I think someone already mentioned, what I consider a (very) good practice is to do the least possible inside an interrupt handler: just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to make an analysis of how much time is needed for each task and how quickly each interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often makes the main thread itself timing-critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal-processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications the controller spends over 90% of its time in interrupts.
Your mileage may vary ;) . I don't always do that; I have done a lot of processing in interrupts. But that's my advice for someone starting out, because it minimizes concurrency.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19517
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #49 on: October 24, 2015, 04:10:43 pm »
Even something as simple as a UART ISR putting bytes into a ring buffer (which is a very standard sort of thing to do) makes reasoning about the ring buffer **HARD**, particularly when the core does out-of-order execution or load/store reordering; memory barriers and volatile are your friends, and it is still hard to be sure you have not left a race somewhere.

And don't forget presumed cache coherency in multicore machines - especially if there is a different OS running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

