Author Topic: FPGA Vs µC  (Read 20584 times)


Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
FPGA Vs µC
« on: October 21, 2015, 08:58:15 am »
Hi, a basic question just popped into my mind, it happens sometimes  :-DD

When system safety matters, I mean for the kind of application where a failure is life-threatening, which technology is better for easier software implementation and safer execution?

-My knowledge of FPGAs is pretty basic, but what I know is that an FPGA is much more like hardwired logic arranged to execute something, while the µC executes instructions in an order defined by the software. Second point: an FPGA is capable of multiple independent tasks, which offers the possibility of keeping an eye on every crucial piece of data while doing something else.
So would it be true to think that FPGAs are much easier for such a problem?

-Facing EMP, which technology is more reliable, assuming proper PCB design and shielding?

-Since an FPGA is basically a hardware implementation, how easy would it be to turn the same hardware into an ASIC, and economically speaking, at what point does an ASIC become interesting?

 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #1 on: October 21, 2015, 09:20:10 am »
Both - or none.

It depends upon how well the design is verified, not how it is implemented. A µC programmed in Ada by a trained person is very reliable (military-grade). A µC programmed in assembler is not reliable. C99 or even C++ is better provided that you have a skilled developer (it's virtually impossible to find a C++ one), but still not as good as Ada.

A Verilog or VHDL FPGA design is somewhere between C99 and Ada. It is quite easy to verify. On the other hand, it would probably be larger, which makes it harder to maintain (and increases the probability of verifying it incorrectly).

Do not trust tests. Only a formal proof guarantees that your product works well. A formal proof is easily doable in Ada, VHDL or Verilog, a bit harder in C99 and good C++11 code. It is hardly possible in poorly-written C++ or C89 code.
The difficult we do today; the impossible takes a little longer.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #2 on: October 21, 2015, 09:27:28 am »
Keep in mind that multitasking is a complex problem on its own. When multiple processes are executed in parallel (it does not matter how), there is a BIG problem of communication. Formal verification of such communication is, in general, proven to be impossible (the halting problem in computability theory) but is still doable in many practical cases. It is however so complex that virtually nobody does it correctly (and nobody cares). Both MCU and FPGA are prone to this. Before introducing any multitasking, invent a way to make a formal proof of your algorithm.
The difficult we do today; the impossible takes a little longer.
 

Offline asgard20032

  • Regular Contributor
  • *
  • Posts: 184
Re: FPGA Vs µC
« Reply #3 on: October 21, 2015, 09:56:56 am »
Both - or none.

It depends upon how well the design is verified, not how it is implemented. A µC programmed in Ada by a trained person is very reliable (military-grade). A µC programmed in assembler is not reliable. C99 or even C++ is better provided that you have a skilled developer (it's virtually impossible to find a C++ one), but still not as good as Ada.

A Verilog or VHDL FPGA design is somewhere between C99 and Ada. It is quite easy to verify. On the other hand, it would probably be larger, which makes it harder to maintain (and increases the probability of verifying it incorrectly).

Do not trust tests. Only a formal proof guarantees that your product works well. A formal proof is easily doable in Ada, VHDL or Verilog, a bit harder in C99 and good C++11 code. It is hardly possible in poorly-written C++ or C89 code.

May we get an example of such a proof in those languages? Because formal proof isn't something people talk about often, so most of us are not familiar with it.

Also, if we want to program in Ada, which MCU development environments allow Ada programming? (I am not talking about compiling my own gcc tool-chain with Ada enabled.)
« Last Edit: October 21, 2015, 10:02:09 am by asgard20032 »
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #4 on: October 21, 2015, 09:59:15 am »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.
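To make that concrete, here is a minimal sketch of such a "run to completion" tasker in C99. The tick source, the hardware hooks and the 20-sample debounce count are my own assumptions, not from the book: each task is a short function that polls, updates its own little state machine and returns, and the main loop calls the tasks in a fixed order and only feeds the watchdog once every task has completed.
Code: [Select]
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware hooks -- replace with the real ones for your MCU. */
extern bool     tick_1ms_elapsed(void);   /* polled hardware timer flag     */
extern void     watchdog_kick(void);      /* refresh the watchdog           */
extern bool     button_raw(void);         /* raw GPIO read, true = pressed  */
extern uint16_t adc_read(void);           /* polled ADC conversion          */
extern void     heater_set(bool on);

/* Task 1: debounce a button with a tiny two-state machine. */
static bool button_pressed = false;

static void button_task(void)
{
    static uint8_t stable_count = 0;

    if (button_raw() != button_pressed) {
        if (++stable_count >= 20) {       /* 20 consecutive ticks = stable  */
            button_pressed = !button_pressed;
            stable_count   = 0;
        }
    } else {
        stable_count = 0;
    }
}

/* Task 2: trivial bang-bang temperature control, runs to completion. */
static void control_task(void)
{
    uint16_t raw = adc_read();
    heater_set(raw < 512u);               /* threshold is an example value  */
}

int main(void)
{
    for (;;) {
        if (tick_1ms_elapsed()) {         /* fixed time base, no interrupts */
            button_task();                /* every task returns quickly...  */
            control_task();
            watchdog_kick();              /* ...so the dog is only fed when
                                             the whole cycle has completed  */
        }
    }
}
Because nothing preempts anything, the worst-case loop time is just the sum of the worst-case task times, which is exactly what makes this style easy to analyse.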
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #5 on: October 21, 2015, 10:02:42 am »
Both - or none.

It depends upon how well the design is verified, not how it is implemented. A µC programmed in Ada by a trained person is very reliable (military-grade). A µC programmed in assembler is not reliable. C99 or even C++ is better provided that you have a skilled developer (it's virtually impossible to find a C++ one), but still not as good as Ada.

A Verilog or VHDL FPGA design is somewhere between C99 and Ada. It is quite easy to verify. On the other hand, it would probably be larger, which makes it harder to maintain (and increases the probability of verifying it incorrectly).

Do not trust tests. Only a formal proof guarantees that your product works well. A formal proof is easily doable in Ada, VHDL or Verilog, a bit harder in C99 and good C++11 code. It is hardly possible in poorly-written C++ or C89 code.

May we get an example of such a proof in those languages? Because formal proof isn't something people talk about often, so most of us are not familiar with it.

Also, if we want to program in Ada, which MCU development environments allow Ada programming?

Ada's subset SPARK is used for formal validation and verification. An Ada tool-chain for bare-metal embedded programming is available for free, at least for AVR and ARM. Of course there are others if you have the budget to pay for them. There might also be other freely available ports, but I haven't been looking at those.
« Last Edit: October 21, 2015, 10:08:09 am by Kalvin »
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #6 on: October 21, 2015, 10:23:33 am »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.

I think 'simple' is relative. Everyone I talked to about this basically says the same thing you just said: "polling + state machines".

Except in one case there are a lot of micro-tasks that should run at practically the same time (managing communication with a PC while communicating with a display MCU, while running the ADC, while running the main task of the whole thing). I guess a state machine is a bit restrictive on how you should manage all that in a safe way.
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #7 on: October 21, 2015, 10:38:17 am »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.

I think 'simple' is relative. Everyone I talked to about this basically says the same thing you just said: "polling + state machines".

Except in one case there are a lot of micro-tasks that should run at practically the same time (managing communication with a PC while communicating with a display MCU, while running the ADC, while running the main task of the whole thing). I guess a state machine is a bit restrictive on how you should manage all that in a safe way.

Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
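As a rough illustration of what such a "simple interprocessor communication protocol" could look like in C (the frame layout, field names and the 8-bit checksum below are my own assumptions, not a specific standard): a short fixed-length frame with a start marker, a sequence counter and a checksum lets the critical MCU validate every message, notice dropped frames from gaps in the sequence number, and simply ignore anything that does not check out.
Code: [Select]
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FRAME_SOF 0xA5u               /* start-of-frame marker (example)   */

typedef struct {
    uint8_t sof;                      /* always FRAME_SOF                  */
    uint8_t seq;                      /* increments for every frame        */
    uint8_t cmd;                      /* message identifier                */
    uint8_t payload[4];               /* fixed size keeps parsing trivial  */
    uint8_t checksum;                 /* 8-bit sum of all preceding bytes  */
} frame_t;                            /* all byte fields, so no padding    */

static uint8_t frame_checksum(const frame_t *f)
{
    const uint8_t *p = (const uint8_t *)f;
    uint8_t sum = 0;
    for (size_t i = 0; i < offsetof(frame_t, checksum); ++i)
        sum = (uint8_t)(sum + p[i]);
    return sum;
}

/* Non-critical MCU: build a frame and push it out byte by byte. */
void frame_send(uint8_t cmd, const uint8_t payload[4],
                void (*put_byte)(uint8_t))
{
    static uint8_t seq = 0;
    frame_t f = { FRAME_SOF, seq++, cmd, {0, 0, 0, 0}, 0 };
    memcpy(f.payload, payload, sizeof f.payload);
    f.checksum = frame_checksum(&f);
    for (size_t i = 0; i < sizeof f; ++i)
        put_byte(((const uint8_t *)&f)[i]);
}

/* Critical MCU: validate a received frame; never block, never trust. */
bool frame_accept(const frame_t *f, uint8_t *expected_seq)
{
    if (f->sof != FRAME_SOF)               return false;
    if (f->checksum != frame_checksum(f))  return false;
    /* A jump in seq means frames were lost; the caller decides whether
       that matters for this particular message.                         */
    *expected_seq = (uint8_t)(f->seq + 1u);
    return true;
}
The key design choice is that the critical side only ever reads and validates; it never waits on the other MCU, so a crashed or silent non-critical MCU cannot stall it.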
 

Offline daqq

  • Super Contributor
  • ***
  • Posts: 2302
  • Country: sk
    • My site
Re: FPGA Vs µC
« Reply #8 on: October 21, 2015, 10:38:26 am »
Quote
-Facing EMP, which technology is more reliable, assuming proper PCB design and shielding?
Both are just as reliable in this, I'd say. If your question is: how will they react when their inputs are nonsensical (such as a sensor malfunction), that depends on how you design your software/logic. You can screw up an FPGA-based design just as efficiently as a microcontroller-based design. For critical stuff follow the simple rule: simpler is better.

Quote
-Since an FPGA is basically a hardware implementation, how easy would it be to turn the same hardware into an ASIC, and economically speaking, at what point does an ASIC become interesting?
Varies wildly - the NRE (non-recurring engineering) cost for a full-custom ASIC, depending on what technology is used, can range from several tens of thousands (simple stuff, well-understood large node sizes) to several millions (high-end, top-of-the-line, bleeding-edge tech/process). Add to that the price of software tools and work (semiconductor design is not a trivial task).

There are several areas between a full-custom ASIC and a general-purpose FPGA though - you have ICs that contain a lot of simple blocks that you connect by means of just one mask set... see https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/po/ss-hcasics.pdf . They're non-volatile and cheaper (above a certain quantity) than an FPGA.

It should be noted that FPGAs are not a particularly non-volatile hardware implementation - the internal structure is fixed and adjusted by loads of multiplexers and switches that are controlled by a configuration that is loaded on power-up - as such, if you get a really nasty power spike (you assume EMP?), you will get a reset state (if not worse), just as you would with a microcontroller.
Believe it or not, pointy haired people do exist!
+++Divide By Cucumber Error. Please Reinstall Universe And Reboot +++
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #9 on: October 21, 2015, 10:53:58 am »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.

I think 'simple' is relative. Everyone I talked to about this basically says the same thing you just said: "polling + state machines".

Except in one case there are a lot of micro-tasks that should run at practically the same time (managing communication with a PC while communicating with a display MCU, while running the ADC, while running the main task of the whole thing). I guess a state machine is a bit restrictive on how you should manage all that in a safe way.

Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.

Good, but it comes back to what Gall said:
Keep in mind that multitasking is a complex problem on its own. When multiple processes are executed in parallel (it does not matter how), there is a BIG problem of communication. Formal verification of such communication is, in general, proven to be impossible (the halting problem in computability theory) but is still doable in many practical cases. It is however so complex that virtually nobody does it correctly (and nobody cares). Both MCU and FPGA are prone to this. Before introducing any multitasking, invent a way to make a formal proof of your algorithm.

I find it easier to split it in half on one micro, in the way that non-critical stuff is done by the available hardware (DMA and the DMA interrupt are helpful with that) and critical stuff is done by polling.
Because communication is a problem by itself, which you have to manage to make reliable and robust, in a way that both micros manage to resync dropped comms and un-hang a blocked com line (like happens with I2C), and I think it's easier to just check that every process is giving valid data to the other process while being executed on the same chip.


It should be noted that FPGAs are not a particularly non-volatile hardware implementation - the internal structure is fixed and adjusted by loads of multiplexers and switches that are controlled by a configuration that is loaded on power-up - as such, if you get a really nasty power spike (you assume EMP?), you will get a reset state (if not worse), just as you would with a microcontroller.

True, I totally forgot that FPGAs depend on the configuration they load at boot.
 

Offline daqq

  • Super Contributor
  • ***
  • Posts: 2302
  • Country: sk
    • My site
Re: FPGA Vs µC
« Reply #10 on: October 21, 2015, 11:00:19 am »
Quote
True, I totally forgot that FPGAs depend on the configuration they load at boot.
If that bothers you, CPLDs are more fixed - they have a non-volatile memory inside, but are far less complex. But at the end of the day a power spike will still reset your device (whatever it is) to an initial state - the connections might be there, but the internal state of the RAMs, flip flops, etc. might get reset.
Believe it or not, pointy haired people do exist!
+++Divide By Cucumber Error. Please Reinstall Universe And Reboot +++
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #11 on: October 21, 2015, 12:09:09 pm »
May we get an example of such a proof in those languages? Because formal proof isn't something people talk about often, so most of us are not familiar with it.

I'll give it a try.

First, let's prove that our algorithm is correct. As an example, how do we prove quicksort?

An array is sorted if and only if each element is not smaller than any of the preceding elements. (And not larger than any of the following elements, which is an obvious consequence.)

Let's prove that quicksort gives a sorted array on its output. It is obvious that an array of only one element is always sorted. On each step of quicksort we split the array so that the first part contains only elements not larger than the pivot and the second has only larger elements. That means any element of the second subarray is not smaller than any element of the first one. Since it is not smaller than the pivot, this operation does not affect the "sorted" property: if both subarrays are sorted, so is the result. By mathematical induction, this means that the algorithm produces a sorted array for any number of elements. Proven.

Now let's prove that our implementation is really a correct implementation of the proven algorithm. For illustrative purposes, I'll make a very inefficient C99 implementation:
Code: [Select]
#include <stddef.h>   /* for size_t */

void quicksort(int data[], size_t size)
{
    if (size < 2)
        return;
    const int pivot = data[0];

    int left[size];
    size_t left_size = 0;
    int right[size];
    size_t right_size = 0;
   
    for (size_t i = 1; i < size; ++i)
    {
        if (data[i] <= pivot)
        {
            left[left_size++] = data[i];
        }
        else
        {
            right[right_size++] = data[i];
        }
    }

    quicksort(left, left_size);
    quicksort(right, right_size);

    for (size_t i = 0; i < left_size; ++i)
        data[i] = left[i];
    data[left_size] = pivot;
    for (size_t i = 0; i < right_size; ++i)
        data[i + left_size + 1] = right[i];
}

I wrote it so that there is some room for errors, but we can still prove that it is correct (to some extent, see below).

If size < 2, this means that the array has only one or zero elements. It is already sorted. Do nothing. Ok.

We choose the very first element as the pivot. Ok.

We create two arrays and their sizes. This is the place where our program can fail if there is not enough stack memory. Here we have no guard against it, and we have to prove elsewhere that our array size is small enough to fit our stack. In the worst case we'll need at least size*(size+1)*sizeof(int) of stack space plus a recursion depth of size. Let's consider here that we have a proof of having that much stack space at the point of the call. Ok.

Then we loop over the rest of the elements, starting at the second one. This loop guarantees that each element goes either to the left or to the right (but not both), and all elements not larger than the pivot go to the left. This means left and right will have the property required by our theoretical proof, and the sum of their sizes will be exactly size-1. Ok.

Call quicksort twice. Ok.

Copy the elements from left. The copy moves each element exactly once and keeps their count and order. Ok.

Copy the pivot. The pivot goes right after the left array. Ok.

Copy the right side. Here we should check the array indices carefully. We can see that for i = 0 the element goes right after the pivot. And the index of the last element is (right_size - 1) + left_size + 1 = left_size + right_size = size - 1, which is correct too. Ok.

Proven, with the exception of the stack size problem.

A general rule here is: "I do not write what I can't prove". Not all code can be proven in such a way. This is a matter of human discipline. No technology is completely fool-proof.
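One practical way to tie such a written proof to the source (my own addition to this example, not part of Gall's post) is to restate the key invariants as assertions, so that a debug build, or a static analyzer that understands assert(), checks exactly the properties the proof relies on:
Code: [Select]
#include <assert.h>
#include <stddef.h>

/* Invariant used by the proof of the partition step: every element of
 * `left` is <= pivot, every element of `right` is > pivot, and no
 * element was lost or duplicated (the sizes add up to size - 1).       */
static void check_partition(const int left[],  size_t left_size,
                            const int right[], size_t right_size,
                            int pivot, size_t size)
{
    for (size_t i = 0; i < left_size; ++i)
        assert(left[i] <= pivot);
    for (size_t i = 0; i < right_size; ++i)
        assert(right[i] > pivot);
    assert(left_size + right_size == size - 1);
}

/* Postcondition used by the proof: each element is not smaller than
 * the one before it.                                                   */
static void check_sorted(const int data[], size_t size)
{
    for (size_t i = 1; i < size; ++i)
        assert(data[i - 1] <= data[i]);
}
Calling check_partition() after the partition loop and check_sorted() just before quicksort() returns turns the informal "Ok" steps above into machine-checked claims; in Ada/SPARK the same invariants would be written as contracts and discharged by the prover rather than at run time.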

Functional languages like Lisp and languages like Ada are designed to make such formal proofs as easy as possible. C, C++ and Java are not. Languages like Java and (sometimes) C++ have too many possibilities, so the proof is hardly possible in many cases. The limited functionality of a language is good for this purpose as it limits the number of possibilities one has to consider during the proof.


Quote
Also, if we want to program in Ada, which MCU development environments allow Ada programming? (I am not talking about compiling my own gcc tool-chain with Ada enabled.)
Sorry, the only Ada toolchain I know is exactly that gcc. Its Ada compiler, GNAT, is the one used by the Pentagon.
The difficult we do today; the impossible takes a little longer.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #12 on: October 21, 2015, 12:11:16 pm »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable.
Right! This is the obvious way to make the code simple enough to be proven. Multitasking and interrupts are things that can happen anywhere, making the proof all but impossible.
The difficult we do today; the impossible takes a little longer.
 

Offline Ice-Tea

  • Super Contributor
  • ***
  • Posts: 3070
  • Country: be
    • Freelance Hardware Engineer
Re: FPGA Vs µC
« Reply #13 on: October 21, 2015, 01:14:48 pm »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.

Then, of course, some time may pass between an event and the reaction to that event. That time may be as little as nothing and as much as the entire loop takes to process. This is something an FPGA has no trouble with: it evaluates everything, all the time.

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #14 on: October 21, 2015, 01:21:15 pm »
Comments on using an MCU: Don't use interrupts or multitasking. Instead use polling, simple state machines and a "run to completion" tasker or something similar which is predictable. For example, the book "Patterns for Time-Triggered Embedded Systems: Building Reliable Applications with the 8051 Family of Microcontrollers" is a good starting point, and the concepts are easily portable to other microcontrollers. And use the watchdog properly.

Then, of course, some time may pass between an event and the reaction to that event. That time may be as little as nothing and as much as the entire loop takes to process. This is something an FPGA has no trouble with: it evaluates everything, all the time.

That is why one should always determine the real-time requirements (soft and hard) and also determine the maximum loop processing time analytically and/or using an instruction simulator. For non-critical systems even an oscilloscope is enough for determining typical loop processing time and event response time.
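A common way to do the oscilloscope measurement in practice (the pin-access and tick functions below are placeholders, not any particular vendor's API) is to wrap the main loop body with a spare GPIO: the high time on the scope is the loop processing time, the period is the tick period, and the low time shows the idle headroom at a glance.
Code: [Select]
#include <stdbool.h>

/* Placeholder hardware hooks -- substitute a direct port write or a
 * vendor HAL call for your particular MCU. */
extern bool tick_elapsed(void);      /* fixed time base flag                */
extern void debug_pin_high(void);
extern void debug_pin_low(void);
extern void poll_inputs(void);
extern void run_state_machines(void);

int main(void)
{
    for (;;) {
        while (!tick_elapsed())      /* wait for the next tick              */
            ;                        /* scope: low time = idle headroom     */
        debug_pin_high();            /* scope: high time = processing time  */
        poll_inputs();
        run_state_machines();
        debug_pin_low();             /* scope: period = tick period         */
    }
}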
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #15 on: October 21, 2015, 01:47:57 pm »
Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have two microcontrollers which can lock up, not to mention the asynchronous communication between them (two microcontrollers = two parallel asynchronous tasks running).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #16 on: October 21, 2015, 02:01:14 pm »
Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have two microcontrollers which can lock up, not to mention the asynchronous communication between them (two microcontrollers = two parallel asynchronous tasks running).
Really? Are you quite sure about what you are saying? The critical part is running on an MCU without any fancy OS stuff, performing its mission-critical or life-critical function autonomously. It will send the data to the other MCU running the non-critical part of the application. It will receive (by non-blocking poll) data from the non-critical part for settings and other configuration stuff. The system is designed so that the critical part of the application will perform its task even if the non-critical part is not up and running (i.e. it has crashed, it has missed its soft real-time deadline, etc). The communication protocol is designed to have no interlocks. However, implemented on the same processor, you will be in trouble if the non-critical part of the application crashes, does some goofy stuff, or hangs due to a programming error and the watchdog restarts the system... Think again.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #17 on: October 21, 2015, 02:05:40 pm »
Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have two microcontrollers which can lock up, not to mention the asynchronous communication between them (two microcontrollers = two parallel asynchronous tasks running).
Exactly.

There is NO RECIPE that makes it "just work"; there is only one rule: keep it simple enough for formal verification. It does not matter how you do it all, as long as you can verify it.

In most cases, multitasking, interrupt handling and multiple-MCU solutions are low-hanging fruit. They all give a false sense of simplicity, being in fact not simple at all. For example, using interrupts to achieve "faster response" will probably lead to an unpredictable response time: really fast in 99.9% of cases but unacceptably slow in 0.1% of cases. That is, there is a 0.1% probability that your device will fail, which is unacceptable for a good device. It is better to respond more slowly but with 100% probability (Ok, algorithmically 100% probability, since there is always a non-zero probability that your device will be hit by an asteroid).

The goal is to achieve 100% reliability of the software source code itself, so that all failures are essentially hardware or compiler failures. Both modern compilers and hardware are very reliable.
The difficult we do today; the impossible takes a little longer.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #18 on: October 21, 2015, 02:09:44 pm »
Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have two microcontrollers which can lock up, not to mention the asynchronous communication between them (two microcontrollers = two parallel asynchronous tasks running).
Really? Are you quite sure about what you are saying? The critical part is running on an MCU without any fancy OS stuff, performing its mission-critical or life-critical function autonomously. It will send the data to the other MCU running the non-critical part of the application.
Try to build such a system and you'll see why it is a bad idea. In the end the functions on both processors will be much more intertwined than you think/want at first glance. The start of the slippery slope: chances are both processors will need to do something special if one of them fails.
It can only work if you can make a very clean break between the two microcontrollers and put them in separate boxes with an RS232 or RS485 interface (cable) between them.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #19 on: October 21, 2015, 02:14:53 pm »
Really? Are you quite sure about what you are saying? The critical part is running on an MCU without any fancy OS stuff, performing its mission-critical or life-critical function autonomously.
The problem with such an approach is that your fancy OS may send wrong commands to the critical MCU or display the wrong state on the screen. The whole chain is no stronger than its weakest link.

In a mission-critical application, the user interface is as critical as the core functionality. The communication channel between the device and the operator shall be reliable. If you can't make a reliable GUI, better use LEDs and hardware buttons instead.
The difficult we do today; the impossible takes a little longer.
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: FPGA Vs µC
« Reply #20 on: October 21, 2015, 02:19:55 pm »
Really? Are you quite sure about what you are saying? The critical part is running on an MCU without any fancy OS stuff, performing its mission-critical or life-critical function autonomously.
The problem with such an approach is that your fancy OS may send wrong commands to the critical MCU or display the wrong state on the screen. The whole chain is no stronger than its weakest link.

In a mission-critical application, the user interface is as critical as the core functionality. The communication channel between the device and the operator shall be reliable. If you can't make a reliable GUI, better use LEDs and hardware buttons instead.

Take a look inside your car. There is a quite clear separation between the mission-critical MCU systems and the non-critical MCU systems. The same principle applies, for example, to an Airbus, which can be flown even if the instrument panel freezes, blanks or reboots.
« Last Edit: October 21, 2015, 02:57:13 pm by Kalvin »
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #21 on: October 21, 2015, 03:44:58 pm »
The famous counterpoint was a cockup involving a certain radiotherapy machine (The UI could be gotten fatally (literally) out of sync with the physical state of the hardware)......

The Therac-25 should be a cautionary tale for every embedded systems engineer.

High SIL level systems are just HARD.

It is telling that the railways over here will not allow any use of interrupts in the code running the signalling systems (and that the Victorian-era railway had an accident caused by a race condition of a few tens of milliseconds in a strictly mechanical points interlock system).

Regards, Dan.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #22 on: October 21, 2015, 03:59:47 pm »
Another example of communication via the human-machine interface failing was the crash of Tatarstan Airlines Flight 363 in Kazan. The actual cause of the crash was misinterpretation of the displays by the pilot. A software failure in the displays would have resulted in exactly the same thing.

And that's why cars have steering and brakes controlled mechanically/hydraulically, not electronically.
The difficult we do today; the impossible takes a little longer.
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #23 on: October 21, 2015, 05:16:12 pm »
Split the design in half: a) Mission-critical part running on MCU A and b) non-critical part running on another MCU B. The dedicated processors will use a simple interprocessor communication protocol between the processors.
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have two microcontrollers which can lock up, not to mention the asynchronous communication between them (two microcontrollers = two parallel asynchronous tasks running).
Really? Are you quite sure about what you are saying? The critical part is running on an MCU without any fancy OS stuff, performing its mission-critical or life-critical function autonomously. It will send the data to the other MCU running the non-critical part of the application.
Try to build such a system and you'll see why it is a bad idea. In the end the functions on both processors will be much more intertwined than you think/want at first glance. The start of the slippery slope: chances are both processors will need to do something special if one of them fails.
It can only work if you can make a very clean break between the two microcontrollers and put them in separate boxes with an RS232 or RS485 interface (cable) between them.

It is a bad idea, I've been through that. On paper it seems fine and easy; when programming it's a mess, and both processors have control over the critical circuitry so that if one fails the other shuts it down, and above all the comms are a nightmarish experience. You just can't keep it simple.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #24 on: October 21, 2015, 05:29:19 pm »
One day at work the whole building started to shake.  :wtf: It turned out 'a system' with a large, heavy mechanical structure had crashed into itself. Post-mortem analysis revealed that the programmers from the supplier had just mixed the emergency stop code with the normal operating code. In other words: there was no layer providing any safety! Of course that went wrong during testing...  :palm: Fortunately nobody got hurt due to other safety precautions, but the incident could easily have resulted in death. 'Our' software engineering guys related to that project then decided to rewrite the entire system themselves and do it right (they already had a lot of experience writing safety-critical software).
« Last Edit: October 21, 2015, 05:40:33 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #25 on: October 21, 2015, 07:00:50 pm »
Interrupts can be tricky little beasts. I've never worked on critical systems but I have had to solve interrupt-related problems; everything goes well until your device decides to keep on generating interrupts continuously (for several reasons, including perfectly valid situations). Today I ditch interrupts in favor of the better-behaved and predictable polling method, usually when they're based on data coming from the outside - like reading simple hardware buttons with a GPIO pin; people tend to think that interrupts are good for that.
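For what it's worth, the polled alternative for a button can be as small as this (the sample rate, shift-register length and pull-up polarity are arbitrary choices of mine): the pin is sampled at a fixed rate and a press is only reported after the last several samples agree, so bounce and short glitches never reach the application.
Code: [Select]
#include <stdbool.h>
#include <stdint.h>

extern bool button_gpio_read(void);   /* raw pin read; placeholder hook,
                                         true = high = released (pull-up) */

/* Call at a fixed rate (e.g. every 1 ms from the main loop).
 * Returns true exactly once when the pin has been low for the last
 * seven samples immediately after a high sample.                         */
bool button_pressed_event(void)
{
    static uint8_t history = 0xFFu;   /* shift register of recent samples  */

    history = (uint8_t)((history << 1) | (button_gpio_read() ? 1u : 0u));
    return history == 0x80u;          /* 1000 0000: one old high, then
                                         seven consecutive low samples     */
}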
« Last Edit: October 21, 2015, 07:03:08 pm by nuno »
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #26 on: October 21, 2015, 07:48:49 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course being so damn quick at detecting the press of a button is not really the purpose; it's case-dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use the maximum of what the implemented hardware has to offer.
 

Offline Gall

  • Frequent Contributor
  • **
  • Posts: 310
  • Country: ru
Re: FPGA Vs µC
« Reply #27 on: October 21, 2015, 08:03:25 pm »
Remember that interrupts (and threads) were invented BEFORE modern compiler theory was born. The whole theory of automatic code optimization and a large part of lambda calculus was not known at that time.

As said, it is OK to use anything you want as long as it does not obstruct your proof of correctness. If you have an idea how you could prove the correctness of the specific code in the presence of a specific interrupt handler, go on. The main reason why many people avoid interrupts in critical code is that the proof is in many cases too complex for any practical use, while the code without interrupts can easily be proven. Just avoid anything you can't prove. Interrupts, threads and (sometimes) dynamic memory allocation are the usual candidates.

If, for some reason, you have multiple concurrent processes running (it does not matter if they are on the same MCU, on different MCUs, in different parts of an FPGA or just in pure hardware), be very careful with the communication between them. This is something that goes out of control too easily. 90% of the errors I've seen in source code in the past 10 years are more or less connected to that.
The difficult we do today; the impossible takes a little longer.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #28 on: October 21, 2015, 10:10:34 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course being so damn quick at detecting the press of a button is not really the purpose; it's case-dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use the maximum of what the implemented hardware has to offer.
An interrupt must be 100% predictable or your device won't work. Using interrupts for GPIO is, generally speaking, a big NO! Many years ago I worked at a company which developed a product that was to be installed in homes. I wasn't involved in the firmware & hardware, but this product had major problems with stability. One of the functions it had was a door-bell input. This input was directly connected to a GPIO pin on the processor. No filtering or protection whatsoever  :wtf: and many meters of unshielded wiring attached, so it needed a series resistor and a big capacitor to work reasonably. When testing a different circuit attached to this input, the device would halt under certain circumstances. It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing when the pin was halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and GPIO interrupts were meant for buttons :palm: :palm: :palm:
« Last Edit: October 21, 2015, 10:12:14 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: FPGA Vs µC
« Reply #29 on: October 21, 2015, 10:56:26 pm »
Well, interrupts are implemented for a reason. I personally find them useful in most cases. Of course being so damn quick at detecting the press of a button is not really the purpose; it's case-dependent. I guess feeling comfortable with polling is just because you know what the thing is doing step by step. I don't think I follow this logic; I tend to use the maximum of what the implemented hardware has to offer.
An interrupt must be 100% predictable or your device won't work. Using interrupts for GPIO is, generally speaking, a big NO! Many years ago I worked at a company which developed a product that was to be installed in homes. I wasn't involved in the firmware & hardware, but this product had major problems with stability. One of the functions it had was a door-bell input. This input was directly connected to a GPIO pin on the processor. No filtering or protection whatsoever  :wtf: and many meters of unshielded wiring attached, so it needed a series resistor and a big capacitor to work reasonably. When testing a different circuit attached to this input, the device would halt under certain circumstances. It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing when the pin was halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and GPIO interrupts were meant for buttons :palm: :palm: :palm:

Interrupts can be a problem. And so can the ISRs!

At a new job nearly 20 years ago, I had to get a temperature controller working. It was the usual sort of thing: a temperature sensor, an ADC for it, a resistive heater under DAC control. The processor was a DS5000T (an 8051 variant which had battery-backed SRAM as its program store), programmed in C (the Avocet compiler). The guy who designed the system and wrote the code had a PhD in control systems.

The complaint? "The serial communications link isn't working." The micro ran off the usual 11-ish MHz oscillator with the MAX232 level translators. The serial line between the temperature controller and the computer to which it talked was only a couple of feet (both were in the same VME rack). They had tried shielding the serial line, they tried changing the baud rate, they tried all sorts of stuff.

One thing they pointed out was that when the temperature-control loop was disabled, the communication was flawless. They thought that the processing was somehow causing EMI or whatever which was making the communications fail.

I started looking through the source code, initially looking at how the serial port was managed, and it seemed fairly textbook. No printf() but instead a buffer was loaded with a string and off it went. It was interrupt driven, so on receive the ISR would read SBUF and store the character read into a FIFO, and on transmit complete the ISR would look to see if anything was left in the transmit buffer and send it if so.

Then I started looking at the larger program, which was basically the PID loop. And then the alarm bells started going off. First, the control loop was written with all of the variables as floats. And it gets better: He would read the ADC, and convert from ADU into floating-point volts. (Volts = VREF / counts). The gain constants and all of the intermediate results were floats. The result of the loop was a floating-point heater current that got converted to DAC counts.

So he basically took a Matlab model and implemented it C.

So when did the loop update? Well, there was a timer set to interrupt at some reasonable interval. And when the interrupt fired, the ISR got called, and he did all of those floating-point loop update calculations in the ISR.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #30 on: October 21, 2015, 11:32:42 pm »
(...) And when the interrupt fired, the ISR got called, and he did all of those floating-point loop update calculations in the ISR.
He should at least have made it re-entrant :). Actually, that can be quite tricky too; I remember having used a re-entrant interrupt only once, and it was a very, very special case (I had no choice but to squeeeeeze the very last bit of performance out of an AVR to get the low-latency real-time behavior needed). My motto for avoiding problems is pretty simple: KISS.
« Last Edit: October 21, 2015, 11:35:01 pm by nuno »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #31 on: October 22, 2015, 12:23:43 am »
That is why one should always determine the real-time requirements (soft and hard) and also determine the maximum loop processing time analytically and/or using an instruction simulator.

Not forgetting to include the effects of the L1, L2 and L3 caches :(

The XMOS devices are particularly good in this respect; the IDE states the precise loop or block times.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #32 on: October 22, 2015, 10:23:50 am »
When system safety matters, I mean for the kind of application where a failure is life-threatening, which technology is better for easier software implementation and safer execution?

MPU and DSP, definitely!
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #33 on: October 22, 2015, 10:27:06 am »
MPU and DSP ...

..FPGAs are not a particularly non-volatile hardware implementation - the internal structure is fixed and adjusted by loads of multiplexers and switches that are controlled by a configuration that is loaded on power-up - as such, if you get a really nasty power spike (you assume EMP?), you will get a reset state (if not worse), just as you would with a microcontroller.

… for that reason!
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #34 on: October 22, 2015, 01:23:27 pm »
It is the wrong question; MPU/DSP/FPGA - none of the above is a solution, but any of the above may form part of a solution in life-critical applications.

A big part of the art is to design out any single (and sometimes dual, much harder) fault causing or allowing a dangerous condition to be maintained; FMEA is the term you are looking for.

A fun example of safety design is microwave oven door switches, which have one NC and one NO contact, the NC contact is wired to hard short the primary of the transformer while the NO contact isolates the power....
By design the NC contact opens before the NO contact closes, but in the event of a welded power contact the NC contact will short the supply when the door is opened blowing the fuse and rendering the oven safe.

Some of the things I have seen:
An additional monitored safety relay held in by a retriggerable monostable triggered by an edge on a GPIO pin, on startup you make sure the relay is NOT engaged, then when all safety conditions are met you start toggling the pin, wait a hundred ms then check the relay has pulled in, once all is good then you can proceed to command the power contactor on (Via the now closed contact) on the safety relay. In the event of a processor crash the monostable does not get retriggered and the opening of the safety relay causes the main power contactor to open.   

Often the startup routine is significantly complicated by also checking on expected default states, for example you read the state of the pressure switch (and error out if it is reporting high pressure) before you start the pump, then you check it is NOW reporting high pressure, then you check the coolant temperature is low, then you switch on the arc and check the coolant temperature climbs at a satisfactory rate, then....
You do not just start the pump and check the pressure switch to confirm pump operation, because the switch may be stuck.

Traffic light controls with relays fitted to short circuit the green lamp when any other lamp is green (Blown fuse hence OFF is much safer then conflicting greens).

Safety critical GPIs are often really analogue and are biased to somewhere mid rail, so that a broken connection can be detected (as can a short circuit if the switch is changeover and has lowish value resistors pulling up and down), the automotive crowd love this one. 

The really tricky stuff is when you have a situation where there is NOT an obvious safe condition to default to, often the case in automotive. If your engine management system throws a watchdog reboot it is quite possibly NOT safe to just shut down and light 'check engine', particularly if you are trying to overtake that semi on the country road at the time (worse, it may take a SECOND to get some of the status from the slower CAN nodes that you really need in order to decide what to do). Not easy.

Regards, Dan.
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #35 on: October 22, 2015, 05:13:13 pm »
An additional monitored safety relay held in by a retriggerable monostable triggered by an edge on a GPIO pin, on startup you make sure the relay is NOT engaged, then when all safety conditions are met you start toggling the pin, wait a hundred ms then check the relay has pulled in,

I did the same with a relay powered between a negative rail and ground; the negative voltage is generated by toggling a GPIO into a capacitor (a small charge pump), so if the MCU fails it stops generating pulses and the relay is no longer powered.

It is the wrong question; MPU/DSP/FPGA - none of the above is a solution, but any of the above may form part of a solution in life-critical applications.

The question is more about how much the hardware's capabilities help to favour one technology over the other.
But as I understand it, most participants don't see the hardware as the problem, or as any safer - whatever it is, MCU / DSP / FPGA - if the software is not well written.
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #36 on: October 22, 2015, 08:03:58 pm »
The gotcha is always complexity, and neither the FPGA nor the CPU has really good tools to manage that complexity; worse, even absent toolchain bugs, there is usually not enough isolation to allow you to easily reason about parts of these systems small enough to be reasonably verified in isolation.

Software is often actually slightly more problematic this way than FPGA firmware, if only because the FPGA stuff usually ends up being strictly synchronous with constrained timing; there is no such thing as an interrupt between arbitrary machine instructions on an FPGA.

It is one thing to test that "If a and b then x within 10ms", it is orders of magnitude harder to verify that x ONLY ever occurs within 10ms of a and b becoming true, but for many safety cases that is the requirement, yet all too many tests only test the first bit (And I have a nasty feeling that the Church-Turing thesis has a few things to say about testing the second).

73 Dan.
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4102
  • Country: us
Re: FPGA Vs µC
« Reply #37 on: October 22, 2015, 08:31:09 pm »
Quote
It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing when the pin was halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and GPIO interrupts were meant for buttons :palm: :palm: :palm:
Well seems like there are three issues here.

1.Signal integrity/circuit-design.
2.Debouncing
3.Interrupt handling

1. This is an issue either way. It could potentially be alleviated or fixed, despite a shoddy signal, with debouncing. This holds equally true for either method. Only with an ISR you can do this in the background with timers instead of stopping the main code loop.

2. Debouncing: this can be achieved with either method.

3. Interrupt handling: You could set up the GPIO interrupt to time out after a switch is detected, in addition to or as part of the debouncing, using a timer interrupt, thus effectively giving it a minimum period, similar to polling. It could be something like: the GPIO interrupt triggers; the ISR turns off the GPIO interrupt and turns on the timer0 interrupt; check the state of the input pin after X time has passed, a number of times. After a switch is detected, set a flag and/or perform the immediate task. Continue the timer-interrupt polling until the input returns to its resting state, then use the timer interrupt to turn the GPIO interrupt back on after X time has passed. If debouncing fails, then use the timer to turn the GPIO interrupt back on after X time has passed.
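A rough sketch of that scheme in C (the hook names, the 1 ms tick and the five-sample counts are placeholders, not any particular part's API): the pin-change ISR disarms itself and hands the pin over to a periodic timer interrupt, which samples it until the press is proven and only re-arms the pin interrupt once the input has been back at rest for a while.
Code: [Select]
#include <stdbool.h>
#include <stdint.h>

/* Placeholder hardware hooks for a generic MCU. */
extern void gpio_irq_enable(void);
extern void gpio_irq_disable(void);
extern void timer_irq_start(void);        /* periodic, e.g. every 1 ms   */
extern void timer_irq_stop(void);
extern bool pin_is_active(void);          /* true while the switch is on */

volatile bool button_event = false;       /* consumed by the main loop   */

/* Pin-change ISR: do almost nothing, just hand over to the timer. */
void gpio_isr(void)
{
    gpio_irq_disable();                   /* stop any interrupt storm    */
    timer_irq_start();
}

/* Timer ISR: poll the pin until the state is proven, then re-arm. */
void timer_isr(void)
{
    static uint8_t active_count = 0;
    static uint8_t idle_count   = 0;

    if (pin_is_active()) {
        idle_count = 0;
        if (active_count < 5u && ++active_count == 5u)
            button_event = true;          /* five stable samples = press */
    } else {
        active_count = 0;
        if (++idle_count >= 5u) {         /* five idle samples = at rest */
            idle_count = 0;
            timer_irq_stop();
            gpio_irq_enable();            /* re-arm the pin interrupt    */
        }
    }
}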

If you can trust a WDT to keep the MCU running in mission-critical situations, it seems like other interrupts can be relied upon if implemented correctly - complexity of proving them notwithstanding.

If you told me that, I wouldn't believe it. A button interrupt is very useful for waking a micro from sleep via user input, and quite commonly used for such a task. I would fix the code (and the circuit). But of course this is a doorbell, not a manned vehicle control system. It seems like problem 1 is the main issue. The other example, of randomly performing a super-long calculation within the ISR (and not turning off that particular interrupt during a timing-sensitive task), is super noob. Perhaps he could have just set a flag in the ISR which triggers the calculation somewhere in the main program loop, so that it could itself be interrupted.

I would have thought that timer interrupts, in particular, would be quite useful in mission critical applications. The WDT being one perfectly good and valid example, IMO. WDT is barely more, really, than a timer interrupt with a RESET instruction. It might run off a different clock source, of course, which is the "barely more" part. It might also set a flag that can be detected after RESET, but that's still nothing you couldn't do with a regular timer interrupt.

Quote
there is no such thing as an interrupt between arbitrary machine instructions on an FPGA.
You can turn interrupts on/off in the software before starting code blocks that cannot be interrupted. I suppose this doesn't help so much in C, where finding a safe place to enable interrupts might be more challenging without a very deep understanding (or trust) of the compiler and libraries.
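A minimal example of that point in C (the enable/disable functions are placeholders for whatever the part provides, e.g. a global interrupt mask): the read-and-clear of a counter shared with an ISR is wrapped in a short critical section, so the ISR can never fire between the read and the clear.
Code: [Select]
#include <stdint.h>

/* Placeholder intrinsics -- on a real MCU these map to instructions
 * such as cpsid/cpsie i (Cortex-M) or cli/sei (AVR). */
extern void interrupts_disable(void);
extern void interrupts_enable(void);

static volatile uint32_t rx_count = 0;   /* written only by the ISR   */

void uart_rx_isr(void)
{
    rx_count++;                          /* single writer: the ISR    */
}

/* Main-loop side: atomically fetch and reset the counter. */
uint32_t rx_count_take(void)
{
    interrupts_disable();                /* critical section begins   */
    uint32_t n = rx_count;               /* read...                   */
    rx_count = 0;                        /* ...and clear, untorn      */
    interrupts_enable();                 /* critical section ends     */
    return n;
}
The price, of course, is that every such section adds to the worst-case interrupt latency, which is exactly the kind of thing that has to show up in the timing analysis discussed earlier.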

So far, the main reason not to use interrupts in mission-critical apps seems to be human error. Surely this is a good enough reason. But it also seems like using interrupts can greatly simplify code in certain situations - perhaps sometimes enough that it makes things easier to prove? Heck, some programs could be 99% ISR and 1% code loop. This could be the easiest way to write a given program.

There's my 2 cents. Now I'm really broke.
« Last Edit: October 22, 2015, 09:58:49 pm by KL27x »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #38 on: October 22, 2015, 10:49:00 pm »
Quote
It turned out the firmware programmer had decided to use a GPIO interrupt, which kept firing when the pin was halfway between logic levels  :palm: Worst of all, he and the R&D manager refused to believe me that doing that was the stupidest thing to do -ever-. The company nearly went under from lawsuits due to the product not working properly, and here they were telling me they were doing a good job and GPIO interrupts were meant for buttons :palm: :palm: :palm:
A button interrupt is very useful for waking a micro from sleep via user input, and quite commonly used for such a task.
A wake-up event usually fires once and isn't really an interrupt in many cases.
Quote
I would fix the code (and the circuit).
Fixing the circuit usually isn't an option so you have to fix the software.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4102
  • Country: us
Re: FPGA Vs µC
« Reply #39 on: October 23, 2015, 01:05:27 am »
Quote
A wake-up event usually fires once and isn't really an interrupt in many cases.
Agree to disagree on both counts. A wake-up event is a true interrupt on some devices, and it may be expected to occur frequently.

Quote
Fixing the circuit usually isn't an option so you have to fix the software.
In this particular example, you might use a timer interrupt to PWM the IO pin in question to output high at a 1% duty cycle, with a short enough period not to induce a brownout when the button is pressed. With the RC circuitry on it, this should fix things nicely. Of course you could also do that in your code loop, but that might not be practical. Anyhow, the primary problem is obviously with the circuit.
« Last Edit: October 23, 2015, 01:22:10 am by KL27x »
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: FPGA Vs µC
« Reply #40 on: October 24, 2015, 04:08:25 am »
The other example, of randomly performing a super-long calculation within the ISR (and not turning off that particular interrupt during a timing-sensitive task), is super noob. Perhaps he could have just set a flag in the ISR which triggers the calculation somewhere in the main program loop, so that it could itself be interrupted.

It was total super noob. Like I said, the guy who did it was a freshly-minted PhD in controls, with a lot of Matlab/Simulink experience and zilch with anything embedded. Characters to and from the UART were simply dropped on the floor because the serial interrupt wasn't being handled in time due to being blocked by the timer ISR with its ridiculous non-re-entrant floating-point calculations.

My solution was of course exactly what you suggested: the timer interrupt sets a flag and exits, then the main loop looks for the set flag and does the control-loop update. I also rewrote the control-loop update code to use integers instead of floats, which is completely obvious to everyone reading here. The loop update always completed before the timer expired and signaled that it was time to do the next update.
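The shape of that fix, as a hedged C sketch (the gains, the scaling and the hardware hooks are illustrative, not the original DS5000T code): the timer ISR only sets a flag, and the main loop runs the update in integer arithmetic, so the serial ISR is never blocked for long.
Code: [Select]
#include <stdbool.h>
#include <stdint.h>

extern uint16_t adc_read(void);            /* placeholder hardware hooks */
extern void     dac_write(uint16_t counts);

static volatile bool update_due = false;

/* Timer ISR: as short as possible, only records that an update is due. */
void timer_isr(void)
{
    update_due = true;
}

/* Proportional-integral update in fixed point; gains are example values. */
static void control_update(void)
{
    static int32_t integral = 0;
    const  int32_t setpoint = 2048;        /* target in ADC counts        */

    int32_t error = setpoint - (int32_t)adc_read();

    integral += error;
    if (integral >  100000) integral =  100000;   /* crude anti-windup    */
    if (integral < -100000) integral = -100000;

    /* Kp = 3/4 and Ki = 1/64 expressed with integer divides.             */
    int32_t out = (error * 3) / 4 + integral / 64;

    if (out < 0)    out = 0;               /* clamp to the DAC range      */
    if (out > 4095) out = 4095;
    dac_write((uint16_t)out);
}

int main(void)
{
    for (;;) {
        if (update_due) {
            update_due = false;
            control_update();              /* runs in main context, so the
                                              UART ISR can preempt it     */
        }
        /* ...poll the serial buffers etc. here... */
    }
}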
 

Offline obiwanjacobi

  • Frequent Contributor
  • **
  • Posts: 988
  • Country: nl
  • What's this yippee-yayoh pin you talk about!?
    • Marctronix Blog
Re: FPGA Vs µC
« Reply #41 on: October 24, 2015, 04:48:43 am »
Fascinating what is being discussed here. I am but a hobbyist but love the best practices information being passed.

I always thought that interrupts were the best way to utilize the MCU's capabilities to their fullest extent..? I love how things just happen in the background. And I understand that putting an interrupt on external signals (like buttons) can backfire, but what about peripherals? I recently wrote a Usart class (yes, C++) and used the interrupt for reading from and writing to a ring buffer. Is there any signal you could apply that would make that go haywire? I thought that would be pretty safe because of the hardware support - surely they made it robust - didn't they?  :-//
Arduino Template Library | Zalt Z80 Computer
Wrong code should not compile!
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #42 on: October 24, 2015, 11:02:42 am »
I don't know what the guys at critical systems do, but I suppose they use interrupts too for the internal peripherals, as long as it's not something that can go "out of control". For a UART? I use it. Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler, you just raise a flag or something and let the main loop deal with the bulk of it. Except, of course, if it's something you absolutely need to do "at interrupt time" (low latency response) - for which it is even more critical to have the other interrupts as short as possible (if any other interrupts...). Beyond that, KISS KISS KISS...
« Last Edit: October 24, 2015, 11:04:21 am by nuno »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #43 on: October 24, 2015, 12:20:50 pm »
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler, you just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to make an analysis of how much time is needed for each task and how quickly an interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often creates the need for the main thread to become timing critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications I have the controller spend over 90% of its time in interrupts.
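For the flavour of it, a minimal sketch (placeholder peripheral calls; the "processing" here is just a toy one-pole filter):

```c
/* Sketch of doing the whole processing chain inside the ADC interrupt.
   ADC/DAC calls are placeholders for whatever the target part provides. */
#include <stdint.h>

extern void     ADC_ClearFlag(void);   /* placeholder IRQ acknowledge        */
extern uint16_t ADC_ReadResult(void);  /* placeholder conversion-result read */
extern void     DAC_Write(uint16_t v); /* placeholder output stage           */

static int32_t y;                      /* filter state */

void ADC_IRQHandler(void)              /* runs once per sample */
{
    ADC_ClearFlag();

    int32_t x = (int32_t)ADC_ReadResult();

    y += (x - y) / 16;                 /* first-order low-pass, integer math only */

    DAC_Write((uint16_t)y);            /* result leaves the ISR fully processed */
}
```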
« Last Edit: October 24, 2015, 12:22:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7799
  • Country: us
  • adieu
Re: FPGA Vs µC
« Reply #44 on: October 24, 2015, 12:40:07 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.
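For example, on a Cortex-M this maps directly onto the NVIC. A sketch using CMSIS calls (the IRQ names are STM32-style placeholders and the device header is an assumption; they differ per part):

```c
/* Assumed device header: provides the IRQn names and the CMSIS NVIC calls. */
#include "stm32f1xx.h"

void interrupt_priorities_init(void)
{
    /* Lower number = higher (more urgent) priority on Cortex-M. */
    NVIC_SetPriority(TIM2_IRQn,   0);  /* control-loop timer may preempt the others */
    NVIC_SetPriority(USART1_IRQn, 2);  /* UART sits in the middle                   */
    NVIC_SetPriority(EXTI0_IRQn,  3);  /* GPIO/button interrupt is least urgent     */

    NVIC_EnableIRQ(TIM2_IRQn);
    NVIC_EnableIRQ(USART1_IRQn);
    NVIC_EnableIRQ(EXTI0_IRQn);

    /* Each source can also be masked on its own while the rest keep running: */
    NVIC_DisableIRQ(EXTI0_IRQn);
}
```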
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #45 on: October 24, 2015, 01:34:08 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.

It is a fallacy to think that, in hard realtime critical systems, multiple priority levels solve problems - and that's true for thread/task priority levels or interrupt priority levels. Of course, if the system is neither hard realtime nor critical, then they may help.

It should be noted that multiple priority levels can introduce their own problems and/or make problems rare, transitory and virtually impossible to debug.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7799
  • Country: us
  • adieu
Re: FPGA Vs µC
« Reply #46 on: October 24, 2015, 01:43:27 pm »
A lot of the interrupt trouble can be alleviated by using a microcontroller that supports multiple interrupt levels, where higher levels can safely interrupt lower levels, and levels can be enabled and disabled separately.

It is a fallacy to think that, in hard realtime critical systems, multiple priority levels solve problems - and that's true for thread/task priority levels or interrupt priority levels. Of course, if the system is neither hard realtime nor critical, then they may help.

Yes, realtime systems are a unique problem wrt interrupts, and I imagine could be miserable with a multilevel interrupt controller.

Quote
It should be noted that multiple priority levels can introduce their own problems and/or make problems rare, transitory and virtually impossible to debug.

Interrupts in general can be miserable to debug.
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #47 on: October 24, 2015, 02:46:40 pm »
Interrupts in general can be miserable to debug.
True, but real misery is running out of stack ONLY when 4 async interrupts occur in precisely the wrong order and close enough together that at each step the higher-priority one interrupts the lower.....

Even something as simple as a UART ISR putting bytes into a ring buffer (which is a very standard sort of thing to do) makes reasoning about the ring buffer **HARD**, particularly when the core does out-of-order execution or load/store reordering. Memory barriers and volatile are your friends, and it is still hard to be sure you have not left a race somewhere.
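For reference, the standard single-producer / single-consumer shape is roughly this (a minimal sketch with a placeholder UART read; volatile alone is only enough on a simple in-order core - with store reordering you still need barriers around publishing the head index, which is exactly the hard part):

```c
/* SPSC ring buffer sketch: the RX ISR is the only writer of rb_head,
   the main loop is the only writer of rb_tail. */
#include <stdbool.h>
#include <stdint.h>

#define RB_SIZE 64u                       /* power of two so the wrap mask works */

extern uint8_t UART_ReadByte(void);       /* placeholder read of the RX register */

static uint8_t           rb_data[RB_SIZE];
static volatile uint32_t rb_head;         /* written only in the ISR  */
static volatile uint32_t rb_tail;         /* written only in main     */

void UART_RX_IRQHandler(void)             /* producer */
{
    uint8_t  byte = UART_ReadByte();
    uint32_t next = (rb_head + 1u) & (RB_SIZE - 1u);

    if (next != rb_tail) {                /* if full, the byte is simply dropped */
        rb_data[rb_head] = byte;
        rb_head = next;                   /* publish only after the data is written */
    }
}

bool rb_pop(uint8_t *out)                 /* consumer, called from the main loop */
{
    if (rb_tail == rb_head)
        return false;                     /* empty */
    *out = rb_data[rb_tail];
    rb_tail = (rb_tail + 1u) & (RB_SIZE - 1u);
    return true;
}
```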

Regards, Dan.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: FPGA Vs µC
« Reply #48 on: October 24, 2015, 02:55:39 pm »
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler, you just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to make an analysis of how much time is needed for each task and how quickly an interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often creates the need for the main thread to become timing critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications I have the controller spend over 90% of its time in interrupts.
Your mileage may vary ;). I don't always do that; I have done a lot of processing in interrupts. But that's my advice for someone starting out, because it minimizes concurrency.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #49 on: October 24, 2015, 04:10:43 pm »
Even something as simple as a UART ISR putting bytes into a ring buffer (which is a very standard sort of thing to do) makes reasoning about the ring buffer **HARD**, particularly when the core does out-of-order execution or load/store reordering. Memory barriers and volatile are your friends, and it is still hard to be sure you have not left a race somewhere.

And don't forget presuming cache consistency in multicore machines - especially if there is a different o/s running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #50 on: October 24, 2015, 04:22:08 pm »
Interrupts in general can be miserable to debug.

That's when a (non-intrusive) true ICE + hardware tracer (with 20 ns resolution) makes the difference  :D
I have been designing one for my soft core for a year; it's not completed yet, but … it has already made me happy about interrupts.
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #51 on: October 24, 2015, 06:22:53 pm »
- especially if there is a different o/s running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
Ouch!
You mean people actually do that with a shared cache?
That is right up there with writing your own switch to protected mode on X86, double fault city....

Now I am a hardware guy by inclination, but that scares me.

Regards, Dan.
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #52 on: October 24, 2015, 07:11:39 pm »
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler, you just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to make an analysis of how much time is needed for each task and how quickly an interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often creates the need for the main thread to become timing critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications I have the controller spend over 90% of its time in interrupts.

+1

In most of the applications I have done recently, the ADC, USART and SPI are handled by interrupts and DMA. My main loop usually grabs the data it needs and does what it does, and then it's mostly just the ADC interrupt doing some work, 99% of the time.

In some old applications I wrote on PIC, the interrupts do 100% of the job (ADC and one timer), with absolutely no code in the main loop.
From my tests, when the main loop hangs, code triggered by interrupts still runs when it should, even with the main loop stuck.
Once I had the I2C bus hang (in the main loop) but the ADC was still running from its interrupt. In my case this was not much use, but it can be useful, for example, to have a timer interrupt that checks the I2C bus and un-hangs it; there is a much better chance that the timer interrupt will still get executed.
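Something along these lines (just a sketch with placeholder names; the recovery itself depends on the part - re-initialising the peripheral, manually clocking SCL, etc.):

```c
/* A periodic timer interrupt acting as a software watchdog for the I2C bus,
   so even if the main loop is stuck waiting on I2C, something independent
   can notice and reset the peripheral. All I2C/timer calls are placeholders. */
#include <stdbool.h>
#include <stdint.h>

#define I2C_TIMEOUT_TICKS 100u            /* ticks of no progress before we give up */

extern void TIMER_ClearFlag(void);        /* placeholder IRQ acknowledge             */
extern bool I2C_TransferInProgress(void); /* placeholder "is the bus busy" check     */
extern void I2C_ResetPeripheral(void);    /* placeholder recovery action             */

static volatile uint32_t i2c_stuck_ticks;

void i2c_progress(void)                   /* call from the I2C code whenever a byte completes */
{
    i2c_stuck_ticks = 0u;
}

void TIMER_IRQHandler(void)               /* keeps running even when the main loop hangs */
{
    TIMER_ClearFlag();

    if (!I2C_TransferInProgress()) {
        i2c_stuck_ticks = 0u;
    } else if (++i2c_stuck_ticks >= I2C_TIMEOUT_TICKS) {
        I2C_ResetPeripheral();
        i2c_stuck_ticks = 0u;
    }
}
```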
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #53 on: October 24, 2015, 07:34:30 pm »
- especially if there is a different o/s running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
Ouch!
You mean people actually do that with a shared cache?
That is right up there with writing your own switch to protected mode on X86, double fault city....

Now I am a hardware guy by inclination, but that scares me.

Regards, Dan.

Look up the Xilinx Zynq: FPGA + dual-core ARM Cortex-A9 with internal memory and external memory; it can run an RTOS on one core and Linux on the other. Main memory is, of course, only one level in the hierarchy from registers, through cache, to disk and the cloud.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline BravoV

  • Super Contributor
  • ***
  • Posts: 7547
  • Country: 00
  • +++ ATH1
Re: FPGA Vs µC
« Reply #54 on: October 24, 2015, 07:48:15 pm »
I made a new thread -> HERE, solely for discussing the pros & cons of using interrupts, as the OP and topic here are about FPGA vs uC, cmiiw.

Hope you experts and experienced fellows don't mind jumping in there to share some thoughts.


Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: FPGA Vs µC
« Reply #55 on: October 24, 2015, 08:07:08 pm »
Both - or none.

It depends upon how good the design is verified, not how it is implemented. An µC programmed in Ada by a trained person is very reliable (military-grade). An µC programmed in Assembler is not reliable. C99 or even C++ is better provided that you have a skilled developer (it's virtually impossible to find a C++ one) but still not as good as Ada.

A Verilog or VHDL FPGA design is somewhere between C99 and Ada. It is quite easy to verify. On the other side, it would be probably larger which makes it harder to maintain (and increases the probability of verifying it incorrectly.

Do not trust tests. Only a formal proof guarantees that your product works well. A formal proof is easily doable in Ada, VHDL or Verilog, a bit harder in C99 and good C++11 code. It is hardly possible in poorly-written C++ or C89 code.

May we get an example of such proof in those languages? Because formal proof isn't something people talk about often, so most of us are not familiar with this.

Also, if we want to program in Ada, which MCU development environments allow Ada programming? (I am not talking about compiling my own GCC toolchain with Ada enabled.)

You don't need to program in Ada to be able to prove program correctness - i.e. that a program execution stops (doesn't run in an endless loop forever, never returning an answer) and that the answer returned is actually correct. Both of these have to be true for a program to be considered totally correct (otherwise it is only partially correct). Special cases, such as MCU code running forever, are a trivial extension - each iteration of the main loop has to be correct.

You can even formally verify assembler if you want (so no, there is nothing inherently "unreliable" about assembler - let's not talk about long-term maintenance issues). Formal verification has nothing whatsoever to do with the choice of programming language; however, some languages do make things easier (or more difficult).

One example of a formal method that is still used and targets structured program schemes - i.e. languages similar to Pascal, C, Java, C++, etc. that use nested high-level control blocks (unlike e.g. Fortran, Basic or assembler) - is the Hoare method. From a very high-level point of view, the Hoare method is basically mathematical induction over the program blocks. If I prove that all sub-blocks are correct and are correctly linked together using pre- and post-conditions, then I can conclude that my current block is correct as well. You continue doing this until you arrive at the top-level block of your program (e.g. your main() function).
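To give a feel for the kind of annotation involved, here is a toy example (an invented function, not a full proof):

```c
/* Toy example of Hoare-style annotation: each block carries a precondition {P}
   and postcondition {Q}, plus a loop invariant, and the proof is assembled by
   induction over the block structure. */

/* {P: n >= 0} */
int sum_to_n(int n)
{
    int i = 0, s = 0;
    /* loop invariant: 0 <= i <= n  and  s == i*(i+1)/2 */
    while (i < n) {
        i = i + 1;
        s = s + i;
        /* invariant re-established for the new i */
    }
    /* on exit: i == n, therefore s == n*(n+1)/2 */
    return s;
}
/* {Q: return value == n*(n+1)/2}
   This is partial correctness only; termination (i strictly increases towards n)
   has to be argued separately, as noted below. */
```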

I am not going to do a proof here, that would be too long, but you can find an example of this here:
http://www.cs.cmu.edu/~aldrich/courses/654-sp07/slides/7-hoare.pdf
and  here:
http://www.slidefinder.net/h/hoare_method_proving_correctness_programs/c21_hoare/15496132

Now, the Hoare method proves only partial correctness - that is, the result is going to be correct if the program stops. It does not guarantee that the program actually stops - that you have to prove separately.

Now, if you want to prove correctness of something like assembler code or Basic/Fortran, you can use Floyd's method instead. A quick introduction is here (yes, the techniques are THAT old - 1967, Hoare 1969 ...):
https://users.cs.duke.edu/~raw/cps206/ProgramCorrectness.htm

It is more laborious and you have to set good invariants, otherwise your proof will not be good for much.

Both of these methods can be greatly helped if your programming language has some special features. E.g. if you are doing design by contract, like in Ada, you are going to have a much easier time proving your program correct, because you can rely on this language feature.

Another thing is side effects - if your functions are free of side effects (aka pure: their result depends only on their arguments and they do not modify anything else), then the proof is going to be much easier. Complicated side effects, where the behaviour of a function could depend on things outside of it (and thus not covered by pre/post-conditions or invariants), could make the proof very difficult or impossible. This is one reason why functional programming is so popular among theoretical computer scientists - functional programming deals only with pure functions, so the problem with side effects is eliminated. That has some practical consequences, because a real program needs at least some side effects - such as I/O - and there are complex theoretical frameworks dealing with these (e.g. monads in Haskell).
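A trivial illustration of the difference (invented example):

```c
/* The impure version's result depends on hidden state, so any pre/post-condition
   for it has to drag that state along; the pure version is fully described by
   its arguments and return value. */

static int scale_factor = 3;        /* hidden global state */

int scale_impure(int x)
{
    return x * scale_factor;        /* behaviour changes if anything else touches scale_factor */
}

int scale_pure(int x, int factor)
{
    return x * factor;              /* depends on the arguments alone, modifies nothing */
}
```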

A final remark - FYI, neither the Space Shuttle nor the Apollo guidance computer code was formally verified (computers powerful enough to do the verification in a reasonable time weren't available). However, they were employing some very good software engineering practices. So that could be worth more than trying to go into formal verification, which is extremely hard for anything but trivial programs.
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: FPGA Vs µC
« Reply #56 on: October 24, 2015, 08:13:54 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = running 2 parallel asynchronous tasks).

I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).

 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #57 on: October 24, 2015, 08:29:42 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = running 2 parallel asynchronous tasks).

I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).



They certainly do, but how safely is the task being shared? I'm sure that if they need safe execution, both (or however many MCUs they use) need to share input data and take similar action on the hardware. So basically it's the same process running on different chips. Old boiler control units (maybe they still exist) used that for flame ignition; now they use one MCU, but the firmware has to be written with a Class B library.
I'm sure in cars it's a main MCU that handles most of the critical things and a lot of auxiliary MCUs doing all sorts of other stuff, but I never worked on such a system so I can't be sure about that.

Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do that, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache
« Last Edit: October 24, 2015, 08:40:52 pm by hamdi.tn »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #58 on: October 24, 2015, 08:34:19 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = running 2 parallel asynchronous tasks).
I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).
I never said it couldn't be done, but there is much more involved than just slapping two microcontrollers on a board! Actually, if the microcontrollers sit on one board there needs to be a very good reason not to use a single microcontroller.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Tabs

  • Regular Contributor
  • *
  • Posts: 106
  • Country: gb
Re: FPGA Vs µC
« Reply #59 on: October 24, 2015, 08:47:43 pm »
Both have their uses in safety-critical applications. Your selection really depends on the complexity of the task. If it's a simple application that can be done with an MCU, then KISS. The real difficulty is in the burden of proof/verification required to meet your safety standard.

You didn't say which standard or industry you are trying to develop for?
I left the avionics industry 2 years ago, and the last project I worked on was to consider the use of multicore CPUs for DAL A applications.

We had to prove that all cores were segregated (each with separate memory management units, cache levels, etc). Where we couldn't segregate, we had to disable:
i.e. the L3 cache had to be disabled. The OSes and associated kernels on each core had to sit on top of a hypervisor which controlled access to the hardware. You're not allowed to use any form of code optimization that would result in loss of deterministic code execution (so no branch prediction or out-of-order execution).
Verifying all this presents a massive problem for h/w, and there's no way to do it without the manufacturer of each device helping you. You wouldn't be able to use Zynqs (or anything with an ARM core), or Intels (proprietary h/w in the chip, details of which are never made public).
As far as I was aware, the use of multicore processors was so new that there weren't any guidelines on it.
EASA actually commissioned a consortium of avionics developers (of which my employer, but not me, was part) to investigate the use of multicore CPUs and the Kintex FPGA. Link below:

http://easa.europa.eu/system/files/dfu/CCC_12_006898-REV07%20-%20MULCORS%20Final%20Report.pdf

It gives you a good idea of some of the things you have to look into for H/W and S/W. I remember they announced the purchase of OS provider SysGO for 20m because it was cheaper than developing one in-house and verifying it.
 

 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4102
  • Country: us
Re: FPGA Vs µC
« Reply #60 on: October 24, 2015, 08:49:14 pm »
Quote
Quote
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler, you just raise a flag or something and let the main loop deal with the bulk of it.

NO. A really big NO!
If you have to do an ADC reading at the exact time of the interrupt, then of course you will do it in the ISR. Or the triggering event may precede the exact time you need to run the ADC... in which case you may use a timer interrupt to get the reading at the exact time you need, rather than wait in the ISR. The idea that was conveyed was to spend as little time in the ISR as possible, in general. And, in general, that is a big YES, IMO.

I think of ISRs as processor bandwidth. How much bandwidth does each of your ISR routines take up under various conditions? The example of the doorbell is a good one, because it shows how an interrupt can take up an unexpected amount of bandwidth. Timer interrupts, however, are extremely easy to calculate and/or observe on a scope - easier than managing the same in a code loop, which changes as you add/edit code.

Prioritized interrupts are just another tool. It is just as easy to prioritize interrupts in software, just not necessarily as instantaneous. You can poll for a higher-priority interrupt flag within an ISR and call the higher-priority service routine from within the lower-priority one. Because you have control over where that call occurs, you have more control over stack management, so that the high-priority interrupt can't occur at the bottom of a 10-deep lower-priority ISR sub-sub-subroutine... at the cost of not being truly instantaneous. Even when using a device with prioritized interrupt levels, I have yet to find a good reason to use them. I have more control in software.
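What I mean, as a bare sketch (the flag checks and work functions are placeholders):

```c
/* Software prioritisation on a single-level interrupt controller: the slow,
   low-priority ISR checks at chosen points whether the high-priority event has
   become pending and services it inline. Latency is bounded by where the checks
   are placed, and the stack depth at those points is under our control. */
#include <stdbool.h>

extern void LOW_PRIO_ClearFlag(void);        /* placeholder IRQ acknowledges        */
extern void HIGH_PRIO_ClearFlag(void);
extern bool HIGH_PRIO_FlagSet(void);         /* placeholder pending-flag check      */
extern void slow_work_part1(void);           /* placeholder chunks of the slow job  */
extern void slow_work_part2(void);

static void high_prio_service(void)          /* the short, urgent handler */
{
    HIGH_PRIO_ClearFlag();
    /* ... time-critical handling ... */
}

void LOW_PRIO_IRQHandler(void)
{
    LOW_PRIO_ClearFlag();

    slow_work_part1();
    if (HIGH_PRIO_FlagSet())                 /* software "preemption point" */
        high_prio_service();

    slow_work_part2();
    if (HIGH_PRIO_FlagSet())
        high_prio_service();
}
```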
« Last Edit: October 24, 2015, 09:43:29 pm by KL27x »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #61 on: October 24, 2015, 08:54:57 pm »
Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do that, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache

Consider "high availability" systems such as clustered telecoms controllers. Here there is a single cluster consisting of multiple machines with shared state. There is one primary master and a secondary master that takes over when the primary master dies. Now consider a network fault in which the primary and secondary masters become split so they both think they are the primary master -and then the network fault is removed. This is colloquially called the "split brain problem", and it does not have a clean solution.

Similar problems occur in a token ring network that becomes partitioned, each with their own token. When they are reconnected there is the "difficult" issue of which token should be dropped.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #62 on: October 24, 2015, 10:40:28 pm »
Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do that, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache

Consider "high availability" systems such as clustered telecoms controllers. Here there is a single cluster consisting of multiple machines with shared state. There is one primary master and a secondary master that takes over when the primary master dies. Now consider a network fault in which the primary and secondary masters become split so they both think they are the primary master -and then the network fault is removed. This is colloquially called the "split brain problem", and it does not have a clean solution.

Similar problems occur in a token ring network that becomes partitioned, each with their own token. When they are reconnected there is the "difficult" issue of which token should be dropped.


i will have "split brain problem" once i figure out how to make everything in a software safe and could prove it  :-DD
well that's an other problem to consider ...
i think starting with this thread we end up
- trying to define what a safe system is how we can prove it is.
- trying to figure out if straight forward polling software is safer than main loop with bunch of background tasks triggered by interrupt.
- trying to figure out if multi-processor are better than a single one.

no i think i already have a split brain problem now xD
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #63 on: October 24, 2015, 10:42:24 pm »
Both have their uses in safety-critical applications. Your selection really depends on the complexity of the task. If it's a simple application that can be done with an MCU, then KISS. The real difficulty is in the burden of proof/verification required to meet your safety standard.

You didn't say which standard or industry you are trying to develop for?
I left the avionics industry 2 years ago, and the last project I worked on was to consider the use of multicore CPUs for DAL A applications.

We had to prove that all cores were segregated (each with separate memory management units, cache levels, etc). Where we couldn't segregate, we had to disable:
i.e. the L3 cache had to be disabled. The OSes and associated kernels on each core had to sit on top of a hypervisor which controlled access to the hardware. You're not allowed to use any form of code optimization that would result in loss of deterministic code execution (so no branch prediction or out-of-order execution).
Verifying all this presents a massive problem for h/w, and there's no way to do it without the manufacturer of each device helping you. You wouldn't be able to use Zynqs (or anything with an ARM core), or Intels (proprietary h/w in the chip, details of which are never made public).
As far as I was aware, the use of multicore processors was so new that there weren't any guidelines on it.
EASA actually commissioned a consortium of avionics developers (of which my employer, but not me, was part) to investigate the use of multicore CPUs and the Kintex FPGA. Link below:

http://easa.europa.eu/system/files/dfu/CCC_12_006898-REV07%20-%20MULCORS%20Final%20Report.pdf

It gives you a good idea of some of the things you have to look into for H/W and S/W. I remember they announced the purchase of OS provider SysGO for 20m because it was cheaper than developing one in-house and verifying it.

well that's an interesting doc that i will certainly take time to look at , thanks for sharing   :-+
 

Offline MT

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: aq
Re: FPGA Vs µC
« Reply #64 on: October 25, 2015, 01:05:01 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.

Let's say that today you had a single MCU running at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?
 

Offline hamdi.tnTopic starter

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #65 on: October 25, 2015, 08:28:58 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.

Let's say that today you had a single MCU running at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?


It's not a performance issue; we are talking about a reliability issue in multi-MCU designs. So 1K of memory or 1M of memory, 8 MHz or 1 THz, theoretically you will face the same problems.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #66 on: October 25, 2015, 08:57:16 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.
Let's say that today you had a single MCU running at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?

You would get the same problems, only with less latency.

Whether they would be more frequent would depend on whether the problem was provoked by the external system. Thus multiprocessor cache "interactions" might be more frequent, but interrupt problems would be the same frequency.

If you have a look at, for example, the causes and effects of priority inversion, you will see there is no discussion of speed.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

