Author Topic: FPGA Vs µC  (Read 20475 times)

Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
Re: FPGA Vs µC
« Reply #50 on: October 24, 2015, 04:22:08 pm »
Interrupts in general can be miserable to debug.

That's when a (non-intrusive) true ICE plus a hardware tracer (with 20 ns resolution) makes the difference  :D
I have been designing one for my soft core for a year; it's not completed yet, but … it has already made me happy about interrupts
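
Lacking such hardware, a rough software stand-in is a ring buffer that every ISR writes a timestamped event into, to be read out later over a debug link. A minimal sketch in C (all names hypothetical; it assumes non-nesting ISRs):

Code: [Select]
#include <stdint.h>

#define TRACE_SIZE 256u                      /* power of two for cheap wrap */

typedef struct { uint32_t t; uint8_t id; } trace_entry;

static volatile trace_entry trace_buf[TRACE_SIZE];
static volatile uint32_t    trace_head;

extern uint32_t read_cycle_counter(void);    /* any free-running hw counter */

/* Call from any ISR: log an event id with a timestamp, overwriting the
   oldest entry. Not atomic - protect it if your interrupts can nest. */
static inline void trace_event(uint8_t id)
{
    uint32_t i = trace_head++ & (TRACE_SIZE - 1u);
    trace_buf[i].t  = read_cycle_counter();
    trace_buf[i].id = id;
}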
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: FPGA Vs µC
« Reply #51 on: October 24, 2015, 06:22:53 pm »
- especially if there is a different o/s running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
Ouch!
You mean people actually do that with a shared cache?
That is right up there with writing your own switch to protected mode on x86 - double fault city...

Now I am a hardware guy by inclination, but that scares me.

Regards, Dan.
 

Offline hamdi.tn (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #52 on: October 24, 2015, 07:11:39 pm »
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler; you just raise a flag or something and let the main loop deal with the bulk of it.
NO. A really big NO! The best way is to analyse how much time is needed for each task and how quickly each interrupt needs to be serviced. From there you can determine how much time can be spent inside every interrupt and whether you are going to need an OS. IMHO the best way to see an interrupt controller is as a time-slicing OS in hardware where each interrupt handler is a background task. Using flags to signal the main thread to do something often makes the main thread itself timing-critical. You'll also need to transfer data between two asynchronous tasks, which adds overhead and additional complexity.

For example: in signal processing applications it is better to do the entire processing in the ADC interrupt. The ADC interrupts are so frequent that other interrupts (from a UART, for example) still have enough chance of getting serviced in time. In some of my microcontroller applications I have the controller spend over 90% of its time in interrupts.

+1

In most applications I have done recently, the ADC, USART and SPI are handled by interrupts and DMA. My main loop usually just grabs the data it needs and does its thing; 99% of the time it's only the ADC interrupt doing any work.

In some old applications I wrote on PIC, the interrupts do 100% of the job - an ADC and one timer, with absolutely no code in the main loop.
From my tests, when the main loop hangs, code triggered by interrupts still runs when it should, even with the main loop stuck.
Once I had the I2C bus hang (in the main loop) while the ADC kept running on its interrupt. In my case that was not much use, but it can be useful, for example, to have a timer interrupt that checks the I2C bus and un-hangs it; the timer interrupt has a much better chance of being executed.
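
That timer rescue could be sketched roughly like this in C (all names are made up, not from a real project; i2c_busy is set before a transfer starts and cleared when it completes):

Code: [Select]
#include <stdint.h>

#define I2C_TIMEOUT_TICKS 100u        /* e.g. 100 ms at a 1 ms timer tick */

volatile uint8_t  i2c_busy;           /* set/cleared by the I2C code      */
volatile uint16_t i2c_busy_ticks;     /* how long the bus has been busy   */

extern void i2c_bus_reset(void);      /* toggle SCL / reinit the peripheral */

void timer_isr(void)                  /* keeps running even if main hangs */
{
    if (i2c_busy) {
        if (++i2c_busy_ticks >= I2C_TIMEOUT_TICKS) {
            i2c_bus_reset();          /* recover the stuck bus            */
            i2c_busy       = 0;
            i2c_busy_ticks = 0;
        }
    } else {
        i2c_busy_ticks = 0;
    }
}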
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19280
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #53 on: October 24, 2015, 07:34:30 pm »
- especially if there is a different o/s running on each core. There are certain things which really ought to provoke the "run away as fast as possible" reaction!
Ouch!
You mean people actually do that with a shared cache?
That is right up there with writing your own switch to protected mode on x86 - double fault city...

Now I am a hardware guy by inclination, but that scares me.

Regards, Dan.

Look up the Xilinx Zynq: FPGA + dual-core ARM Cortex-A9 with internal memory and external memory; it can run an RTOS on one core and Linux on the other. Main memory is, of course, only one level in the hierarchy from registers, through cache, to disk and the cloud.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline BravoV

  • Super Contributor
  • ***
  • Posts: 7547
  • Country: 00
  • +++ ATH1
Re: FPGA Vs µC
« Reply #54 on: October 24, 2015, 07:48:15 pm »
I made a new thread -> HERE, solely to discuss the pros & cons of using interrupts, as the OP and topic here are FPGA vs µC, cmiiw.

I hope you experts and experienced fellows don't mind jumping in there to share some thoughts.


Offline janoc

  • Super Contributor
  • ***
  • Posts: 3781
  • Country: de
Re: FPGA Vs µC
« Reply #55 on: October 24, 2015, 08:07:08 pm »
Both - or none.

It depends upon how well the design is verified, not how it is implemented. A µC programmed in Ada by a trained person is very reliable (military-grade). A µC programmed in assembler is not reliable. C99 or even C++ is better, provided that you have a skilled developer (it's virtually impossible to find one for C++), but still not as good as Ada.

A Verilog or VHDL FPGA design is somewhere between C99 and Ada. It is quite easy to verify. On the other hand, it would probably be larger, which makes it harder to maintain (and increases the probability of verifying it incorrectly).

Do not trust tests. Only a formal proof guarantees that your product works well. A formal proof is easily doable in Ada, VHDL or Verilog, a bit harder in C99 and good C++11 code. It is hardly possible in poorly-written C++ or C89 code.

May we get an example of such a proof in those languages? Formal proof isn't something people talk about often, so most of us are not familiar with it.

Also, if we want to program in Ada, which MCU development environments allow Ada programming? (I am not talking about compiling my own GCC toolchain with Ada enabled.)

You don't need to program in Ada to be able to prove program correctness - i.e. that the program's execution stops (it doesn't run in an endless loop forever, never returning an answer) and that the answer returned is actually correct. Both of these have to be true for a program to be considered totally correct; otherwise it is only partially correct. Special cases, such as MCU code running forever, are a trivial extension - each iteration of the main loop has to be correct.
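
A two-line illustration of the difference (my example, not from the quoted post): the C loop below is partially correct with respect to the postcondition "x == 0" - if it stops, x is 0 - but it is not totally correct, because for odd or negative x it never stops:

Code: [Select]
/* Partially correct w.r.t. "x == 0 on exit": IF the loop terminates,
   x is 0. Not totally correct: it loops forever for odd/negative x. */
void drain(int x)
{
    while (x != 0)
        x -= 2;
}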

You can even formally verify assembler if you want (so no, there is nothing inherently "unreliable" about assembler - let's not talk about long-term maintenance issues). Formal verification has nothing whatsoever to do with the choice of programming language; however, some languages do make things easier (or more difficult).

One example of a formal method that is still used and targets structured program schemes - i.e. languages similar to Pascal, C, Java, C++, etc. that use nested high-level control blocks (unlike e.g. Fortran, Basic or assembler) - is the Hoare method. From a very high-level point of view, the Hoare method is basically mathematical induction over the program's blocks: if I prove that all sub-blocks are correct and are correctly linked together using pre- and postconditions, then I can conclude that my current block is correct as well. You continue doing this until you arrive at the top-level block of your program (e.g. your main() function).

I am not going to do a proof here, that would be too long, but you can find an example of this here:
http://www.cs.cmu.edu/~aldrich/courses/654-sp07/slides/7-hoare.pdf
and here:
http://www.slidefinder.net/h/hoare_method_proving_correctness_programs/c21_hoare/15496132

Now, the Hoare method proves only partial correctness - that is, the result is going to be correct if the program stops. It does not guarantee that the program actually stops - that you have to prove separately.
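
To give a flavour of what the annotations look like, here is a tiny example of mine in C, with the Hoare-style assertions written as comments: the invariant carries the induction through the loop, and the decreasing variant (n - i) is the separate termination argument:

Code: [Select]
/* Hoare-style annotation of a trivial summing loop (illustrative only). */
int sum_upto(int n)                 /* Precondition:  n >= 0              */
{
    int s = 0, i = 0;               /* { s == 0 && i == 0 }               */
    while (i < n) {                 /* Invariant: s == 0+1+...+(i-1)      */
        s += i;                     /*            && 0 <= i <= n          */
        i += 1;                     /* Variant: n - i decreases, stays >= 0 */
    }
                                    /* { i == n && s == 0+1+...+(n-1) }   */
    return s;                       /* Postcondition: result == n(n-1)/2  */
}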

Now, if you want to prove the correctness of something like assembler code or Basic/Fortran, you can use Floyd's method instead. A quick introduction is here (yes, the techniques are THAT old - Floyd 1967, Hoare 1969 ...):
https://users.cs.duke.edu/~raw/cps206/ProgramCorrectness.htm

It is more laborious, and you have to choose good invariants, otherwise your proof will not be good for much.

Both of these methods are greatly helped if your programming language has some special features. E.g. if you are doing design by contract, as in Ada, you are going to have a much easier time proving your program correct, because you can rely on that language feature.

Another thing is side effects - if your functions are free of side effects (aka pure - their result depends only on their arguments, and they do not modify anything else), then the proof is going to be much easier. Complicated side effects, where the behaviour of a function can depend on things outside of it (and thus not covered by pre/postconditions or invariants), can make the proof very difficult or impossible. This is one reason why functional programming is so popular among theoretical computer scientists - functional programming deals only with pure functions, so the problem of side effects is eliminated. That has some practical consequences, because a real program needs at least some side effects - such as I/O - and there are complex theoretical frameworks dealing with these (e.g. monads in Haskell).
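
For example (mine, not from the original post): the first function below is pure, so a proof only has to talk about its arguments; the second depends on and mutates a global, so any correctness argument has to drag that hidden state along:

Code: [Select]
/* Pure: the result depends only on the arguments, nothing is modified. */
static int scale(int x, int gain) { return x * gain; }

/* Impure: behaviour depends on (and changes) state outside the function,
   so pre/postconditions on the arguments alone no longer describe it.   */
static int total;
static int scale_and_accumulate(int x, int gain)
{
    total += x * gain;            /* hidden side effect                  */
    return total;                 /* result depends on the call history  */
}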

A final remark - FYI, neither the Space Shuttle nor the Apollo guidance computer code was formally verified (there were no computers powerful enough to do the verification in a reasonable time). However, they did employ some very good software engineering practices. So that could be worth more than trying to go into formal verification, which is extremely hard for anything but trivial programs.
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3781
  • Country: de
Re: FPGA Vs µC
« Reply #56 on: October 24, 2015, 08:13:54 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = 2 parallel asynchronous tasks).

I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).

 

Offline hamdi.tn (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #57 on: October 24, 2015, 08:29:42 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = 2 parallel asynchronous tasks).

I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).



They certainly do, but how safely is the task being shared? I am sure that if they need safe execution, both (or however many) MCUs they use need to share the input data and take similar action on the hardware. So basically it's the same process running on different chips. Old boiler control units (maybe they still exist) used that for flame ignition; now they use one MCU, but the code should be written with a Class B library.
I'm sure that in cars it's a main MCU that handles most of the critical things and a lot of auxiliary MCUs doing all sorts of other stuff, but I never worked on such a system so I can't be sure about that.

Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache
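
One common shape for that cross-check (a sketch of mine, not a proven design): each MCU periodically sends its current state ID over the link and drops to a safe state if the peer disagrees or goes quiet. All names are hypothetical; link_send()/link_recv() stand for whatever bus you have (UART, CAN, ...):

Code: [Select]
#include <stdint.h>

extern volatile uint8_t my_state;            /* this MCU's FSM state     */
extern int  link_recv(uint8_t *peer_state);  /* returns 1 if frame seen  */
extern void link_send(uint8_t state);
extern void enter_safe_state(void);

void sync_check_tick(void)                   /* call from a periodic timer */
{
    static uint16_t silent_ticks;
    uint8_t peer;

    link_send(my_state);
    if (link_recv(&peer)) {
        silent_ticks = 0;
        if (peer != my_state)                /* state machines diverged  */
            enter_safe_state();
    } else if (++silent_ticks > 50u) {       /* peer went quiet          */
        enter_safe_state();
    }
}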
« Last Edit: October 24, 2015, 08:40:52 pm by hamdi.tn »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26755
  • Country: nl
    • NCT Developments
Re: FPGA Vs µC
« Reply #58 on: October 24, 2015, 08:34:19 pm »
That is a recipe for disaster! Instead of one microcontroller which can lock up, you suddenly have 2 microcontrollers which can lock up, not to mention the asynchronous communication between them (2 microcontrollers = 2 parallel asynchronous tasks).
I guess you aren't driving a car or traveling by plane these days. At a certain task complexity it is pretty much inevitable that the system becomes distributed and you have some sort of a bus between the individual pieces (e.g. CAN in most cars).
I never said it couldn't be done, but there is much more involved than just slapping two microcontrollers on a board! Actually, if the microcontrollers sit on one board, there needs to be a very good reason not to use a single microcontroller.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Tabs

  • Regular Contributor
  • *
  • Posts: 106
  • Country: gb
Re: FPGA Vs µC
« Reply #59 on: October 24, 2015, 08:47:43 pm »
Both have their uses in safety-critical applications. Your selection really depends on the complexity of the task. If it's a simple application that can be done with an MCU, then KISS. The real difficulty is in the burden of proof/verification required to meet your safety standard.

You didn't say which standard or industry you are trying to develop for?
I left the avionics industry 2 years ago, and the last project I worked on was to consider the use of multicore CPUs for DAL A applications.

We had to prove that all cores were segregated (each with separate memory management units, cache levels etc.). Where we couldn't segregate, we had to disable:
i.e. the L3 cache had to be disabled. The OSes and associated kernels on each core had to sit on top of a hypervisor which controlled access to the hardware. You're not allowed to use any form of code optimization that would result in a loss of deterministic code execution (so no branch prediction, no out-of-order execution).
Verifying all this presents a massive problem for h/w, and there's no way to do it without the manufacturer of each device helping you. You wouldn't be able to use Zynqs (or anything with an ARM core), or Intel parts (proprietary h/w in the chip, details of which are never made public).
As far as I was aware, the use of multicore processors was so new that there weren't any guidelines on it.
EASA actually commissioned a consortium of avionics developers (of which my employer, but not me, was part) to investigate the use of multicore CPUs and the Kintex FPGA. Link below:

http://easa.europa.eu/system/files/dfu/CCC_12_006898-REV07%20-%20MULCORS%20Final%20Report.pdf

It gives you a good idea of some of the things you have to look into for H/W and S/W. I remember they announced the purchase of OS provider SysGO for 20m because it was cheaper than developing one in-house and verifying it.
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4099
  • Country: us
Re: FPGA Vs µC
« Reply #60 on: October 24, 2015, 08:49:14 pm »
Quote
Quote
Now, as I think someone already mentioned, what I consider to be a (very) good practice is that you do the least possible inside an interrupt handler; you just raise a flag or something and let the main loop deal with the bulk of it.

NO. A really big NO!
If you have to do an ADC reading at the exact time of the interrupt, then of course you will do it in the ISR. Or the triggering event may precede the exact time you need to run the ADC... in which case you may use a timer interrupt to get the reading at the exact time you need, rather than wait in the ISR. The idea that was conveyed was to spend as little time in the ISR as possible, in general. And, in general, that is a big YES, IMO.

I think of ISRs in terms of processor bandwidth. How much bandwidth does each of your ISR routines take up under various conditions? The example of the doorbell is a good one, because it shows how an interrupt can take up an unexpected amount of bandwidth. Timer interrupts, however, are extremely easy to calculate and/or observe on an oscope. Easier than managing the same in a code loop, which changes as you add/edit code.

Prioritized interrupts are just another tool. It is just as easy to prioritize interrupts in software, just not necessarily as instantaneously. You can poll for a higher-priority interrupt flag within an ISR and call the higher-priority service routine from within the lower-priority one. Because you have control over where that call occurs, you have more control over stack management, so the high-priority interrupt can't land at the bottom of a 10-deep lower-priority ISR sub-sub-subroutine... at the cost of not being truly instantaneous. Even when using a device with prioritized interrupt levels, I have yet to find a good reason to use them. I have more control in software.
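
Roughly like this (a sketch with made-up flag and routine names, PIC-style with a single hardware interrupt entry point):

Code: [Select]
volatile unsigned char adc_flag;      /* set by hardware on ADC done   */
volatile unsigned char uart_flag;     /* set by hardware on UART RX    */

static void service_adc(void)         /* "high priority" work          */
{
    /* grab the sample, etc. */
    adc_flag = 0;
}

static void service_uart(void)        /* "low priority" work           */
{
    /* ... part 1 of the slower UART handling ... */
    if (adc_flag)                     /* safe point we chose ourselves: */
        service_adc();                /* stack depth here is known      */
    /* ... part 2 ... */
    uart_flag = 0;
}

void interrupt_dispatch(void)         /* the single hardware ISR       */
{
    if (adc_flag)
        service_adc();                /* high priority checked first   */
    else if (uart_flag)
        service_uart();
}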
« Last Edit: October 24, 2015, 09:43:29 pm by KL27x »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19280
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #61 on: October 24, 2015, 08:54:57 pm »
Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache

Consider "high availability" systems such as clustered telecoms controllers. Here there is a single cluster consisting of multiple machines with shared state. There is one primary master and a secondary master that takes over when the primary master dies. Now consider a network fault in which the primary and secondary masters become split so they both think they are the primary master -and then the network fault is removed. This is colloquially called the "split brain problem", and it does not have a clean solution.

Similar problems occur in a token ring network that becomes partitioned into segments, each with its own token. When they are reconnected, there is the "difficult" issue of which token should be dropped.
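
The usual (partial) mitigation is a quorum rule: a node may act as master only while it can see a strict majority of the cluster, so with an odd node count the two halves of a split can never both qualify. A sketch (names mine):

Code: [Select]
#define N_NODES 5                  /* keep it odd                       */

extern int nodes_visible(void);    /* reachable peers, incl. ourselves  */

/* At most one partition can hold a strict majority, so at most one
   side of a split keeps acting as master. The price is availability:
   a minority partition has no master at all.                          */
int may_act_as_master(void)
{
    return nodes_visible() > N_NODES / 2;
}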
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline hamdi.tn (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #62 on: October 24, 2015, 10:40:28 pm »
Edit: MCUs sharing a critical process must always sync their state machines.
It could be fun to do, but since most of the time they also have to do different tasks alongside the main 'secure' task, pretending to be totally sure about how synchronized they are means syncing every line of code ... what a headache

Consider "high availability" systems such as clustered telecoms controllers. Here there is a single cluster consisting of multiple machines with shared state. There is one primary master and a secondary master that takes over when the primary master dies. Now consider a network fault in which the primary and secondary masters become split so they both think they are the primary master -and then the network fault is removed. This is colloquially called the "split brain problem", and it does not have a clean solution.

Similar problems occur in a token ring network that becomes partitioned into segments, each with its own token. When they are reconnected, there is the "difficult" issue of which token should be dropped.


I will have a "split brain problem" once I figure out how to make everything in the software safe and can prove it  :-DD
Well, that's another problem to consider ...
I think, starting from this thread, we ended up:
- trying to define what a safe system is and how we can prove it is.
- trying to figure out if straightforward polling software is safer than a main loop with a bunch of background tasks triggered by interrupts.
- trying to figure out if multiple processors are better than a single one.

No, I think I already have a split brain problem now xD
 

Offline hamdi.tn (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #63 on: October 24, 2015, 10:42:24 pm »
Both have their uses in safety-critical applications. Your selection really depends on the complexity of the task. If it's a simple application that can be done with an MCU, then KISS. The real difficulty is in the burden of proof/verification required to meet your safety standard.

You didn't say which standard or industry you are trying to develop for?
I left the avionics industry 2 years ago, and the last project I worked on was to consider the use of multicore CPUs for DAL A applications.

We had to prove that all cores were segregated (each with separate memory management units, cache levels etc.). Where we couldn't segregate, we had to disable:
i.e. the L3 cache had to be disabled. The OSes and associated kernels on each core had to sit on top of a hypervisor which controlled access to the hardware. You're not allowed to use any form of code optimization that would result in a loss of deterministic code execution (so no branch prediction, no out-of-order execution).
Verifying all this presents a massive problem for h/w, and there's no way to do it without the manufacturer of each device helping you. You wouldn't be able to use Zynqs (or anything with an ARM core), or Intel parts (proprietary h/w in the chip, details of which are never made public).
As far as I was aware, the use of multicore processors was so new that there weren't any guidelines on it.
EASA actually commissioned a consortium of avionics developers (of which my employer, but not me, was part) to investigate the use of multicore CPUs and the Kintex FPGA. Link below:

http://easa.europa.eu/system/files/dfu/CCC_12_006898-REV07%20-%20MULCORS%20Final%20Report.pdf

It gives you a good idea of some of the things you have to look into for H/W and S/W. I remember they announced the purchase of OS provider SysGO for 20m because it was cheaper than developing one in-house and verifying it.

Well, that's an interesting doc that I will certainly take the time to look at. Thanks for sharing   :-+
 

Offline MT

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: aq
Re: FPGA Vs µC
« Reply #64 on: October 25, 2015, 01:05:01 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.

Let's say you had, today, a single MCU that runs at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?
 

Offline hamdi.tn (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Re: FPGA Vs µC
« Reply #65 on: October 25, 2015, 08:28:58 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.

Let's say you had, today, a single MCU that runs at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?


It's not a performance issue; we are talking about a reliability issue in multi-MCU designs. So whether it's 1K of memory or 1M, 8 MHz or 1 THz, you will theoretically face the same problems.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19280
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: FPGA Vs µC
« Reply #66 on: October 25, 2015, 08:57:16 am »
Quote from: hamdi.tn
- trying to figure out if multiple processors are better than a single one.

Let's say you had, today, a single MCU that runs at 1 THz, with 1 Mbyte of memory that also runs at 1 THz. Would you have any system-related problems left in your application?

You would get the same problems, only with less latency.

Whether they would be more frequent would depend on whether the problem was provoked by the external system. Thus multiprocessor cache "interactions" might be more frequent, but interrupt problems would occur at the same frequency.

If you have a look at, for example, the causes and effects of priority inversion, you will see there is no discussion of speed.
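
For reference, the classic scenario is purely about ordering (a schematic sketch with a hypothetical RTOS API, illustrative rather than runnable):

Code: [Select]
typedef struct mutex mutex_t;
extern void mutex_lock(mutex_t *m);
extern void mutex_unlock(mutex_t *m);
extern mutex_t shared;

void low_task(void)       /* lowest priority                           */
{
    mutex_lock(&shared);
    /* long critical section; medium_task preempts it right here ...   */
    mutex_unlock(&shared);
}

void medium_task(void)    /* middle priority, CPU-bound, never blocks  */
{
    for (;;) { /* number crunching */ }
}

void high_task(void)      /* highest priority                          */
{
    mutex_lock(&shared);  /* blocks: low_task holds the mutex but can't
                             run, so high_task effectively waits on
                             medium_task - the inversion. A faster CPU
                             changes nothing here; priority inheritance
                             on the mutex is the usual cure.            */
    mutex_unlock(&shared);
}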
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

