Author Topic: MCU with FPGA vs. SoC FPGA  (Read 25044 times)


Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
MCU with FPGA vs. SoC FPGA
« on: July 06, 2023, 12:29:48 am »
Hi all,

I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (i.e. 100Hz). I've achieved something similar using the MCU with high priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function make this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.

As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendation. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.
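
For the sake of discussion, the sort of per-channel record I picture the FPGA latching and presenting would look something like this (field names and widths are purely illustrative, not a settled register map):

Code: [Select]
#include <stdint.h>

/* Hypothetical per-channel result the FPGA would latch at the reporting rate.
 * Field names and widths are illustrative only, not a settled register map. */
typedef struct __attribute__((packed)) {
    uint8_t  channel;        /* input index, 0..N-1                       */
    uint8_t  flags;          /* e.g. overflow / no-edge-seen indicators   */
    uint16_t duty_permille;  /* duty cycle in 0.1 % steps                 */
    uint32_t period_ticks;   /* last measured period, in FPGA clock ticks */
    uint64_t timestamp;      /* free-running FPGA counter at latch time   */
} capture_record_t;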

With the above said, I'm also mindful of future applications, whereby I might want to use the FPGA for additional hard processing requirements (e.g. high throughput ADC). In this scenario, the MCU-FPGA interface may become a limitation/bottleneck for the throughput of the system, and I may actually be better off committing to an SoC FPGA from the outset, where the bulk of the sensor processing is managed in Programmable Logic, with post-processing, storage, transmission, etc. handled by the processor, making use of the high throughput interface/fabric between the two.

Any thoughts, opinions or recommendations very much appreciated!
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4555
  • Country: au
    • send complaints here
Re: MCU with FPGA vs. SoC FPGA
« Reply #1 on: July 06, 2023, 02:19:23 am »
There are two general reasons to go for an SoC:
  • you are moving "fast" data: low latency and/or high throughput, say tens of MB/s and up, or
  • there is an SoC which is cheaper than buying a suitable FPGA + processor.

Don't discount the option of using cheap microcontrollers as smart peripherals in parallel; plenty of systems sit in the sort of space you describe, yet there is no go-to solution for them.
 

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #2 on: July 06, 2023, 02:23:03 am »
Thanks for your input, it's greatly appreciated.

The option of having additional 'front-end' MCUs feeding into the larger, more capable MCU is actually quite an interesting one.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #3 on: July 06, 2023, 04:00:35 am »
Quite a few application-processor SoCs are designed that way - for example, pretty much the entire i.MX family of SoCs from NXP contains both "fast" cores for running Linux and applications, and small MCU cores for real-time tasks.
Though there is an opinion out there - and I'm sure we're about to hear it voiced here in short order - that having more than one programmable element in a circuit is a bad idea. I personally disagree with such a blanket statement (as with pretty much all blanket statements), but I generally try to consolidate programmable elements into a single IC if I can help it. For example, a lot of FPGA designs tend to require a small MCU for orchestrating the main processing pipeline as well as for auxiliary tasks like handling user input, so it's quite common to include an MCU softcore in the FPGA design to avoid having to design external interconnects and to program two devices separately.

Online Berni

  • Super Contributor
  • ***
  • Posts: 4984
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #4 on: July 06, 2023, 06:04:18 am »
Yep you are spot on.

SPI is one of the best simple interfaces between an MCU and an FPGA, since it does not need many wires and an SPI slave inside the FPGA is mostly just a glorified shift register. It's not super fast, but still plenty fast enough for applications where you don't have to move large buffers' worth of data across constantly. Most MCUs will do 10 to 50 Mbit/s over SPI. An easy way to boost that is indeed Quad or Octal SPI, getting you into the tens of MB/s.
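
To show how little the MCU side needs to do, here is a rough C sketch of polling such an FPGA SPI slave. The command byte, record size and the spi_txrx() wrapper are made up for the example; substitute whatever your HAL provides:

Code: [Select]
#include <stdint.h>
#include <string.h>

#define NUM_CHANNELS  12
#define RECORD_BYTES  16        /* bytes per channel; must match the FPGA layout */
#define CMD_READ_ALL  0xA5      /* hypothetical "dump all channels" command      */

/* Platform-specific full-duplex transfer, assumed to be provided elsewhere. */
extern void spi_txrx(const uint8_t *tx, uint8_t *rx, uint32_t len);

/* Fetch one snapshot of all channels from the FPGA in a single transaction. */
static void read_snapshot(uint8_t out[NUM_CHANNELS * RECORD_BYTES])
{
    uint8_t tx[1 + NUM_CHANNELS * RECORD_BYTES] = { CMD_READ_ALL };
    uint8_t rx[sizeof tx];

    spi_txrx(tx, rx, sizeof tx);                        /* clock cmd out, data in */
    memcpy(out, &rx[1], NUM_CHANNELS * RECORD_BYTES);   /* drop the command byte  */
}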

Going faster indeed involves an external memory bus. A lot of MCUs have one, and it takes very little logic in the FPGA to implement. It does need a LOT of pins for all the data and address lines, but in return you get >100 MB/s and, most importantly, the ability to simply DMA data in and out of the FPGA just as if it were a peripheral inside the MCU. The MCU likely can't handle much more speed than this anyway, so it is as fast as you would want to go.
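
From the MCU side that ends up looking like plain memory accesses once the external memory controller is configured; a sketch with a made-up base address and register map:

Code: [Select]
#include <stdint.h>

/* Hypothetical window the external memory controller maps the FPGA into. */
#define FPGA_BASE     0x60000000UL
#define REG_SNAPSHOT  0x00u   /* write 1: latch all channels atomically */
#define REG_RESULTS   0x10u   /* start of the latched result words      */

static inline volatile uint32_t *fpga_reg(uint32_t offset)
{
    return (volatile uint32_t *)(FPGA_BASE + offset);
}

static uint32_t read_channel_word(unsigned ch)
{
    *fpga_reg(REG_SNAPSHOT) = 1u;               /* freeze a consistent snapshot  */
    return *fpga_reg(REG_RESULTS + 4u * ch);    /* plain bus read, also DMA-able */
}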

The next step up is an ARM+FPGA SoC. Those also have a memory bus between the ARM and FPGA sides, but the difference is that the bus is humongous (since both live on the same silicon die, it is easy to connect many signals across); here we are talking speeds into the 10 GB/s range. This is useful for implementing hardware-accelerated coprocessors in the FPGA.

For just pulse counting I'd say SPI is plenty fast enough. But using tiny MCUs to do it is not a bad idea either: tiny MCUs are cheap (FPGAs get expensive real fast) and they need very little firmware if the only thing they do is wait for a pulse, measure it, and send out a number.
 

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #5 on: July 06, 2023, 09:34:22 am »
Thanks all, much appreciated as always.

I've just had a bit of a look around at potential MCUs, and it looks like there are some excellent options if I were to add a small MCU for this purpose, and then interface with the primary MCU as suggested above. For example, the STM32H5 offers a 250MHz clock which would be great from a timestamping perspective (assuming the TIM1/2/3/etc. timers run at full clock speed, I haven't checked), and exposes 12 capture/compare channels (across 3 Timer Counter modules) in a 7x7mm LQFP48 package (in addition to the SPI, etc. peripherals required for interfacing).
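
Roughly what I have in mind per capture channel, as a sketch using the STM32 HAL (assuming a CubeMX-generated project with htim2 already configured for rising-edge capture on channel 1; I haven't verified the details on the H5 specifically):

Code: [Select]
#include "stm32h5xx_hal.h"   /* assumes an STM32CubeH5 / CubeMX project */

extern TIM_HandleTypeDef htim2;   /* configured for input capture on CH1 */

static volatile uint32_t last_rise;
static volatile uint32_t period_ticks;

/* HAL weak callback, invoked on every capture event for this timer. */
void HAL_TIM_IC_CaptureCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM2 && htim->Channel == HAL_TIM_ACTIVE_CHANNEL_1) {
        uint32_t now = HAL_TIM_ReadCapturedValue(htim, TIM_CHANNEL_1);
        period_ticks = now - last_rise;  /* unsigned wrap handles 32-bit timer overflow */
        last_rise    = now;
    }
}

void capture_start(void)
{
    HAL_TIM_IC_Start_IT(&htim2, TIM_CHANNEL_1);  /* rising-edge capture per the init */
}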
 

Online iMo

  • Super Contributor
  • ***
  • Posts: 4817
  • Country: pm
Re: MCU with FPGA vs. SoC FPGA
« Reply #6 on: July 06, 2023, 10:01:57 am »
I tried a couple of times with MCU+CPLD/FPGA, as well as with an embedded MCU (Forth J1a, RISC-V, MicroBlaze) inside an FPGA.
I would say a combination of a standard MCU (i.e. something like the STM32) and an FPGA is the easiest way to start.
In the past I did it with the smallest "FPGA" - the Lattice iCE40LP384 with only 384 cells (free dev tools under Win) - and the Blue Pill (stm32duino with the STM32F103, under Win), and the development was pretty easy.
Putting the MCU inside the FPGA adds complexity to development (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can easily find in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
« Last Edit: July 06, 2023, 10:11:40 am by iMo »
I got to the very edge of the abyss, but since then I have already taken a step forward..
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #7 on: July 06, 2023, 02:31:28 pm »
Putting the MCU inside the FPGA adds complexity to development (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can easily find in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
It depends on the device. Xilinx, for example, offers an MCU softcore with a full set of peripheral IPs for free, and includes an integrated toolchain & IDE with live debugging for software development, which makes firmware development a breeze with zero HDL knowledge required (unless you want/need to implement some custom peripherals). It's actually much easier than using separate FPGA and MCU toolchains, because here it's all unified in a single package.
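To give a taste of the bare-metal side with the stock IPs, reading a bank of pins through the AXI GPIO driver is only a few lines. A sketch; the exact XPAR_GPIO_0_DEVICE_ID macro name depends on how the instance is labelled in your block design and generated xparameters.h:

Code: [Select]
#include "xparameters.h"
#include "xgpio.h"
#include "xil_printf.h"

/* The device ID macro comes from the generated xparameters.h; its exact name
 * depends on how the AXI GPIO instance is labelled in the block design. */
#define INPUTS_GPIO_ID  XPAR_GPIO_0_DEVICE_ID

int main(void)
{
    XGpio inputs;

    XGpio_Initialize(&inputs, INPUTS_GPIO_ID);
    XGpio_SetDataDirection(&inputs, 1, 0xFFFFFFFFu);  /* channel 1: all inputs */

    while (1) {
        u32 pins = XGpio_DiscreteRead(&inputs, 1);    /* read every pin at once */
        xil_printf("pins = 0x%08x\r\n", pins);
    }

    return 0;
}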
« Last Edit: July 06, 2023, 10:01:14 pm by asmi »
 
The following users thanked this post: iMo

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #8 on: July 06, 2023, 09:43:18 pm »
Putting the MCU inside the FPGA adds complexity to development (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can easily find in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
It depends on the device. Xilinx, for example, offers an MCU softcore with a full set of peripheral IPs for free, and includes an integrated toolchain & IDE with live debugging for software development, which makes firmware development a breeze with zero HDL knowledge required (unless you want/need to implement some custom peripherals). It's actually much easier than using separate FPGA and MCU toolchains, because here it's all unified in a single package.

Thanks for clarifying. I'm not at all knowledgeable about the MicroBlaze core, but that certainly does look like an ideal (and extensible/scalable) solution. Just to confirm, is Vitis the software development platform you mention? I assume I can design both hardware and software in Vitis, or would I need to use Vivado for the hardware and Vitis for MicroBlaze?
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #9 on: July 06, 2023, 10:01:02 pm »
Thanks for clarifying. I'm not at all knowledgeable about the MicroBlaze core, but that certainly does look like an ideal (and extensible/scalable) solution. Just to confirm, is Vitis the software development platform you mention? I assume I can design both hardware and software in Vitis, or would I need to use Vivado for the hardware and Vitis for MicroBlaze?
Vivado and Vitis are shipped as a single package; you can install both at the same time. Vitis is what used to be called the SDK - the IDE for software development - while Vivado is used for the hardware side of things. Once you design the hardware, you can import the platform specification into Vitis and it will generate a BSP for it, including drivers for whatever IPs you've included in the hardware; it will also import things like base addresses for your peripherals and memory locations, so you can get on with software development pretty much immediately. That same platform specification file can also be used in the PetaLinux package to create a bootable Linux image, but there are certain restrictions such a system needs to meet hardware-wise to actually be able to run Linux. I never actually used Linux on MicroBlaze in any projects beyond just playing around with it, but it's there in case you need it.

Online iMo

  • Super Contributor
  • ***
  • Posts: 4817
  • Country: pm
Re: MCU with FPGA vs. SoC FPGA
« Reply #10 on: July 06, 2023, 10:22:41 pm »
I have PetaLinux running on the ML605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7) and it took me a month of messing with all the missing information I needed to build a running version with some simple C code..  :D
« Last Edit: July 06, 2023, 10:31:42 pm by iMo »
I got to the very edge of the abyss, but since then I have already taken a step forward..
 

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #11 on: July 06, 2023, 10:31:19 pm »
That's really interesting; I can only imagine the complexity at play within the MicroBlaze IP, all the peripheral blocks, etc. I would likely look to use FreeRTOS on MicroBlaze (which appears to be supported, from a cursory look); I'm not sure I'd need a full Linux OS at this stage.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1596
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #12 on: July 06, 2023, 11:16:01 pm »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (i.e. 100Hz). I've achieved something similar using the MCU with high priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function make this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges per second?
You mention 100 Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.



As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendation. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.
That will depend on how much data needs to move. SPI will always be simplest, until it can no longer shift enough data.

I would look first for an MCU that can manage the captures, as the capture ability on MCUs is improving quite rapidly.

Even the PIO on the Pi Pico might be good enough?

Addit: The examples I found for PIO suggest it can only wait on a single pin.
The Parallax P2 P2X8C4M64P costs more (TQFP100), but it can wait on pin-pattern events at 180~250 MHz sysclks.

Addit 2:
If I'm reading the Pico PIO info correctly, I think it can wide-wait-poll to a granularity of PCLK/2.5, capture time to PCLK/2, and capture pulses as narrow as a bit under 100 ns.
Any number of pins can change at the same time.
Using 2 of the 8 state machines, two FIFOs export a 32-bit dT timestamp and a 32-bit NewPins GPIO pattern, so it can handle 20+ pins.
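
The C side with the pico-sdk would then be little more than draining the two RX FIFOs; a sketch that assumes the PIO program (not shown) is already loaded and running on state machines 0 and 1:

Code: [Select]
#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/pio.h"

/* Assumes a PIO program (not shown) is already loaded and running: state
 * machine 0 of pio0 pushes a 32-bit delta-time word and state machine 1
 * pushes the matching 32-bit GPIO pattern for every pin change. */
int main(void)
{
    stdio_init_all();

    for (;;) {
        uint32_t dt   = pio_sm_get_blocking(pio0, 0);   /* blocks on RX FIFO */
        uint32_t pins = pio_sm_get_blocking(pio0, 1);
        printf("dt=%lu pins=%08lx\n", (unsigned long)dt, (unsigned long)pins);
    }
}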
« Last Edit: July 07, 2023, 06:40:26 am by PCB.Wiz »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #13 on: July 06, 2023, 11:59:48 pm »
I have PetaLinux running on the ML605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7) and it took me a month of messing with all the missing information I needed to build a running version with some simple C code..  :D
With Vivado/Vitis it now takes about 10-20 minutes (depending on your workstation's horsepower) to get a basic MicroBlaze system (DDR3, QSPI, UART, GPIO, timer, I2C) up and running if you know what you are doing.

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #14 on: July 07, 2023, 12:01:36 am »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (i.e. 100Hz). I've achieved something similar using the MCU with high priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function make this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges per second?
You mention 100 Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all with 'relatively' high frequency (kHz+) having to be measured in ISRs within a single core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in servicing of the other MCU tasks (i.e. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).

As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendation. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.
That will depend on how much data needs to move. SPI will always be simplest, until it can no longer shift enough data.

I would look first for an MCU that can manage the captures, as the capture ability on MCUs is improving quite rapidly.

Even the PIO on the Pi Pico might be good enough?

Addit: The examples I found for PIO suggest it can only wait on a single pin.
The Parallax P2 P2X8C4M64P costs more (TQFP100), but it can wait on pin-pattern events at 180~250 MHz sysclks.

The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Not having used that platform, though, I find the assembly language a bit foreign!
« Last Edit: July 07, 2023, 12:04:04 am by jars121 »
 

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #15 on: July 07, 2023, 12:06:48 am »
I have PetaLinux running on the ML605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7) and it took me a month of messing with all the missing information I needed to build a running version with some simple C code..  :D
With Vivado/Vitis it now takes about 10-20 minutes (depending on your workstation's horsepower) to get a basic MicroBlaze system (DDR3, QSPI, UART, GPIO, timer, I2C) up and running if you know what you are doing.

I've spent a couple of hours this morning reading documentation and watching some demonstrations, and it does indeed look relatively straightforward. I think the next step would be to purchase a Spartan- or Artix-based board and implement a MicroBlaze solution. I'm still not clear on the process by which FPGA data (e.g. the input signal measurement data) is made available to the MCU, but I suppose that'll become clear when I start digging into the AXI platform.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #16 on: July 07, 2023, 12:57:20 am »
I think the next step would be to purchase a Spartan- or Artix-based board and implement a MicroBlaze solution.
You don't actually need any hardware to start playing around with building the hardware design, but if you want to run software, of course you will need a board. I typically recommend one of Digilent's boards, because they are one of the very few devboard vendors that actually provide support to their customers.

I'm still not clear on the process by which FPGA data (e.g. the input signal measurement data) is made available to the MCU, but I suppose that'll become clear when I start digging into the AXI platform.
Depending on the bandwidth required, there are several ways to go about it.
1. If you don't require a lot of bandwidth and you are OK with PIO (where the CPU reads data from the peripheral piece by piece), you can implement a simple memory-mapped AXI4-Lite peripheral with a few registers: one used to return the next piece of data from the peripheral (or write the next piece into it), plus status and command registers to control it, just like many standard peripherals do. The advantage of this approach is simplicity - it literally takes a couple of hours to get up and running in HDL; the disadvantage is that it can consume a lot of CPU time for such I/O operations (just like PIO in any "regular" MCU can eat up a lot of CPU cycles).
2. If you need high bandwidth, you can implement an AXI-Stream interface, which can be used with the AXI DMA IP provided by Xilinx to perform DMA operations without CPU involvement. You will still need an AXI4-Lite slave interface to control your peripheral (typically a pair of registers - a command register to issue commands and a status register to read back status information), but once the CPU configures the DMA and starts a transfer, it can go about its own business until it receives an interrupt informing it that the transfer is complete. Or the DMA can be configured as a circular buffer and run indefinitely, constantly refreshing the buffer and using interrupts to let the CPU know when a certain part of the buffer has been filled so it can do something with it.

If you have worked with any half-recent MCU, you will recognize both approaches, as they are widely used in just about all of them.
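For approach 1 the CPU side in a bare-metal Vitis project is just register reads/writes through the standard Xil_In32/Xil_Out32 helpers; a sketch, with the base address and register offsets standing in for whatever your generated xparameters.h and your own peripheral actually define:

Code: [Select]
#include "xil_io.h"        /* Xil_In32 / Xil_Out32 */
#include "xparameters.h"   /* generated addresses would normally come from here */

/* Hypothetical register map of a custom AXI4-Lite capture peripheral. */
#define CAPTURE_BASE  0x44A00000u  /* placeholder; use your XPAR_..._BASEADDR */
#define REG_CTRL      0x00u        /* write 1 to latch a snapshot             */
#define REG_STATUS    0x04u        /* bit 0 = snapshot ready                  */
#define REG_DATA      0x08u        /* read successive result words            */

u32 capture_read_word(void)
{
    Xil_Out32(CAPTURE_BASE + REG_CTRL, 1u);                  /* request snapshot */
    while ((Xil_In32(CAPTURE_BASE + REG_STATUS) & 1u) == 0u) /* poll until ready */
        ;
    return Xil_In32(CAPTURE_BASE + REG_DATA);
}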
« Last Edit: July 07, 2023, 12:58:55 am by asmi »
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1596
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #17 on: July 07, 2023, 01:09:43 am »
The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Not having used that platform, though, I find the assembly language a bit foreign!
You can avoid going as low as the ASM level in most cases, as there are compiled languages that will place small programs (like this will be) into a core.
You just look at the assembler listing created.
The smaller and cheaper P8X32A is good to 80~100 MHz wait granularity, which might be good enough?
With wait-pattern type coding there will be some minimum pulse width that can be captured, but the wait-exit is sysclk-granular.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1596
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #18 on: July 07, 2023, 01:18:12 am »

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all with 'relatively' high frequency (kHz+) having to be measured in ISRs within a single core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in servicing of the other MCU tasks (i.e. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
Yes, you would pick a part that can at least manage a HW capture function, so the timestamp is ensured; then you just need to read it before the next capture.
If you expect very narrow pulses, they can be handled by allocating an edge per capture, but that doubles the capture storage needed.
If you are quite unsure about how many inputs you'll have, and expect to jump from 12 to 20+ inputs, an MCU that can do wide pattern waits may be best.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27098
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #19 on: July 08, 2023, 09:27:43 pm »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (i.e. 100Hz). I've achieved something similar using the MCU with high priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function make this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges per second?
You mention 100 Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all with 'relatively' high frequency (kHz+) having to be measured in ISRs within a single core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in servicing of the other MCU tasks (i.e. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
IMHO this approach is just horribly wrong, in two ways:
A) Never use interrupts on inputs that can change at will, because they can and will lock up your application due to an unforeseen circumstance (which can be as simple as a loose wire or a nearby noise source).
B) Don't use interrupts that interfere with each other; find a common denominator and combine multiple interrupts into one. IOW: use a single timer interrupt in which you first sample the GPIO port (preferably with all the input pins on the same port) and then process the state of the inputs as needed. Keep in mind that once the GPIO input register (containing ALL input levels in a single read operation) has been read, there is no source of jitter that can be added to the signal, so processing time doesn't matter as long as you finish before the next timer interval. On a dual-core MCU running at several hundred MHz (NXP's RT1000 series, for example) you should be able to achieve timing resolution in the sub-microsecond region without much effort.

If interrupt jitter gives too much uncertainty, an option is to look for a controller that supports timer-triggered DMA transfers (one address to a double buffer), which also lets you process the captured data 'en bloc' without the interrupt overhead.
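
A generic sketch of that single-timer-interrupt scheme (register names and the handler name are placeholders; substitute your MCU's actual registers):

Code: [Select]
#include <stdint.h>

/* Placeholder names: substitute your MCU's GPIO input data register, a
 * free-running timer counter, and the real timer ISR name. All inputs are
 * assumed to sit on one port so a single read captures them together. */
extern volatile uint32_t GPIO_IN;       /* port input data register */
extern volatile uint32_t TIMER_COUNT;   /* free-running time base   */
extern void record_edges(uint32_t stamp, uint32_t pins, uint32_t edges);

static uint32_t prev_pins;

void SAMPLE_TIMER_IRQHandler(void)      /* fires at the fixed sample rate */
{
    uint32_t pins  = GPIO_IN;           /* one atomic snapshot of all inputs   */
    uint32_t stamp = TIMER_COUNT;       /* timestamp taken right after reading */
    uint32_t edges = pins ^ prev_pins;  /* which inputs changed this interval  */

    if (edges)
        record_edges(stamp, pins, edges);   /* processing can take its time    */

    prev_pins = pins;
    /* clear the timer's interrupt flag here, per the reference manual */
}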
« Last Edit: July 08, 2023, 10:34:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Infraviolet

  • Super Contributor
  • ***
  • Posts: 1035
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #20 on: July 10, 2023, 12:04:24 am »
"100Hz is just an example in this case..."
Have you done this for <<12 channels successfully at frequencies up to the maximum you'll ever need? If so, you could always have a bunch of microcontrollers in parallel, each handling just a few of the 12+ channels. Maybe add an extra interrupt on all of them, connected to a common line that a master microcontroller toggles up and down as a reference, so all the other MCUs can stay in sync. A sketch of that sync handling follows below.
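
On each of the parallel MCUs the sync handling could be as simple as this (all names here are placeholders, not any particular part's registers):

Code: [Select]
#include <stdint.h>

/* Placeholder names: each front-end MCU runs a local free-running counter
 * and latches it on the shared sync line driven by the master MCU. */
extern volatile uint32_t TIMER_COUNT;      /* local free-running time base */

static volatile uint32_t sync_latch;       /* local time of last sync edge */

void SYNC_PIN_IRQHandler(void)             /* edge interrupt on the sync line */
{
    sync_latch = TIMER_COUNT;
    /* clear the pin interrupt flag here, per the reference manual */
}

/* Express a locally captured timestamp relative to the last sync edge,
 * which makes captures from all the parallel MCUs directly comparable. */
uint32_t to_sync_relative(uint32_t local_capture)
{
    return local_capture - sync_latch;
}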
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19724
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #21 on: July 10, 2023, 10:00:23 am »
100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all with 'relatively' high frequency (kHz+) having to be measured in ISRs within a single core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in servicing of the other MCU tasks (i.e. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
...
The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Not having used that platform, though, I find the assembly language a bit foreign!

There is, IMNSHO, a better alternative if you are prepared to accept single-source ICs that you can buy at Digikey: the XMOS xCORE.

Overall my summary is that it slots into the area where traditional microprocessors are pushing it but where FPGAs are overkill. You get the benefits of fast software iteration and timings guaranteed by design, not measurement.

Why so good? The hardware and software are designed from the ground up for hard realtime operation:
  • each I/O port has its own (easily accessible) timer and FPGA-like simple/strobed/clocked/SERDES modes with pattern matching
  • dedicate one core (of 32) and task to each I/O or group of I/Os
  • the software typically looks like the obvious:
    • initialise
    • loop forever, waiting for an input, timeout or message, then instantly resume and do the processing

You mention ISRs as introducing jitter, but omit to mention caches. The xCORE devices have neither :) They do have up to 32 cores and 4000 MIPS per chip (expandable).

Each I/O port has its own timers, guaranteeing output on a specific clock cycle and measuring the specific clock cycle on which an input arrived.

The equivalent of an RTOS (comms, scheduling) is implemented in silicon.

There is a long and solid theoretical and practical pedigree for the hardware (Transputer 1980s) and software (CSP/Occam 1970s).

The development tools (command line and IDE) will inspect the optimised code to determine exactly how many clock cycles it will take to get from here to there. None of this "measure and hope you have spotted the worst case" rubbish!

When I used it, I found it stunningly easy, having the first iteration of code working within a day. There were no hardware or software surprises, no errata sheets.

For a remarkably information dense flyer on the basic architecture, see https://www.xmos.ai/download/xCORE-Architecture-Flyer(1.3).pdf

For a glimpse of the software best/worst case timing calculations, see

« Last Edit: July 10, 2023, 10:09:08 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #22 on: July 10, 2023, 01:19:24 pm »
XMOS xCORE.
Why did I expect to see these exact words from this exact person? :-DD I hope you are getting a good commission for all that shilling.
 
The following users thanked this post: Siwastaja

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19724
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #23 on: July 10, 2023, 03:53:54 pm »
XMOS xCORE.
Why did I expect to see these exact words from this exact person? :-DD I hope you are getting a good commission for all that shilling.

My payout is, unfortunately, a big fat zero. I wouldn't want it any other way (=> I wouldn't be a successful politician!).

The same questions/problems+preconceptions seem to arise regularly and always lead to the same answers, complete with many caveats.
When the unnecessary preconceptions are challenged and removed, it is unsurprising that some novel answers (with far fewer caveats) come into view.

I will admit to being a fanboy, though.
Recent experiences of developing with atmega328 processors were successful but boring.
Recent experiences of developing with xCORE processors were not only more successful than I hoped, but more fun and surprisingly pain-free. The latter mean a lot to me.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4984
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #24 on: July 10, 2023, 05:00:27 pm »
XMOS xCORE is actually a pretty interesting architecture that packs a lot of power.

It is just that their range of applications is a bit niche: they are too big, complex and power hungry to be an MCU replacement, while they don't quite have the throughput of FPGAs. They tend to shine the most when you need weird interfaces at moderate data rates. This could be a fitting use for them.

But you can also get around the cache timing issues on modern MCUs. Most of them are ARM and can execute code from anywhere, so you can just put an ISR into the RAM closest to the CPU; that RAM typically runs at the same clock speed as the CPU, so there is no latency variability. But yeah, not when you have to watch for multiple pulses; timer peripherals are there for a reason.
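
On GCC-based toolchains that usually just means a section attribute plus a matching linker-script entry; a rough sketch, assuming the linker script actually maps a .ramfunc section to the tightly coupled RAM and the startup code copies it there:

Code: [Select]
#include <stdint.h>

extern volatile uint32_t GPIO_IN;   /* placeholder for the port input register */
volatile uint32_t snapshot;

/* Placed in tightly coupled / zero-wait-state RAM so instruction fetches are
 * constant-time. The section name must match one your linker script really
 * maps to that RAM (and the startup code must copy it there). */
__attribute__((section(".ramfunc"), noinline))
void CAPTURE_IRQHandler(void)
{
    snapshot = GPIO_IN;
    /* clear the interrupt flag here */
}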

As for edge interrupts locking up your MCU, yeah, that is something to watch for. The way I deal with it is using a counter to disable them for a period of time if they are coming in more often than they should.
 

