Author Topic: MCU with FPGA vs. SoC FPGA  (Read 24401 times)

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
MCU with FPGA vs. SoC FPGA
« on: July 06, 2023, 12:29:48 am »
Hi all,

I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.

As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendations. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.

With the above said, I'm also mindful of future applications, whereby I might want to use the FPGA for additional hard processing requirements (e.g. a high-throughput ADC). In this scenario, the MCU-FPGA interface may become a limitation/bottleneck for the throughput of the system, and I may actually be better off committing to an SoC FPGA from the outset, where the bulk of the sensor processing is managed in Programmable Logic, with post-processing, storage, transmission, etc. handled by the processor, making use of the high-throughput interface/fabric between the two.

Any thoughts, opinions or recommendations very much appreciated!
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4545
  • Country: au
    • send complaints here
Re: MCU with FPGA vs. SoC FPGA
« Reply #1 on: July 06, 2023, 02:19:23 am »
There are two general reasons to go for an SoC:
you are moving "fast" data: low latency and/or high throughput, say tens of MB/s and up,
or
there is an SoC which is cheaper than buying a suitable FPGA + processor.

Don't discount the option of using cheap microcontrollers as smart peripherals in parallel; plenty of systems sit in the sort of space you describe, yet there is no go-to solution for them.
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #2 on: July 06, 2023, 02:23:03 am »
Thanks for your input, it's greatly appreciated.

The option of having additional 'front-end' MCUs feeding into the larger, more capable MCU is actually quite an interesting one.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #3 on: July 06, 2023, 04:00:35 am »
Quite a few application-processor SoCs are designed that way - for example, pretty much the entire i.MX family of SoCs from NXP contains both "fast" cores for running Linux and applications, and small MCU cores for real-time tasks.
Though there is an opinion out there - and I'm sure we're about to hear it voiced here in short order - that having more than one programmable element in a circuit is a bad idea. I personally disagree with such a blanket statement (like pretty much all blanket statements), but I generally try to consolidate programmable elements into a single IC if I can help it. For example, a lot of FPGA designs tend to require a small MCU for orchestrating the main processing pipeline as well as for auxiliary tasks like handling user input, so it's quite common to include an MCU softcore in the FPGA design to avoid having to design external interconnects, as well as having to program two devices separately from each other.

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4961
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #4 on: July 06, 2023, 06:04:18 am »
Yep, you are spot on.

SPI is one of the best simple interfaces between an MCU and an FPGA, since it does not need many wires and an SPI slave inside the FPGA is mostly just a glorified shift register. Not super fast, but still plenty fast enough for applications where you don't have to move large buffers' worth of data across constantly. Most MCUs will do 10 to 50 Mbit/s over SPI; an easy way to boost that is indeed Quad or Octal SPI, getting you into the tens of MB/s.
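To make that concrete, the MCU side of such a register-style SPI link can be tiny. Here is a minimal sketch using the STM32 HAL; the command byte, the CS pin and the 12-channel record layout are made-up placeholders for whatever protocol the FPGA ends up implementing:

Code: [Select]
#include "stm32h5xx_hal.h"   /* assumed device family header */

extern SPI_HandleTypeDef hspi1;       /* configured elsewhere, e.g. by CubeMX */

#define CMD_READ_SNAPSHOT  0x01u      /* hypothetical FPGA command byte */

/* One timestamped measurement snapshot, as the FPGA might pack it (assumption). */
typedef struct {
    uint32_t timestamp;               /* FPGA tick count at snapshot time */
    uint32_t period[12];              /* per-channel period, in FPGA ticks */
    uint32_t high_time[12];           /* per-channel high time, in FPGA ticks */
} snapshot_t;

HAL_StatusTypeDef fpga_read_snapshot(snapshot_t *snap)
{
    uint8_t cmd = CMD_READ_SNAPSHOT;
    HAL_StatusTypeDef rc;

    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_RESET);  /* assert CS (pin is an assumption) */
    rc = HAL_SPI_Transmit(&hspi1, &cmd, 1, 10);
    if (rc == HAL_OK)
        rc = HAL_SPI_Receive(&hspi1, (uint8_t *)snap, sizeof *snap, 10);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET);    /* release CS */
    return rc;
}

On the FPGA side that maps straight onto the glorified shift register plus a small command decoder.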

Going faster indeed involves an external memory bus. A lot of MCUs have one, and it takes very little logic in the FPGA to implement. It will need a LOT of pins for all the data and address lines, but in return you get >100 MB/s and, most importantly, the ability to simply DMA data in and out of the FPGA just as if it were a peripheral inside the MCU. The MCU likely can't even handle more speed than this anyway, so it is as fast as you would want to go.

The next step up is an ARM+FPGA SoC. Those also have a memory bus between the ARM and FPGA sides, but the difference is that the bus is humongous (since both live on the same silicon die, it is easy to connect many signals across), letting it go into the 10 GB/s range of speeds. This is useful for implementing hardware-accelerated coprocessors in the FPGA.

For just pulse counting I'd say SPI is plenty fast enough. But using tiny MCUs to do it is not a bad idea either; tiny MCUs can be cheap (FPGAs get expensive real fast), while they need very little firmware if the only thing they do is: wait for a pulse, measure it, send out a number.
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #5 on: July 06, 2023, 09:34:22 am »
Thanks all, much appreciated as always.

I've just had a bit of a look around at potential MCUs, and it looks like there are some excellent options if I were to add a small MCU for this purpose and then interface with the primary MCU as suggested above. For example, the STM32H5 offers a 250MHz clock, which would be great from a timestamping perspective (assuming the TIM1/2/3/etc. timers run at full clock speed; I haven't checked), and exposes 12 capture/compare channels (across 3 Timer Counter modules) in a 7x7mm LQFP48 package (in addition to the SPI, etc. peripherals required for interfacing).
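For reference, the per-channel measurement could then stay entirely in hardware. A rough sketch with the STM32 HAL, assuming a timer has been configured (e.g. in CubeMX) in PWM-input mode so that CH1 captures the period and CH2 the high time; htim1 and the exact rounding behaviour are assumptions that depend on the actual configuration:

Code: [Select]
#include "stm32h5xx_hal.h"   /* assumed device family header */

extern TIM_HandleTypeDef htim1;   /* generated by CubeMX (assumption) */

void capture_start(void)
{
    HAL_TIM_IC_Start(&htim1, TIM_CHANNEL_1);   /* period capture */
    HAL_TIM_IC_Start(&htim1, TIM_CHANNEL_2);   /* high-time capture */
}

/* Read back frequency (Hz) and duty (0..1); tim_clk_hz is the timer kernel clock. */
void capture_read(uint32_t tim_clk_hz, float *freq, float *duty)
{
    uint32_t period = HAL_TIM_ReadCapturedValue(&htim1, TIM_CHANNEL_1);
    uint32_t high   = HAL_TIM_ReadCapturedValue(&htim1, TIM_CHANNEL_2);

    if (period != 0) {
        *freq = (float)tim_clk_hz / (float)period;
        *duty = (float)high / (float)period;
    }
}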
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4801
  • Country: pm
  • It's important to try new things..
Re: MCU with FPGA vs. SoC FPGA
« Reply #6 on: July 06, 2023, 10:01:57 am »
I have tried a couple of times with MCU+CPLD/FPGA, as well as with an embedded MCU (Forth J1a, RISC-V, MicroBlaze) inside an FPGA.
I would say a combination of a standard MCU (e.g. the STM32) and an FPGA is the easiest way to start.
In the past I worked with the smallest "FPGA" - the Lattice iCE40LP384 with only 384 cells (free dev tools under Windows) - and the BluePill (stm32duino with the STM32F103, under Windows), and the development was pretty easy.
Putting the MCU inside the FPGA adds a lot of development complexity (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can find easily in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
« Last Edit: July 06, 2023, 10:11:40 am by iMo »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #7 on: July 06, 2023, 02:31:28 pm »
Putting the MCU inside the FPGA adds a lot of development complexity (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can find easily in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
It depends on the device. Xilinx, for example, offers an MCU softcore with a full set of peripheral IPs for free, and includes an integrated toolchain & IDE with live debugging for software development, which makes firmware development a breeze with zero HDL knowledge required (unless you want/need to implement some custom peripherals). It's actually much easier than using separate FPGA and MCU toolchains, because here it's all unified in a single package.
« Last Edit: July 06, 2023, 10:01:14 pm by asmi »
 
The following users thanked this post: iMo

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #8 on: July 06, 2023, 09:43:18 pm »
Putting the MCU inside the FPGA adds a lot of development complexity (FPGA dev chains/compilers, tools, etc.). Also, it is quite difficult/laborious (!!) to create all those peripherals you can find easily in any standard MCU.
Of course it depends on the application, the data rate, and the amount of data you want to process inside the MCU.
It depends on the device. Xilinx, for example, offers an MCU softcore with a full set of peripheral IPs for free, and includes an integrated toolchain & IDE with live debugging for software development, which makes firmware development a breeze with zero HDL knowledge required (unless you want/need to implement some custom peripherals). It's actually much easier than using separate FPGA and MCU toolchains, because here it's all unified in a single package.

Thanks for clarifying. I'm not at all knowledgeable about the MicroBlaze core, but that certainly does look like an ideal (and extensible/scalable) solution. Just to confirm, is Vitis the software development platform you mention? I assume I can design both hardware and software in Vitis, or would I need to use Vivado for the hardware and Vitis for MicroBlaze?
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #9 on: July 06, 2023, 10:01:02 pm »
Thanks for clarifying. I'm not at all knowledgeable about the MicroBlaze core, but that certainly does look like an ideal (and extensible/scalable) solution. Just to confirm, is Vitis the software development platform you mention? I assume I can design both hardware and software in Vitis, or would I need to use Vivado for the hardware and Vitis for MicroBlaze?
Vivado and Vitis are shipped as a single package; you can install both at the same time. Vitis is what used to be called the SDK - the IDE for software development - while Vivado is used for the hardware side of things. Once you design the hardware, you can import the platform specification into Vitis, and it will generate a BSP for it, including drivers for whatever IPs you've included in the hardware; it will also import things like base addresses for your peripherals and memory locations, so you can get on with software development pretty much immediately. The same platform specification file can also be used in the Petalinux package to create a bootable Linux image, but there are certain restrictions such a system needs to meet hardware-wise to actually be able to run Linux. I never actually used Linux on MicroBlaze in any project beyond just playing around with it, but it's there in case you need it.
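To give a flavour of what the generated BSP buys you: reading an AXI GPIO register from MicroBlaze looks roughly like this (a sketch using Xilinx's standalone xgpio driver; the exact XPAR_GPIO_0_DEVICE_ID macro name depends on how the IP instance is named in the block design):

Code: [Select]
#include "xparameters.h"   /* generated: device IDs / base addresses for your design */
#include "xgpio.h"         /* Xilinx AXI GPIO standalone driver */
#include "xil_printf.h"

int main(void)
{
    XGpio gpio;

    /* Device ID macro comes from xparameters.h (name is design-dependent). */
    XGpio_Initialize(&gpio, XPAR_GPIO_0_DEVICE_ID);
    XGpio_SetDataDirection(&gpio, 1, 0xFFFFFFFF);   /* channel 1, all pins as inputs */

    while (1) {
        u32 pins = XGpio_DiscreteRead(&gpio, 1);    /* read current input levels */
        xil_printf("pins = 0x%08x\r\n", pins);
    }
}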

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4801
  • Country: pm
  • It's important to try new things..
Re: MCU with FPGA vs. SoC FPGA
« Reply #10 on: July 06, 2023, 10:22:41 pm »
I have Petalinux running on the ML-605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7), and it took me a month messing with all the missing information I needed to build a running version with some simple C code..  :D
« Last Edit: July 06, 2023, 10:31:42 pm by iMo »
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #11 on: July 06, 2023, 10:31:19 pm »
That's really interesting; I can only imagine the complexity at play within the MicroBlaze IP, all the peripheral blocks, etc. I would likely look to use FreeRTOS within MicroBlaze (which appears to be supported, from a cursory look); I'm not sure I'd have need of a full Linux OS at this stage.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #12 on: July 06, 2023, 11:16:01 pm »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.



As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendations. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.
That will depend on how much data needs to move. SPI will always be simplest, until it can no longer shift enough data.

I would look first for an MCU that can manage the captures, as the capture ability of MCUs is improving quite rapidly.

Even the PIO on the Pi PICO might be good enough?

Addit: The examples I found for PIO suggest it can only wait on a single pin.
The Parallax P2 P2X8C4M64P costs more (in TQFP100), but it can wait on pin-pattern events at 180~250MHz sysclks.

Addit 2:
If I'm reading the PICO PIO info correctly, I think it can wide-wait-poll to a granularity of PCLK/2.5, capture time to PCLK/2, and capture pulses as narrow as a bit under 100ns.
Any number of pins can change at the same time.
Using 2 of the 8 state engines, two FIFOs export a 32b dT timestamp and a 32b NewPins GPIO pattern, so it can do 20+ pins.
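If anyone wants to experiment with that scheme, the MCU-side drain loop is small. A sketch using the pico-sdk, assuming a PIO program (not shown) has been loaded on two state machines that push a 32-bit delta-time word and a 32-bit pin-snapshot word in lockstep:

Code: [Select]
#include "pico/stdlib.h"
#include "hardware/pio.h"

/* State machine numbers are assumptions; set by whoever loads the PIO program. */
#define SM_TIME  0   /* pushes a 32b delta-time between pin changes */
#define SM_PINS  1   /* pushes a 32b snapshot of the new pin pattern */

void drain_captures(void)
{
    /* Both FIFOs fill in lockstep: one entry per pin-change event. */
    while (!pio_sm_is_rx_fifo_empty(pio0, SM_TIME)) {
        uint32_t dt   = pio_sm_get_blocking(pio0, SM_TIME);
        uint32_t pins = pio_sm_get_blocking(pio0, SM_PINS);
        /* hand (dt, pins) off to the per-channel measurement bookkeeping... */
        (void)dt; (void)pins;
    }
}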
« Last Edit: July 07, 2023, 06:40:26 am by PCB.Wiz »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #13 on: July 06, 2023, 11:59:48 pm »
I have Petalinux running on the ML-605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7), and it took me a month messing with all the missing information I needed to build a running version with some simple C code..  :D
With Vivado/Vitis now it takes about 10-20 minutes (depending on your workstation's horsepower) to get a basic MicroBlaze system (DDR3, QSPI, UART, GPIO, timer, I2C) up and running, if you know what you are doing.

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #14 on: July 07, 2023, 12:01:36 am »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).

As such, I'm considering an FPGA to receive these digital inputs. The FPGA would perform the required measurements on the inputs, and then timestamp the latest measurements (across all inputs) at a specified, deterministic frequency and present the timestamped results to the MCU for further processing. I'm not quite sure how best to approach the MCU-FPGA interface, and would be very open to recommendations. A simple SPI/QSPI interface would probably do, but perhaps an AHB master/slave arrangement could also work.
That will depend on how much data needs to move. SPI will always be simplest, until it can no longer shift enough data.

I would look first for an MCU that can manage the captures, as the capture ability of MCUs is improving quite rapidly.

Even the PIO on the Pi PICO might be good enough?

Addit: The examples I found for PIO suggest it can only wait on a single pin.
The Parallax P2 P2X8C4M64P costs more (in TQFP100), but it can wait on pin-pattern events at 180~250MHz sysclks.

The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Having not used that platform before, the assembly language is certainly a bit foreign!
« Last Edit: July 07, 2023, 12:04:04 am by jars121 »
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #15 on: July 07, 2023, 12:06:48 am »
I have Petalinux running on the ML-605 Virtex-6 board (the stock example). No idea how they did it with ISE 13; it had to be a pretty intense exercise, imho.. I ran my MicroBlaze on a Spartan-6 with some basic I/O (ISE 14.7), and it took me a month messing with all the missing information I needed to build a running version with some simple C code..  :D
With Vivado/Vitis now it takes about 10-20 minutes (depending on your workstation's horsepower) to get a basic MicroBlaze system (DDR3, QSPI, UART, GPIO, timer, I2C) up and running, if you know what you are doing.

I've spent a couple of hours this morning reading documentation and watching some demonstrations, and it does indeed look relatively straightforward. I think the next step would be to purchase a Spartan- or Artix-based board and implement a MicroBlaze solution. I'm still not clear on the process by which FPGA data (e.g. the input signal measurement data) is made available to the MCU, but I suppose that'll become clear when I start digging into the AXI platform.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #16 on: July 07, 2023, 12:57:20 am »
I think the next step would be to purchase a Spartan- or Artix-based board and implement a MicroBlaze solution.
You don't actually need any hardware to start playing around with building hardware, but if you want to run software, of course you will need some board. I typically recommend one of Digilent's boards, because they are one of the very few devboard vendors that actually provide support to their customers.

I'm still not clear on the process by which FPGA data (e.g. the input signal measurement data) is made available to the MCU, but I suppose that'll become clear when I start digging into the AXI platform.
Depending on the bandwidth required, there are several ways to go about it.
1. If you don't require a lot of bandwidth and you are OK with PIO (where the CPU reads data from the peripheral piece by piece), you can implement a simple memory-mapped AXI4-Lite peripheral with a few registers, one of which is used to return the next piece of data from the peripheral (or to write the next piece of data into it), plus status and command registers to control the peripheral, just like many standard peripherals do. The advantage of this approach is simplicity - it literally takes a couple of hours to get up and running in HDL; the disadvantage is that it can consume a lot of CPU time on such I/O operations (just like using PIO in any "regular" MCU can eat up a lot of CPU cycles).
2. If you need high bandwidth, you can implement an AXI-Stream interface, which can be used with the AXI DMA IP provided by Xilinx to perform DMA operations without CPU involvement. You will still need an AXI4-Lite slave interface to control your peripheral (typically a pair of registers - a command register to issue commands, and a status register to read back status information), but once the CPU configures the DMA and commands the beginning of a transfer, it can go about its own business until it receives an interrupt informing it that the transfer is complete. Or the DMA can be configured to work as a circular buffer and run indefinitely, constantly refreshing the buffer and using interrupts to let the CPU know when a certain part of the buffer has been filled so it can do something with it.

If you've worked with any half-recent MCU, you will recognize both approaches, as they are widely used in just about all of them.
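For approach 1, the CPU side is nothing more than volatile accesses to the base address the Vivado address editor assigned to the peripheral. A minimal sketch; the 0x44A00000 base, the register offsets and the bit meanings are made-up placeholders:

Code: [Select]
#include <stdint.h>

/* Hypothetical register map of a custom AXI4-Lite capture peripheral. */
#define CAPTURE_BASE   0x44A00000UL      /* from the Vivado address editor (assumption) */
#define REG_STATUS     0                 /* bit 0 = data available (assumption) */
#define REG_COMMAND    1                 /* write 1 = start capture (assumption) */
#define REG_DATA       2                 /* next 32-bit measurement word */

static volatile uint32_t * const regs = (volatile uint32_t *)CAPTURE_BASE;

void capture_start(void)
{
    regs[REG_COMMAND] = 1;
}

/* Poll for and read one measurement word (PIO style). */
uint32_t capture_read_word(void)
{
    while ((regs[REG_STATUS] & 1u) == 0)
        ;                                /* spin until data is available */
    return regs[REG_DATA];
}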
« Last Edit: July 07, 2023, 12:58:55 am by asmi »
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #17 on: July 07, 2023, 01:09:43 am »
The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Having not used that platform before, the assembly language is certainly a bit foreign!
You can avoid going as low as the ASM level in most cases, as there are compiled languages that will place small programs (like this will be) into a core.
You just look at the generated assembler listing.
The smaller and cheaper P8X32A is good to 80~100MHz wait granularity, which might be good enough?
With wait-pattern-type coding there will be some minimum pulse width that can be captured, but the wait-exit is sysclk-granular.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #18 on: July 07, 2023, 01:18:12 am »

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
Yes, you would pick a part that can at least manage a HW capture function, so the timestamp is ensured, and then you just need to read it out before the next capture.
If you expect very narrow pulses, they can be managed by allocating an edge per capture, but that doubles the capture storage needed.
If you are quite unsure about how many inputs you need, and expect to jump from 12 to 20+ inputs, an MCU that can do wide pattern waits may be best.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #19 on: July 08, 2023, 09:27:43 pm »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
IMHO this approach is just horribly wrong. Actually in two ways: A) never use interrupts on inputs that can change at will, because they can and will lock up your application due to an unforeseen circumstance (which can be as simple as a loose wire or a nearby noise source). B) don't use interrupts when they interfere with each other; find a common denominator and combine multiple interrupts into one. IOW: use a single timer interrupt in which you first sample the GPIO port (preferably having all the input pins on the same port) and then process the state of the inputs as needed. Keep in mind that after the GPIO input register (containing ALL input levels in a single read operation) has been read, there is no source of jitter that can be added to the signal, so processing time doesn't matter as long as you finish before the timer interval has passed. On a dual-core MCU running at several hundred MHz (NXP's RT1000 series, for example) you should be able to achieve time resolution in the sub-microsecond region without much effort.

If interrupt jitter gives too much uncertainty, an option is to look for a controller that supports timer-triggered DMA transfers (one address to a double buffer), which allows processing the captured data 'en bloc' without interrupt overhead as well.
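A minimal sketch of that single-timer-interrupt pattern, in CMSIS-style C for an STM32-class part (the timer/port choice, the tick bookkeeping and the rising-edge-only handling are simplifying assumptions; duty cycle would need the falling edges handled the same way):

Code: [Select]
#include <stdint.h>
#include "stm32h5xx.h"            /* assumed CMSIS device header */

#define N_CH  12

static uint32_t prev_pins;
static uint32_t last_rise[N_CH];  /* tick count at previous rising edge */
static uint32_t period[N_CH];     /* most recent period, in sample ticks */
static uint32_t ticks;            /* incremented once per timer interrupt */

void TIM2_IRQHandler(void)        /* fires at a fixed, deterministic rate */
{
    TIM2->SR = ~TIM_SR_UIF;       /* clear the update flag (rc_w0 bits) */

    uint32_t pins   = GPIOA->IDR; /* ALL inputs sampled in one atomic read */
    uint32_t rising = pins & ~prev_pins;
    prev_pins = pins;
    ticks++;

    /* From here on, processing time no longer adds jitter to the timestamps. */
    for (int ch = 0; ch < N_CH; ch++) {
        if (rising & (1u << ch)) {
            period[ch]    = ticks - last_rise[ch];
            last_rise[ch] = ticks;
        }
    }
}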
« Last Edit: July 08, 2023, 10:34:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Infraviolet

  • Super Contributor
  • ***
  • Posts: 1033
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #20 on: July 10, 2023, 12:04:24 am »
"100Hz is just an example in this case..."
Have you done this for <<12 channels successfully at frequencies up to the maximum you'll ever need? If so, you could always have a bunch of microcontrollers in parallel, each handling just a few of the 12+ channels. Maybe add an extra interrupt on all of them, connected to a common line which could be toggled up and down by a master microcontroller, as a way to provide a reference so all the other MCUs can stay in sync.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #21 on: July 10, 2023, 10:00:23 am »
100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
...
The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Having not used that platform before, the assembly language is certainly a bit foreign!

There is, IMNSHO, a better alternative if you are prepared to accept single source ICs that you can buy at Digikey: the XMOS xCORE.

Overall my summary is that it slots into the area where traditional microprocessors are pushing it but where FPGAs are overkill. You get the benefits of fast software iteration and timings guaranteed by design, not measurement.

Why so good? The hardware and software are designed from the ground up for hard realtime operation:
  • each i/o port has its own (easily accessible) timer and FPGA-like simple/strobed/clocked/SERDES with pattern matching
  • dedicate one (of 32) core and task to each I/O or group of I/Os
  • software typically looks like the obvious
    • initialise
    • loop forever, waiting for input or timeout or message then instantly resume and do the processing

You mention ISRs as introducing jitter, but omit to mention caches. The xCORE devices have neither :) They do have up to 32 cores and 4000 MIPS per chip (expandable).

Each IO port has its own timers, guaranteeing output on a specific clock cycle and measuring the specific clock cycle on which an input arrived.

The equivalent of an RTOS (comms, scheduling) is implemented in silicon.

There is a long and solid theoretical and practical pedigree for the hardware (Transputer 1980s) and software (CSP/Occam 1970s).

The development tools (command line and IDE) will inspect the optimised code to determine exactly how many clock cycles it will take to get from here to there. None of this "measure and hope you have spotted the worst case" rubbish!

When I used it, I found it stunningly easy, having the first iteration of code working within a day. There were no hardware or software surprises, no errata sheets.

For a remarkably information dense flyer on the basic architecture, see https://www.xmos.ai/download/xCORE-Architecture-Flyer(1.3).pdf

For a glimpse of the software best/worst case timing calculations, see

« Last Edit: July 10, 2023, 10:09:08 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #22 on: July 10, 2023, 01:19:24 pm »
XMOS xCORE.
Why did I expect to see these exact words from this exact person? :-DD I hope you are getting a good commission for all that shilling.
 
The following users thanked this post: Siwastaja

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #23 on: July 10, 2023, 03:53:54 pm »
XMOS xCORE.
Why did I expect to see these exact words from this exact person? :-DD I hope you are getting a good commission for all that shilling.

My payout is, unfortunately, a big fat zero. I wouldn't want it any other way (=> I wouldn't be a successful politician!).

The same questions/problems+preconceptions seem to arise regularly and always lead to the same answers, complete with many caveats.
When the unnecessary preconceptions are challenged and removed, it is unsurprising that some novel answers (with far fewer caveats) come into view.

I will admit to being a fanboy, though.
Recent experiences of developing with atmega328 processors were successful but boring.
Recent experiences of developing with xCORE processors were not only more successful than I hoped, but more fun and surprisingly pain-free. The latter mean a lot to me.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4961
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #24 on: July 10, 2023, 05:00:27 pm »
XMOS xCORE is actually a pretty interesting architecture that packs a lot of power.

It is just that their range of applications is a bit niche: they are too big, complex and power-hungry to be an MCU replacement, while they don't quite have the throughput of FPGAs. They tend to shine the most when you need weird interfaces at moderate data rates. This could be a fitting use for them.

But you can also get around the cache timing issues on modern MCUs. Most of them are ARM cores that can execute code from anywhere, so you can just put an ISR into the RAM closest to the CPU, which typically runs at the same clock speed as the CPU, so there is no latency variability. But that doesn't help when you have to watch for multiple pulses; timer peripherals are there for a reason.
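For what it's worth, pinning a hot handler into that RAM is usually just a section attribute. A sketch; the ".ramfunc" section name and the matching linker-script entry are toolchain-specific assumptions (some vendors ship a ready-made macro for this):

Code: [Select]
/* Place the hot ISR in tightly-coupled / zero-wait-state RAM so instruction
   fetches never touch flash or cache. Requires a ".ramfunc" section in the
   linker script and startup code that copies it to RAM (toolchain-specific). */
__attribute__((section(".ramfunc"), noinline))
void EXTI0_IRQHandler(void)
{
    /* clear the pending flag, grab the timestamp, do the minimal work here */
}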

As for edge interrupts locking up your MCU: yeah, that is something to watch for. The way I deal with it is to use a counter that disables them for a period of time if they come in more often than they should.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #25 on: July 10, 2023, 06:06:54 pm »
XMOS xCORE is actually a pretty interesting architecture that packs a lot of power.

It is just that there range of applications is a bit niche, they are too big, complex and power hungry to be a MCU replacement while they don't quite have the throughput of FPGAs. They tend to shine the most when you need weird interfaces at moderate data rates. This could be a fitting use for it.

I think that is a little harsh; they are much more than "weird interfaces" :) They certainly aren't FPGAs and won't replace them, but they are a half-way house between conventional MCUs and FPGAs. They have the advantages of both, within their fundamental constraints.

I sure as hell hope they won't be the final answer to hard realtime and parallel computation - we need better. But they sure as hell are a good way of making people realise the limitations of conventional MCUs and languages.

Quote
But you can also get around the cache timing issues on modern MCUs. Most of them are ARM and it can execute code from anywhere, so you can just put a ISR into RAM closest to the CPU and that one typically runs at the same clock speed as the CPU so there is no latency variability. But yeah not when you have to watch for multiple pulses, timer peripherals are there for a reason.

That can be part of the solution, but with limitations.

There is no way in hell that an off-the-shelf toolchain can accurately predict that multiple I/O operations and processing are guaranteed to complete on time. You might be able to infer part of those guarantees from the design architecture and implementation, but such inferences will require a lot of manual intervention and faith that you haven't missed something.

Notable points about the xCORE ecosystem are the extremely competent way in which the hardware supports parallel operation and timing guarantees, the way the language concepts support parallel computation, and the way the language details support timely interaction with hardware. In comparison, C on conventional MCUs is stone-age technology!

As someone that has long been frustrated by the needless, false and damaging division between "hardware" and "software", the xCORE hardware/software/toolchain ecosystem beautifully blurs the distinctions - and shows what is possible.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #26 on: July 11, 2023, 02:52:53 am »
There is, IMNSHO, a better alternative if you are prepared to accept single source ICs that you can buy at Digikey: the XMOS xCORE.
Yes, XMOS parts have a place, but the price is quite high - the cheapest at Digikey is over $10 at 100+ quantity - so you have to really need the MIPS on offer.

The Pi PICO PIO is simpler, but more than good enough for the OP's 100ns and some kHz specs, and it is far cheaper and available in module form.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4961
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #27 on: July 11, 2023, 05:32:03 am »
I haven't used XMOS since around the 2nd-generation chips (around when they renamed threads to 'cores' and cores to 'tiles' for marketing reasons).

Back then they were a bit limited by having only 64KB of RAM for each core (including instruction memory) and had basically no peripherals, so a lot of those impressive MIPS numbers got wasted on bit-banging things like UART, SPI, I2C etc. Admittedly they are incredibly good at 'bitbanging' protocols due to the super-tight timing control they offer, but it would be nice to have some standard peripherals for the common stuff. It is also really cool how you can link together multiple chips so they act as 'one big computer'.

It is certainly a big step forward, but it was not quite mature back then. Even the compiler was a bit rough around the edges at first.

I do want something between Xcore and a regular MCU.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #28 on: July 11, 2023, 06:54:04 am »
I do want something between Xcore and a regular MCU.

The Parallax parts are probably the closest, with the Pi PICO PIO giving another point on the price curve.
Parallax parts have 8 real/true separate cores, and 32b operations.
The older 40/44-pin P8X32A has 32k RAM for code/data and 512 words for code in each core.
The newer 100-pin P2X8C4M64P has 512k RAM and smarter peripheral support at the pins, plus a DMA-like streamer.

The Pi PICO PIO supports just 32 opcodes in the state-engine area, which can be shared across up to 4 state engines.
Still, it's surprising what people have managed to do with that, and it is much cheaper than the XMOS or Parallax parts.

LCSC shows the RP2040 at US$0.7366 (100+), with 229,870 in stock.

 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4961
  • Country: si
Re: MCU with FPGA vs. SoC FPGA
« Reply #29 on: July 11, 2023, 07:19:12 am »
Yep, I've been messing with the Pi Pico and I quite liked it.

I didn't go too deep into it, but one thing I hated is how much dicking around is needed to get a C++ IDE with debugging working on Windows. I ended up using Visual Studio with VisualGDB and a spare Pico flashed into being an SWD interface dongle. They seemed to be pushing that MicroPython thing too much.

I never used the PIO functionality, but I was certainly really impressed with what others have done with it.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #30 on: July 11, 2023, 09:57:59 am »
I haven't used XMOS since around the 2nd-generation chips (around when they renamed threads to 'cores' and cores to 'tiles' for marketing reasons).

Back then they were a bit limited by having only 64KB of RAM for each core (including instruction memory) and had basically no peripherals, so a lot of those impressive MIPS numbers got wasted on bit-banging things like UART, SPI, I2C etc. Admittedly they are incredibly good at 'bitbanging' protocols due to the super-tight timing control they offer, but it would be nice to have some standard peripherals for the common stuff. It is also really cool how you can link together multiple chips so they act as 'one big computer'.

Some do have standard peripherals, e.g. Ethernet and USB interfaces. But you don't need them; e.g. you can capture+generate 100Mb/s Ethernet packet bit streams in software, as well as process the information in the packets.

But there is an argument that while conventional MCUs absolutely require highly specialised peripherals, they are less necessary in the xCORE ecosystem....

Overall, peripherals are merely means to bang bits, one way or another. A conventional MCU's processor could do bit banging with a very simple peripheral - but it couldn't do much else while guaranteeing the timing. In order for the MCU to do other "useful" processing, its peripherals have to offload processing - which makes them more specialised, complex and complicated.

With xCORE you simply dedicate one of the many cores to the I/O, thus avoiding the need for specific peripheral hardware. Plus there is the major benefit that the I/O information is delivered directly to your application for processing, without a complicated software lump in the middle. (The RTOS is implemented in silicon :) )

Quote
I do want something between Xcore and a regular MCU.

The benefit is derived from the hardware plus the software plus the toolchain, as I'm sure you are aware. The specific processor is less important. That was demonstrated by the XMOS device that had an ARM as one core, integrated into the software ecosystem. It doesn't seem to have been beneficial.

I wonder whether some processors' PIO "mini processors" can achieve something similar. I've looked at some and decided they looked complex and complicated to use, and if you discovered something that didn't quite fit (size, complexity) then you'd hit a brick wall. Both FPGAs and xCORE avoid such brick walls. I don't like discovering brick walls halfway through an implementation :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #31 on: July 11, 2023, 10:10:38 am »
Yep, I've been messing with the Pi Pico and I quite liked it.

I didn't go too deep into it, but one thing I hated is how much dicking around is needed to get a C++ IDE with debugging working on Windows. I ended up using Visual Studio with VisualGDB and a spare Pico flashed into being an SWD interface dongle. They seemed to be pushing that MicroPython thing too much.

I never used the PIO functionality, but I was certainly really impressed with what others have done with it.

Yeah, I hate that too.

As I've said, I was gobsmacked at how long it didn't take between receiving the IDE and devboard, and having key features of my application running. Hours, not days. Everything just worked as expected, without surprises.

One question I would have about PIO functionality is whether the knowledge and experience would be useful in two years' time.

The PIO functionality seems very low level and tied to a specific device. Overall they are "just" a way of making your own slightly specialised hardware peripheral.

While XMOS is single-sourced, the high-level concepts have been demonstrated and proven over many decades. They strongly encourage thinking in terms of multiple events being processed simultaneously by the (multiple) FSMs as seen in the requirements documents. I don't feel the PIOs help much at that level.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online JPortici

  • Super Contributor
  • ***
  • Posts: 3469
  • Country: it
Re: MCU with FPGA vs. SoC FPGA
« Reply #32 on: July 11, 2023, 10:37:00 am »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
IMHO this approach is just horribly wrong. Actually in two ways: A) never use interrupts on inputs that can change at will, because they can and will lock up your application due to an unforeseen circumstance (which can be as simple as a loose wire or a nearby noise source). B) don't use interrupts when they interfere with each other; find a common denominator and combine multiple interrupts into one. IOW: use a single timer interrupt in which you first sample the GPIO port (preferably having all the input pins on the same port) and then process the state of the inputs as needed. Keep in mind that after the GPIO input register (containing ALL input levels in a single read operation) has been read, there is no source of jitter that can be added to the signal, so processing time doesn't matter as long as you finish before the timer interval has passed. On a dual-core MCU running at several hundred MHz (NXP's RT1000 series, for example) you should be able to achieve time resolution in the sub-microsecond region without much effort.

If interrupt jitter gives too much uncertainty, an option is to look for a controller that supports timer-triggered DMA transfers (one address to a double buffer), which allows processing the captured data 'en bloc' without interrupt overhead as well.

Agree 100%; I also think the problem is being tackled the wrong way.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #33 on: July 11, 2023, 02:08:18 pm »
Quote
timer-triggered DMA transfers (one address to a double buffer), which allows processing the captured data 'en bloc' without interrupt overhead as well.

Yup, nowadays they are not rare, so why not take advantage of the dedicated HW?
Best advice ever!
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4801
  • Country: pm
  • It's important to try new things..
Re: MCU with FPGA vs. SoC FPGA
« Reply #34 on: July 11, 2023, 07:02:47 pm »
Forth is interesting when running an MCU in an FPGA. The machine is simple and small, while pretty fast.
I ran the J1a 16-bit machine under Mecrisp Forth in a Lattice iCE40UP5K FPGA at 24MHz, and it processed 4+ concurrently-firing, random, rising-edge-triggered interrupts at 100kHz just fine (a BluePill generated a 4-bit-wide random pattern every 10us). In each ISR a 32-bit counter was incremented, and I knew what the final numbers in the 4 counters should be after a given number of sequences (e.g. 2 billion); they always matched. In addition, the millis interrupt was firing as well.
« Last Edit: July 11, 2023, 07:14:17 pm by iMo »
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #35 on: July 11, 2023, 10:39:23 pm »
I'm considering a particular application and would very much appreciate some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (e.g. 100Hz). I've achieved something similar using the MCU with high-priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function, makes this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
IMHO this approach is just horribly wrong. Actually in two ways: A) never use interrupts on inputs that can change at will, because they can and will lock up your application due to an unforeseen circumstance (which can be as simple as a loose wire or a nearby noise source). B) don't use interrupts when they interfere with each other; find a common denominator and combine multiple interrupts into one. IOW: use a single timer interrupt in which you first sample the GPIO port (preferably having all the input pins on the same port) and then process the state of the inputs as needed. Keep in mind that after the GPIO input register (containing ALL input levels in a single read operation) has been read, there is no source of jitter that can be added to the signal, so processing time doesn't matter as long as you finish before the timer interval has passed. On a dual-core MCU running at several hundred MHz (NXP's RT1000 series, for example) you should be able to achieve time resolution in the sub-microsecond region without much effort.

If interrupt jitter gives too much uncertainty, an option is to look for a controller that supports timer-triggered DMA transfers (one address to a double buffer), which allows processing the captured data 'en bloc' without interrupt overhead as well.

Thanks for your input, I really appreciate it.

I have actually considered this use case as well. I could set aside an entire 32-bit I/O port for these digital input signals, allowing a single read of the entire port (either in a software interrupt or via DMA transfer to a buffer, as you've suggested).

One clarification I do have, though: if the port read is driven by a timer interrupt, is the only way to guarantee that a given channel hasn't toggled between timer interrupts to select a timer frequency that is guaranteed to be faster than the input signals?
 

Offline jars121 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #36 on: July 11, 2023, 10:40:25 pm »
100Hz is just an example in this case. I might see requirements orders of magnitude faster and slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), having to be measured in ISRs within a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, which can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in the servicing of other MCU tasks (e.g. ADC result processing, non-DMA memory management, data staging, logging and transmission, etc.).
...
The Parallax is actually a really interesting option. I've not used them before, but on paper they would certainly handle this particular task. Having not used that platform before, the assembly language is certainly a bit foreign!

There is, IMNSHO, a better alternative if you are prepared to accept single source ICs that you can buy at Digikey: the XMOS xCORE.

Overall my summary is that it slots into the area where traditional microprocessors are pushing it but where FPGAs are overkill. You get the benefits of fast software iteration and timings guaranteed by design, not measurement.

Why so good? The hardware and software are designed from the ground up for hard realtime operation:
  • each i/o port has its own (easily accessible) timer and FPGA-like simple/strobed/clocked/SERDES with pattern matching
  • dedicate one (of 32) core and task to each I/O or group of I/Os
  • software typically looks like the obvious
    • initialise
    • loop forever, waiting for input or timeout or message then instantly resume and do the processing

You mention ISRs as introducing jitter, but omit to mention caches. The xCORE devices have neither :) They do have up to 32 cores and 4000 MIPS per chip (expandable).

Each I/O port has its own timers, guaranteeing output on a specific clock cycle and measuring the specific clock cycle on which an input arrived.

The equivalent of an RTOS (comms, scheduling) is implemented in silicon.

There is a long and solid theoretical and practical pedigree for the hardware (Transputer 1980s) and software (CSP/Occam 1970s).

The development tools (command line and IDE) will inspect the optimised code to determine exactly how many clock cycles it will take to get from here to there. None of this "measure and hope you have spotted the worst case" rubbish!

When I used it, I found it stunningly easy, having the first iteration of code working within a day. There were no hardware or software surprises, no errata sheets.

For a remarkably information dense flyer on the basic architecture, see https://www.xmos.ai/download/xCORE-Architecture-Flyer(1.3).pdf

For a glimpse of the software best/worst case timing calculations, see



This is an excellent post, thank you for taking the time to share your knowledge. I haven't considered the xCORE platform before, but it sounds like it would be of considerable interest to me as I look to move from the MCU domain into FPGAs.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #37 on: July 11, 2023, 11:03:11 pm »
I'm considering a particular application and would very much appreciated some thoughts and/or recommendations.

I've used a particular MCU on a number of projects, and am now looking at a slight variation on a previous design, for which I'm considering the addition of an FPGA. I need to measure (frequency, duty cycle, etc.) a number of input signals (12+), and perform deterministic timestamping of each input at a specified rate (i.e. 100Hz). I've achieved something similar using the MCU with high priority interrupts in the past, but the overhead of servicing these interrupts, as well as the jitter of the timestamping function make this approach unsuitable in this particular application. I also looked at using the Timer Counter (TC) functionality of the MCU, but despite having 12 16-bit counters, there are only three available external inputs, which obviously doesn't suffice.
What timing resolution and how many bits do you need to capture? How many edges/second?
You mention 100Hz, which suggests modest capture rates? A HW capture module could be unloaded by SW just fine at those rates.

100Hz is just an example in this case. I might see requirements orders of magnitude faster or slower than that, depending on the particular implementation. Even if the logging rate is relatively slow, the input signals themselves might be relatively fast, so what I'm trying to avoid is having numerous (12 to 20+) inputs, all at 'relatively' high frequencies (kHz+), measured in ISRs on a single-core MCU. Each rising and falling edge would need to be timestamped (if measuring duty cycle), which means multiple ISRs per channel, and that can introduce a not-insignificant amount of jitter in servicing the interrupts, timestamping, etc., as well as in servicing the MCU's other tasks (ADC result processing, non-DMA memory management, data staging, logging, transmission, etc.).
IMHO this approach is just horribly wrong. Actually in two ways: A) never use interrupts on inputs that can change at will, because they can and will lock up your application due to an unforeseen circumstance (which can be as simple as a loose wire, a nearby noise source, etc.). B) don't use interrupts that interfere with each other; find a common denominator and combine multiple interrupts into one. IOW: use a single timer interrupt in which you first sample the GPIO port (preferably with all the input pins on the same port) and then process the state of the inputs as needed. Keep in mind that once the GPIO input register (containing ALL input levels in a single read operation) has been read, no further jitter can be added to the signal, so processing time doesn't matter as long as you finish before the next timer interval. On a dual-core MCU running at several hundred MHz (NXP's RT1000 series, for example) you should be able to achieve sub-microsecond time resolution without much effort.

If interrupt jitter gives too much uncertainty, an option is to look for a controller that supports timer-triggered DMA transfers (one address to a double buffer), which lets you process the captured data 'en bloc' without any interrupt overhead as well.

Thanks for your input, I really appreciate it.

I have actually considered this use case as well. I could set aside an entire 32-bit I/O port for these digital input signals, allowing a single read of the entire port (either in a software interrupt or via DMA transfer to a buffer, as you've suggested).

One clarification though: if the port read is driven by a timer interrupt, is the only way to guarantee that a given channel hasn't toggled between timer interrupts to select a timer frequency guaranteed to be faster than the input signals?
Yes. But this is true for any kind of solution you choose. There will always be a minimum sample interval, so pulses narrower than that interval will be missed. You can opt to include a pulse-stretching circuit to make narrow pulses wide enough to detect. OTOH, extremely narrow pulses are typically noise due to interference of some sort, so you might want to filter the inputs. But this is very dependent on the actual signals you are sampling, so I can't tell you what is best; I can only outline the options.
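To put illustrative numbers on it (assumed, not from any particular part): sampling at 1 MHz gives a 1 µs aperture, so a 100 kHz input at 10% duty cycle (a 1 µs high time) sits right at the detection limit; anything narrower needs a faster sample clock or a pulse stretcher.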
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: jars121

Offline jars121Topic starter

  • Regular Contributor
  • *
  • Posts: 51
  • Country: 00
Re: MCU with FPGA vs. SoC FPGA
« Reply #38 on: July 11, 2023, 11:09:07 pm »
Understood, thanks for clarifying.

Given the low speed of the input signals (<100kHz), I'm fairly comfortable that I wouldn't miss any channel toggles in this particular application, but it's good to know for future applications where the signal frequencies might be considerably faster.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: MCU with FPGA vs. SoC FPGA
« Reply #39 on: July 11, 2023, 11:22:47 pm »
Thanks for your input, it's greatly appreciated.

The option of having additional 'front-end' MCUs, feeding into the larger, more capable MCU is actually quite an interesting one.

I have done it that way before with extra microcontrollers of the same type handling the display, keyboard, real time I/O, whatever.  Microcontrollers are so inexpensive that it can make sense, and if they are of the same type, then you already have the development system and tools.

It seems like an awful waste to replace a bunch of discrete logic with an entire microcontroller, but the economics favor it.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #40 on: July 11, 2023, 11:43:19 pm »
This is an excellent post, thank you for taking the time to share your knowledge. I haven't considered the xCORE platform before, but it sounds like it would be of considerable interest to me as I look to move from the MCU domain into FPGAs.

You're welcome. It may be worth your while searching the entire forum for xCORE posts by tggzzz. There will be a lot of repetition, but some posts are more detailed than others, and in a few cases there has been a useful discussion with other people that have used it.

As with any technology, it comes with a set of advantages and disadvantages.

This technology is sufficiently different that it shows how tedious discussions about the relative merits of processor X59 vs processor X60 are, when considering the bigger picture of the hardware+software+tooling and how that fits with high-level design concepts. That is emphasised by noting that XMOS have indicated future cores will be RISC-V compatible. I hope and expect that will change nothing of any significance, but that remains to be verified.

For understanding how to program the devices, see https://www.xmos.ai/download/XMOS-Programming-Guide-(documentation)(F).pdf It is remarkably easy to read, and introduces the key concepts gently, but not too gently. Basically it assumes you know how to program in C, and want to find out the ways in which this ecosystem is an improvement.

For understanding the i/o ports' features and capabilities, see https://www.xmos.ai/download/Introduction-to-XS1-ports(3).pdf

I will contend that even if you don't use xCORE, using the high-level concepts will improve the structure of your designs. After all, they have been around since the 1970s (C.A.R. Hoare's Communicating Sequential Processes), and some of the concepts keep reappearing in various processors and languages (e.g. Go, most recently).
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #41 on: July 11, 2023, 11:50:24 pm »
Thanks for your input, it's greatly appreciated.

The option of having additional 'front-end' MCUs, feeding into the larger, more capable MCU is actually quite an interesting one.

I have done it that way before with extra microcontrollers of the same type handling the display, keyboard, real time I/O, whatever.  Microcontrollers are so inexpensive that it can make sense, and if they are of the same type, then you already have the development system and tools.

It seems like an awful waste to replace a bunch of discrete logic with an entire microcontroller, but the economics favor it.

As always, the hardware is relatively easy and cheap. Hell's teeth, doesn't a very simple MCU cost less than a 555 timer+passives now?!

The software is yet to catch up. xC is one good starting point, but we need more.

As you know, the xCORE approach is to note that ALUs+registers are far faster than memory, so it makes sense to timeshare the processor's ALU hardware between different cores. Intel/AMD do the same (and call it SMT), and Sun did it in their Niagara processors.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14536
  • Country: fr
Re: MCU with FPGA vs. SoC FPGA
« Reply #42 on: July 12, 2023, 12:20:20 am »
For understanding how to program the devices, see https://www.xmos.ai/download/XMOS-Programming-Guide-(documentation)(F).pdf It is remarkably easy to read, and introduces the key concepts gently, but not too gently. Basically it assumes you know how to program in C, and want to find out the ways in which this ecosystem is an improvement.

For understanding the i/o ports' features and capabilities, see https://www.xmos.ai/download/Introduction-to-XS1-ports(3).pdf

I will contend that even if you don't use xCORE, using the high-level concepts will improve the structure of your designs. After all, they have been around since the 1970s (C.A.R. Hoare's Communicating Sequential Processes), and some of the concepts keep reappearing in various processors and languages (e.g. Go, most recently).

I concur.
And I agree the above read is very good indeed, even if you don't plan on using XMOS products.
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #43 on: July 12, 2023, 01:51:09 am »
I have actually considered this use case as well. I could set aside an entire 32-bit I/O port for these digital input signals, allowing a single read of the entire port (either in a software interrupt or via DMA transfer to a buffer, as you've suggested).

One clarification though: if the port read is driven by a timer interrupt, is the only way to guarantee that a given channel hasn't toggled between timer interrupts to select a timer frequency guaranteed to be faster than the input signals?
If you sample on a timer tick, then yes, your aperture is the timer period.
Typically, to reduce memory needed, you check for any change, and only save (Pin+Time) pairs on a change.

However, you do not have to use a timer tick. If your inputs are low-rate and you need better edge precision, you can use a pin-change / port-match interrupt that then captures the timer.

The granularity is the timer resolution, but there is a rate/blanking window, which is the time needed to service that interrupt.

Pin-Change interrupts can even capture very narrow pulses by inference.
If you get an interrupt, but the pins appear unchanged when captured, you know the impulse was shorter than the check delay.
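A rough sketch of that inference in C (vector, register and helper names are made up):

    #include <stdint.h>

    extern volatile uint32_t * const GPIO_IN;    /* hypothetical: port input register  */
    extern volatile uint32_t * const TIMER_CNT;  /* hypothetical: free-running counter */
    void record_edges(uint32_t changed, uint32_t state, uint32_t t);  /* hypothetical (pin+time) logger */

    static uint32_t last_seen;
    static uint32_t runt_count;  /* pulses too short to observe directly */

    void PINCHANGE_IRQHandler(void)  /* hypothetical pin-change vector */
    {
        uint32_t now  = *GPIO_IN;
        uint32_t diff = now ^ last_seen;

        if (diff == 0)
            runt_count++;  /* interrupt fired but levels match: the pulse came and
                              went within the interrupt latency - an inferred impulse */
        else
            record_edges(diff, now, *TIMER_CNT);

        last_seen = now;
    }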
 

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1569
  • Country: au
Re: MCU with FPGA vs. SoC FPGA
« Reply #44 on: July 12, 2023, 01:55:18 am »
It seems like an awful waste to replace a bunch of discrete logic with an entire microcontroller, but the economics favor it.

Yup, MCUs are so cheap and universal that even using one for a single peripheral or interrupt feature can make economic sense in a project.
 
The following users thanked this post: BrianHG

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #45 on: July 12, 2023, 08:50:00 am »
I have actually considered this use case as well. I could set aside an entire 32-bit I/O port for these digital input signals, allowing a single read of the entire port (either in a software interrupt or via DMA transfer to a buffer, as you've suggested).

One clarification though: if the port read is driven by a timer interrupt, is the only way to guarantee that a given channel hasn't toggled between timer interrupts to select a timer frequency guaranteed to be faster than the input signals?
If you sample on a timer tick, then yes, your aperture is the timer period.
Typically, to reduce memory needed, you check for any change, and only save (Pin+Time) pairs on a change.

However, you do not have to use a timer tick. If your inputs are low-rate and you need better edge precision, you can use a pin-change / port-match interrupt that then captures the timer.

The granularity is the timer resolution, but there is a rate/blanking window, which is the time needed to service that interrupt.

Pin-Change interrupts can even capture very narrow pulses by inference.
For that you have to define 'narrow'. Typically there will be some synchronisation logic that samples the I/O pin at some clock frequency, so in such cases the minimum pulse width will be governed by the sampling interval. The same goes for timer capture inputs; these are typically sampled as well.
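As an illustration with assumed numbers: at a 72 MHz sampling clock with a two-stage synchroniser, a pulse shorter than roughly two clock periods (about 28 ns) may never make it through, no matter how quickly the capture is serviced.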
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: MCU with FPGA vs. SoC FPGA
« Reply #46 on: July 12, 2023, 04:20:01 pm »
As always, the hardware is relatively easy and cheap. Hell's teeth, doesn't a very simple MCU cost less than a 555 timer+passives now?!

The software is yet to catch up. xC is one good starting point, but we need more.

But if programmable logic is being added to a microcontroller, then that requires "software" as well, and it will be developed in a completely different programming environment.  At least if the same series of microcontrollers is used, the software development environment can be the same for all of them.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #47 on: July 12, 2023, 04:38:47 pm »
As always, the hardware is relatively easy and cheap. Hell's teeth, doesn't a very simple MCU cost less than a 555 timer+passives now?!

The software is yet to catch up. xC is one good starting point, but we need more.

But if programmable logic is being added to a microcontroller, then that requires "software" as well, and it will be developed in a completely different programming environment.  At least if the same series of microcontrollers is used, the software development environment can be the same for all of them.

A lot will depend on the complexity of the PL. If it is an FPGA, then everything will be very different and will probably require different staff (until softies can think in fine-grained parallel :) ). If it is a limited PIO FSM type peripheral, then the toolset will still be different - but much simpler, with a smaller learning curve.

In either case, it highlights the potential benefits of an ecosystem that pushes from traditional MCU space towards traditional FPGA space.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: nctnico

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #48 on: July 12, 2023, 04:56:08 pm »
until softies can think in fine grained parallel :)
Actually FPGAs are not fine-grained parallel, they are just parallel, because an FPGA doesn't execute anything; it wires up and programs logic gates. The term "fine-grained parallelism" means instruction-level parallel execution (an execution unit can execute instructions from a different instruction stream on each cycle), which is what GPUs and a lot of the various "AI accelerators" do. And with the proliferation of the likes of CUDA, this is no longer out of the ordinary for software folks. But the term cannot be applied to an FPGA, because there are no "execution units" (unless the FPGA designer puts one in), nor does it "calculate" anything (again, unless designed to do so); it wires up actual logic gates and other hardware blocks.

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #49 on: July 12, 2023, 05:16:40 pm »
until softies can think in fine grained parallel :)
Actually FPGAs are not fine-grained parallel, they are just parallel, because an FPGA doesn't execute anything; it wires up and programs logic gates. The term "fine-grained parallelism" means instruction-level parallel execution (an execution unit can execute instructions from a different instruction stream on each cycle), which is what GPUs and a lot of the various "AI accelerators" do. And with the proliferation of the likes of CUDA, this is no longer out of the ordinary for software folks. But the term cannot be applied to an FPGA, because there are no "execution units" (unless the FPGA designer puts one in), nor does it "calculate" anything (again, unless designed to do so); it wires up actual logic gates and other hardware blocks.

There is no single definition of fine grained parallelism, especially since parallelism can be defined and expressed at many many levels and in many many ways.

But don't ignore the wood and concentrate on one clump of trees. That isn't enlightening.

For many people it is more enlightening to consider the relationship between a specification/algorithm, how it can be implemented in hardware and/or software, and the deep equivalence between hardware and software. More useful, too :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #50 on: July 12, 2023, 05:35:10 pm »
There is no single definition of fine grained parallelism, especially since parallelism can be defined and expressed at many many levels and in many many ways.
Of course there is. Fine grained parallelism is defined as I stated in the post above.

But don't ignore the wood and concentrate on one clump of trees. That isn't enlightening.
That difference is crucial to understand the fundamental difference between FPGA and microcontrollers.

For many people it is more enlightening to consider the relationship between a specification/algorithm, how it can be implemented in hardware and/or software, and the deep equivalence between hardware and software. More useful, too :)
There is absolutely NO equivalence between hardware and software; they differ in a very fundamental way, which is why stuff like XMOS is nothing but a crutch for software developers who can't do hardware. Anyone saying otherwise simply doesn't understand that difference, and that's why they embrace crutches like bit-banging, which is totally and fundamentally different from how things happen inside an FPGA (or any hardware, for that matter).
As someone who came into hardware from the software world, I can say it wasn't easy to fully understand this fundamental difference, but like everything, understanding comes with practice.

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #51 on: July 12, 2023, 08:54:47 pm »
But don't ignore the wood and concentrate on one clump of trees. That isn't enlightening.
That difference is crucial to understand the fundamental difference between FPGA and microcontrollers.

Of course there are differences; that's not the point.

The interesting point is the similarities in the specifications and designs, even if there are differences in the implementations.

Quote
For many people it is more enlightening to consider the relationship between a specification/algorithm, how it can be implemented in hardware and/or software, and the deep equivalence between hardware and software. More useful, too :)
There is absolutely NO equivalence between hardware and software; they differ in a very fundamental way, which is why stuff like XMOS is nothing but a crutch for software developers who can't do hardware. Anyone saying otherwise simply doesn't understand that difference, and that's why they embrace crutches like bit-banging, which is totally and fundamentally different from how things happen inside an FPGA (or any hardware, for that matter).
As someone who came into hardware from the software world, I can say it wasn't easy to fully understand this fundamental difference, but like everything, understanding comes with practice.

I don't think you know as much as you think you know.

Please define your boundary between hardware and software. Start with something we all know and use: a system with an Intel x86 processor and some memory. Where does software stop and the hardware start?

Background: some of the things I've designed and implemented include low-noise analogue electronics (including "DSP" circuits without ADCs/DACs :) ), semi-custom digital ICs, designing and implementing an application specific processor, through to life-critical software/hardware systems, cellular system RF modelling/instrumentation/measurement, and telecom server systems. I have a solid feel for where the boundaries aren't.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #52 on: July 12, 2023, 09:16:37 pm »
I have to agree with tggzzz here. Where is the difference between a programmable statemachine working according to instructions from a flash memory and a bunch of programmable logic elements + flipflops working according to programmable connections read from a flash memory?

The only real difference is that the programmable statemachine (AKA CPU) offers a much more constrained interface and thus is easier to manage than a collection of logic that needs to be pieced together one by one. In the end only the level of abstraction is different. And more interestingly: a lot of effort has been put into 'defining' (for lack of a better word I can come up with right now) programmable logic functions by using high-level programming languages.

Or more practically: if I have a bunch of software engineers and a problem that would lend itself to being solved in software somehow (even if it comes down to bitbanging signals using parallel processors), I'd go for a software solution instead of trying to turn software engineers into programmable logic designers.
« Last Edit: July 12, 2023, 09:22:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #53 on: July 12, 2023, 09:30:57 pm »
I have to agree with tggzzz here. Where is the difference between a programmable statemachine working according to instructions from a flash memory and a bunch of programmable logic elements + flipflops working according to programmable connections read from a flash memory?

The only real difference is that the programmable statemachine (AKA CPU) offers a much more constrained interface and thus is easier to manage than a collection of logic that needs to be pieced together one by one. In the end only the level of abstraction is different. And more interestingly: a lot of effort has been put into 'defining' (for lack of a better word I can come up with right now) programmable logic functions by using high-level programming languages.

Or more practically: if I have a bunch of software engineers and a problem that would lend itself to being solved in software somehow, I'd go for a software solution instead of trying to turn software engineers into programmable logic designers.

Yup :) (Plus it is much deeper and more pervasive than that good example, but we'll see what ASMI has to say :) )

Hardware engineers can make the transition to software more easily than the reverse.

Parallel thinking is one example.

Another example is to grok the triumphant recent software concept of "Inversion of Control", and understand the hardware equivalent. When you finally work out the key concepts behind IoC and why they really are useful, you realise IoC is just a fancy name for creating components that can be included in a hierarchical composition. You know, like a memory chip and a counter/timer can be wired together on a PCB, multiple PCBs and a PSU can be wired together in a crate, etc etc.

Traditional software engineers are hung up thinking in terms of composition of algorithms and composition of data. Systems engineers, not so much.
« Last Edit: July 12, 2023, 09:35:44 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #54 on: July 12, 2023, 10:37:25 pm »
I don't think you know as much as you think you know.
Don't worry, I know more than enough.

Please define your boundary between hardware and software. Start with something we all know and use: a system with an Intel x86 processor and some memory. Where does software stop and the hardware start?
Hardware works the way it works because of the way it is interconnected; software always uses the same circuit, so the hardware doesn't change from one operation to another.

Background: some of the things I've designed and implemented include low-noise analogue electronics (including "DSP" circuits without ADCs/DACs :) ), semi-custom digital ICs, designing and implementing an application specific processor, through to life-critical software/hardware systems, cellular system RF modelling/instrumentation/measurement, and telecom server systems. I have a solid feel for where the boundaries aren't.
And yet apparently you are still missing the crucial difference.

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #55 on: July 12, 2023, 10:38:14 pm »
I have to agree with tggzzz here. Where is the difference between a programmable statemachine working according to instructions from a flash memory and a bunch of programmable logic elements + flipflops working according to programmable connections read from a flash memory?
The difference begins and ends with the circuit.

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #56 on: July 12, 2023, 10:47:45 pm »
I have to agree with tggzzz here. Where is the difference between a programmable statemachine working according to instructions from a flash memory and a bunch of programmable logic elements + flipflops working according to programmable connections read from a flash memory?
The difference begins and ends with the circuit.
You can always try and see differences and then get totally worked up about how things are not equal. Try look at it a different way: a screw and a nail are completely different and yet they perform the same function: keep materials together. Which one is the best to use is highly debatable in many cases.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #57 on: July 12, 2023, 11:09:53 pm »
You can always try and see differences and then get totally worked up about how things are not equal.
That difference has important implications for FPGA design, and it hinders a lot of folks coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.

Try look at it a different way: a screw and a nail are completely different and yet they perform the same function: keep materials together. Which one is the best to use is highly debatable in many cases.
Mechanical folks might have a problem with that statement :-DD

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #58 on: July 12, 2023, 11:12:47 pm »
I don't think you know as much as you think you know.
Don't worry, I know more than enough.

Good. Use that to describe what is and isn't hardware in an x86 processor.

That should be easy enough.

Quote
Please define your boundary between hardware and software. Start with something we all know and use: a system with an Intel x86 processor and some memory. Where does software stop and the hardware start?
Hardware works the way it works because of the way it is interconnected; software always uses the same circuit, so the hardware doesn't change from one operation to another.

I don't understand what you are attempting to say. However, looking at "the hardware doesn't change from one operation to another".

If that's the case then some FPGAs aren't hardware. Do you really mean that?

Example: the Xilinx Partial Reconfiguration. "Partial reconfiguration is a technique that allows replacing the logic of some parts of the FPGA, while its other parts are working normally. This consists of feeding the FPGA with a bitstream, exactly like the initial bitstream that programs its functionality on powerup. However the bitstream for Partial Reconfiguration doesn't cause the FPGA to halt. Instead, it works on specific logic elements, and updates the memory cells that control their behavior. It's a hot replacement of specific logic blocks." https://www.01signal.com/vendor-specific/xilinx/partial-reconfiguration/part1-introduction/

That, of course, is exactly equivalent to what happens when an operating system loads and runs an application program.

Quote
Background: some of the things I've designed and implemented include low-noise analogue electronics (including "DSP" circuits without ADCs/DACs :) ), semi-custom digital ICs, designing and implementing an application specific processor, through to life-critical software/hardware systems, cellular system RF modelling/instrumentation/measurement, and telecom server systems. I have a solid feel for where the boundaries aren't.
And yet apparently you are still missing the crucial difference.

There are many differences, just as there are many differences between design strategies, differences between computer languages - and screws and nails.

You are missing the crucial similarities.
You are overestimating the differences.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #59 on: July 12, 2023, 11:23:55 pm »
You can always try and see differences and then get totally worked up about how things are not equal.
That difference has important implications for FPGA design, and it hinders a lot of folks coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.

The same observation can be made about different software programming paradigms, and even languages.

Fundamentally language syntax is trivial; language semantics is far more significant.

Simple example: "7 - 7 - 7" gives different results in different languages. In one language, "7 - 7 - 7" is numerically equal to "7 - 7 - 7 - 7 - 7".
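Concretely: with left-associative subtraction (C, Python), "7 - 7 - 7" parses as "(7 - 7) - 7" = -7, whereas with strict right-to-left evaluation (APL), it parses as "7 - (7 - 7)" = 7 - and "7 - 7 - 7 - 7 - 7" likewise reduces to 7.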

Other examples might include the radically different semantics of FSM languages, logic programming languages, constraint-satisfaction languages, etc.

Quote
Try look at it a different way: a screw and a nail are completely different and yet they perform the same function: keep materials together. Which one is the best to use is highly debatable in many cases.
Mechanical folks might have a problem with that stament :-DD

The competent ones would understand nctnico's point.

Trivial example: frequently screws are inserted with a hammer - and then screwed up for the last quarter turn.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #60 on: July 13, 2023, 01:55:55 am »
Good. Use that to describe what is and isn't hardware in an x86 processor.

That should be easy enough.
The circuit is hardware. Everything else is software. And yes, I'm aware of micro-ops and all that jazz. That's still software, not hardware.

I don't understand what you are attempting to say. However, looking at "hardware don't change from one operation to another".

If that's the case then some FPGAs aren't hardware. Do you really mean that?

Example: the Xilinx Partial Reconfiguration. "Partial reconfiguration is a technique that allows replacing the logic of some parts of the FPGA, while its other parts are working normally. This consists of feeding the FPGA with a bitstream, exactly like the initial bitstream that programs its functionality on powerup. However the bitstream for Partial Reconfiguration doesn't cause the FPGA to halt. Instead, it works on specific logic elements, and updates the memory cells that control their behavior. It's a hot replacement of specific logic blocks." https://www.01signal.com/vendor-specific/xilinx/partial-reconfiguration/part1-introduction/
When PR is occurring the circuit is not functional, so at that stage it's neither. But once it's completed, it behaves like a regular circuit.

That, of course, is exactly equivalent to what happens when an operating system loads and runs an application program.
Absolutely NOT. An OS task switch doesn't change the hardware. That is an example of coarse-grained parallelism, where command streams are chopped into chunks and each chunk is executed sequentially; if the chunks are made small enough and observed over long enough intervals, it appears as if the commands are being executed in parallel.

There are many differences, just as there are many differences between design strategies, differences between computer languages - and screws and nails.

You are missing the crucial similarities.
You are overestimating the differences.
Similarities are superficial, while differences are fundamental.

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #61 on: July 13, 2023, 01:57:03 am »
Trivial example: frequently screws are inserted with a hammer - and then screwed up for the last quarter turn.
Now I see why you love XMOS so much - driving screws with a hammer also seems to be your thing :palm:
« Last Edit: July 13, 2023, 02:13:12 pm by asmi »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #62 on: July 13, 2023, 10:10:12 am »
Good. Use that to describe what is and isn't hardware in an x86 processor.

That should be easy enough.
The circuit is hardware. Everything else is software. And yes, I'm aware of micro-ops and all that jazz. That's still software, not hardware.

So an Intel processor is software? Most people would disagree with you on that point.

Quote
I don't understand what you are attempting to say. However, looking at "hardware don't change from one operation to another".

If that's the case then some FPGAs aren't hardware. Do you really mean that?

Example: the Xilinx Partial Reconfiguration. "Partial reconfiguration is a technique that allows replacing the logic of some parts of the FPGA, while its other parts are working normally. This consists of feeding the FPGA with a bitstream, exactly like the initial bitstream that programs its functionality on powerup. However the bitstream for Partial Reconfiguration doesn't cause the FPGA to halt. Instead, it works on specific logic elements, and updates the memory cells that control their behavior. It's a hot replacement of specific logic blocks." https://www.01signal.com/vendor-specific/xilinx/partial-reconfiguration/part1-introduction/
When PR is occurring the circuit is not functional, so at that stage it's neither. But once it's completed, it behaves like a regular circuit.

That, of course, is exactly equivalent to what happens when an operating system loads and runs an application program.
Absolutely NOT. An OS task switch doesn't change the hardware. That is an example of coarse-grained parallelism, where command streams are chopped into chunks and each chunk is executed sequentially; if the chunks are made small enough and observed over long enough intervals, it appears as if the commands are being executed in parallel.

The FPGA continues to operate during partial reconfiguration.

Task switching is not the point; it occurs after the program has been loaded and is running.

CP/M and MS-DOS load and run a single application at a time; there is no task switching per se.

Quote
There are many differences, just as there are many differences between design strategies, differences between computer languages - and screws and nails.

You are missing the crucial similarities.
You are overestimating the differences.
Similarities are superficial, while differences are fundamental.

You've got that the wrong way round.

The fundamental similarities are easily understood when identical functionality is implemented using either hardware or software or a combination of the two. Frequently the fundamental behaviour remains fixed while the exact partitioning changes over time as the superficial differences change[1], but also according to different constraints[2].

[1] during development and/or product lifetime
[2] especially cost and size and performance
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #63 on: July 13, 2023, 10:13:23 am »
You can always try and see differences and then get totally worked up about how things are not equal.
That difference has important implications for FPGA design, and it hinders a lot of folks coming into the FPGA world from the software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.

The same observation can be made about different software programming paradigms, and even languages.

Fundamentally language syntax is trivial; language semantics is far more significant.

Simple example: "7 - 7 - 7" gives different results in different languages. In one language, "7 - 7 - 7" is numerically equal to "7 - 7 - 7 - 7 - 7".

Other examples might include the radically different semantics of FSM languages, logic programming languages, constraint-satisfaction languages, etc.

Since you, amsi, have chosen to omit commenting on that, do you realise your contention has limited validity in the context of trying to draw a single unambiguous boundary between hardware and software?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #64 on: July 13, 2023, 02:45:55 pm »
So an Intel processor is software? Most people would disagree with you on that point.
Don't be so dense. The hardware of the CPU is hardware; the firmware (microcode) is software.

The FPGA continues to operate during partial reconfiguration.
But the part being reconfigured does not.

Task switching is not the point; it occurs after the program has been loaded and is running.
Not sure what this has to do with anything.

CP/M and MS-DOS load and run a single application at a time; there is no task switching per se.
But it can "multitask" if the application is coded as such, and there was also the concept of "resident" code (I'm old enough to remember that).

You've got that the wrong way round.
No, you've got this the wrong way around.

The fundamental similarities are easily understood when identical functionality is implemented using either hardware or software or a combination of the two. Frequently the fundamental behaviour remains fixed while the exact partitioning changes over time as the superficial differences change[1], but also according to different constraints[2].

[1] during development and/or product lifetime
[2] especially cost and size and performance
Software and hardware implementations are never identical; there are always fundamental differences, even if they sometimes appear subtle to someone who doesn't understand hardware. For example, a logic gate reacts immediately (ignoring propagation delay) to a change of an input signal, while software can only "react" at fixed intervals of time. That is a fundamental difference.

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #65 on: July 13, 2023, 03:23:47 pm »
You are choosing to snip so much context that the points I make are being obscured in favour of the points you would like to make.

Not going to fall for that debating technique!

So an Intel processor is software? Most people would disagree with you on that point.
Don't be so dense. The hardware of the CPU is hardware; the firmware (microcode) is software.

So which are the electrons stored in a transistor's gate?

<omitted points where the context was removed>

Quote
The fundamental similarities are easily understood when identical functionality is implemented using either hardware or software or a combination of the two. Frequently the fundamental behaviour remains fixed while the exact partitioning changes over time as the superficial differences change[1], but also according to different constraints[2].

[1] during development and/or product lifetime
[2] especially cost and size and performance
Software and hardware implementations are never identical; there are always fundamental differences, even if they sometimes appear subtle to someone who doesn't understand hardware.

Insufficient distinction.

Take some functionality and implement it using radically different software paradigms. The resulting implementations will not be identical and will have fundamental differences.

Hardware is just another step on the continuum between formal mathematical expressions and particles and waves.

Quote
For example, a logic gate reacts immediately (ignoring propagation delay) to a change of an input signal, while software can only "react" at fixed intervals of time. That is a fundamental difference.

Not true, of course - for several reasons related to the intractability of asynchronous behaviour of systems at various conceptual levels.

Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, then take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.
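(The classic textbook case, for the record: Y = A·B + /A·C can glitch when B = C = 1 and A toggles, because the A·B term drops before the /A·C term picks up; the cure is the redundant bridging term B·C, i.e. Y = A·B + /A·C + B·C, which holds Y high across the transition.)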

There are deep reasons why distributed software systems have similar problems. If you don't understand why, then take time to understand Leslie Lamport's seminal works and their consequences.
« Last Edit: July 13, 2023, 03:26:13 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2733
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #66 on: July 13, 2023, 05:08:27 pm »
You are choosing to snip so much context that the points I make are being obscured in favour of the points you would like to make.

Not going to fall for that debating technique!
I've skipped irrelevant details to focus on what's important such that readers won't have to suffer through your irrelevant brain dumps.

So which are the electrons stored in a transistor's gate?
I've answered your question with the best clarity I can manage. If you can't understand this, this is on you.

Insufficient distinction.

Take some functionality and implement it using radically different software paradigms. The resulting implementations will not be identical and will have fundamental differences.

Hardware is just another step on the continuum between formal mathematical expressions and particles and waves.
There is no continuum between hardware and software. They might solve the same ultimate problem, but the way they do it is fundamentally different.

Not true, of course - for several reasons related to the intractability of asynchronous behaviour of systems at various conceptual levels.
Of course it is true. Anybody can see it by taking a logic gate, feeding in a signal and observing the output, and then comparing that to a software implementation of the same logic.

Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, then take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.
It becomes more and more clear to me that you simply don't understand what hardware is, hence your constant pitching of XMOS. You are too stuck in the software world to see that there is a whole other world, which is what actually provides you with something to run your software on.

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #67 on: July 13, 2023, 05:19:47 pm »
Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, then take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.
They are not intractable. They are just difficult. There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost/performance benefits over more conventional designs. What bothers many people about them is that you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device, it shouldn't be an issue.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #68 on: July 13, 2023, 07:06:39 pm »
You can always try and see differences and then get totally worked up about how things are not equal.
That difference has important implications for FPGA designs which hinders a lot of folks who are coming into FPGA world from software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.
IMHO your view is way too narrow and really focussed on the edge of an FPGA where signals go in & out. But that is just a very tiny part of doing digital logic design. Back when I was doing my EE studies I also took several classes on digital IC logic design. Compared to software development, that was/is highly abstract. It didn't even involve controlling anything real (unlike software). Just running simulation after simulation and doing analysis of test-vector coverage. And this is true for a lot of logic design work. When I design a new part for a complicated piece of logic in an FPGA, I start out with simulations, and once the logic does what I need it to do, I add it to the rest of the design. However, I really can't see how this workflow is any different from developing a new software module in C (which also involves providing stimuli and checking output, aiming to maximise test coverage).

So yes, HDLs look like software development tools because they are software development tools.

The biggest hurdle for me when starting digital logic design and HDLs was the fact that in an HDL all statements 'execute' at the same time, and any form of sequencing must be implemented through a state machine. This is likely the case for all software people entering digital logic design.

Interaction with hardware is a different subject where software people have trouble, because they don't know tools like an oscilloscope or logic analyser. But the hurdle they need to get over is the same whether they are using an FPGA, a microcontroller or an embedded system: connecting the dots between something they define in software and something that happens in hardware, and vice versa. And then learning to understand the criteria surrounding the border between hardware and software (with timing as a recurring topic).
« Last Edit: July 13, 2023, 07:26:22 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #69 on: July 13, 2023, 07:22:18 pm »
You are choosing to snip so much context that the points I make are being obscured in favour of the points you would like to make.

Not going to fall for that debating technique!
I've skipped irrelevant details to focus on what's important such that readers won't have to suffer through your irrelevant brain dumps.

You think they are irrelevant, but that's because you don't have sufficient experience in analogue and digital systems.

We know that since you have stated that you have only just started moving into hardware.

Quote
It becomes more and more clear to me that you simply don't understand what hardware is, hence your constant pitching of XMOS. You are too stuck in the software world to see that there is a whole other world, which is what actually provides you with something to run your software on.

I first designed and implemented hardware systems over 60 years ago; some included logic gates made from individual transistors and resistors.
I designed and implemented an Altair 8080 class computer in 1976 (it had 128 bytes of RAM, all I could afford).
I've been designing and implementing hardware and software systems professionally for 45 years.

For clarity, how many of these technologies have you used when designing and implementing systems:
  • low-noise analogue electronics controlled by logic gates
  • semi-custom digital ICs, where operation cannot be changed after manufacture - a mistake cost 3 months' delay and a year's salary
  • programmable logic based on PLAs and GALs
  • programmable logic based on FPGAs
  • application specific processor to discriminate between differing pulse streams (to replace a pure logic-gate implementation)
  • life support hardware+software systems (and many similar systems)
  • RF modelling, then instrumentation and processing of live cellular systems
  • and I'll omit the pure software, so as not to confuse you

So I do have a clue.

What similar evidence can you cite to demonstrate that you have a clue?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #70 on: July 13, 2023, 07:37:28 pm »
Almost all practical hardware reacts only at fixed intervals of time, due to the intractability of creating designs where the ordering of events is undefined. If you don't understand why, then take time to understand when and why it is necessary to insert "bridging terms" into logic implementations expressed in the form of Karnaugh maps.
They are not intractable. They are just difficult.

I'm not convinced it is a good use of our time to discuss the boundary between "very difficult" and "intractable" :)

Asynchronous logic has so many potential advantages (especially high speed and low power) that people have come up with many, many design concepts [1] over the decades. They have all come to nothing because they are so difficult to design and test, and don't offer sufficient advantages over conventional clocked synchronous designs.

Perhaps it is worth revisiting the technology, now that device geometry has hit the brick wall and leakage currents are so significant. Similar considerations are rubbing people's noses in the problems with C, and leading to languages like xC, Rust and Go.

[1] one of the more interesting was Steve Furber's group's AMULET1, an asynchronous ARM processor 30 years ago
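
As an aside, the behaviour of the Muller C-element, the storage primitive most of those asynchronous design styles are built on, fits in a few lines. A minimal Python model (my own illustration, not taken from any particular design):

Code: [Select]
# Muller C-element: the output follows the inputs when they agree,
# and holds its previous value when they disagree.
def c_element(a: int, b: int, prev: int) -> int:
    if a == b:
        return a      # inputs agree: output follows them
    return prev       # inputs disagree: output holds state

# Walk it through a handshake-style input sequence.
state = 0
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    state = c_element(a, b, state)
    print(f"a={a} b={b} -> out={state}")
# The output rises only once both inputs are 1, and falls only once
# both are 0: the "wait for all events" behaviour that asynchronous
# pipelines are built from.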

Quote
There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost performance benefits over more conventional designs. What bothers many people about them is you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device it shouldn't be an issue.

Specified worst case is an "interesting" concept in asynchronous systems, where metastability rears its ugly head.

Not knowing the timing is a problem during manufacturing test and product certification.

Do you have a reference to the DSP you mention? I'm curious to find out more.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #71 on: July 13, 2023, 07:50:57 pm »
You can always try and see differences and then get totally worked up about how things are not equal.
That difference has important implications for FPGA designs which hinders a lot of folks who are coming into FPGA world from software world. And the fact that HDLs visually look similar to software languages doesn't help to bridge this gap.
IMHO your view is way too narrow and really focussed on the edge of an FPGA where signals go in & out. But that is just a very tiny part of doing digital logic design.

Back when I was doing my EE study I also took several classes on digital IC logic design. Compared to software development, that was/is highly abstract. It didn't even involve controlling anything real (unlike software); just running simulation after simulation and doing analysis of test vector coverage. And this is true for a lot of logic design work.

When I design a new part for a complicated piece of logic in an FPGA, I start out with simulations, and once the logic does what I need it to do, I add it somewhere to the rest of the design. However, I really can't see how this workflow is any different from developing a new software module in C (which also involves providing stimuli and checking output aimed at maximising test coverage).

So yes, HDLs look like software development tools because they are software development tools.

Just so.

There's an additional step that can be added to "help" ASMI.
His contention appears to be that functions defined by doped silicon plus deposited metal (e.g. a 7400), which cannot be changed after manufacture, are hardware (or "circuits" as he calls them).
His contention also appears to be that functions defined by HDLs/HLLs, stored as charges in FETs' gates or in flip flops (e.g. memory or FPGA), which can be changed after manufacture, are software.

Semi-custom ICs, which I implemented 40 (gulp) years ago, had their function defined in HiLo (a precursor of Verilog and VHDL) and implemented in metallisation. No changes were possible after manufacturing; each manufacturing iteration cost 3 months (at best) and a year's salary.

I wonder whether ASMI would class those ICs as software or "circuits".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #72 on: July 13, 2023, 07:57:19 pm »
They are not intractable. They are just difficult. There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost performance benefits over more conventional designs. What bothers many people about them is you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device it shouldn't be an issue.

"Asynchronous" means different things to different people.  I learned asynchronous state machine design, which used no FFs.  In essence, the feedback paths, created latches in the logic, but still, there was no clock or enable. 

I've also studied asynchronous processors, which have no free running master clock.  They do have FFs with clock inputs, but the clocks are generated locally, and are stopped when there is no data to process.  The clock is often generated with variable delay, corresponding to the timing of the particular circuit processing the data.  The Green Arrays GA144 has 144 such processors running at 700 MIPS peak in each processor!

The speed advantage of the async processor is in being able to take advantage of portions of the design that are faster than the remainder of the logic.  In a properly clocked design, the entire circuit runs at the speed of the slowest circuit. 

It also achieves speed gains from PVT (Process, Voltage and Temperature).  However, these gains can not be counted on, other than perhaps running at higher or lower voltage to tradeoff speed and power. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #73 on: July 13, 2023, 08:26:48 pm »
"Asynchronous" means different things to different people. 

Just so.

In addition to the cases you mentioned, "asynchronous" has an analogous meaning in software systems, especially distributed systems.

Programming those is, in practice, difficult and error prone, largely - I believe - because softies can't think of things occurring in parallel in systems which cannot have a unique concept of time. (For those that doubt that, I refer you to Leslie Lamport's seminal and well-known works!)

One widely used and successful set of design strategies is used in the telecom system specifications and LAN specifications. There the specification and implementation are in the form of multiple independent FSMs, each with their own internal state, communicating via events. Often the spec is written in the SDL language. Sometimes the events/states are implemented in logic gates, sometimes in packets, CPUs and memory.

For ASMI's benefit...

In token ring LANs (and probably others) sequences of voltage transitions at the PHY level are pattern matched by an FSM implemented in a few gates and interpreted as a packet representing the token.

The arrival of such a packet causes a logic signal to change, which in turn changes the state of some FSMs implemented in hardware and some FSMs implemented in software.

Other transition sequences are interpreted as the beginning and end of information packets, and the information is passed from the PHY level to other FSMs implemented in hardware or software at various levels of the networking stack.

The choice between hardware and software implementation is somewhat arbitrary, and different manufacturers make different choices.
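
To make the FSM-plus-events pattern concrete, here is a toy Python sketch in that SDL style. The states, events and the token-ring flavour are invented for illustration; they don't come from any real LAN spec:

Code: [Select]
from collections import deque

fsms = {}

class FSM:
    """An independent FSM with private state, driven purely by events."""
    def __init__(self, name, table, state):
        self.table, self.state, self.inbox = table, state, deque()
        fsms[name] = self

    def step(self):
        if not self.inbox:
            return False
        ev = self.inbox.popleft()
        self.state, action = self.table[(self.state, ev)]
        if action:                       # (target_fsm, event) to emit
            target, out_ev = action
            fsms[target].inbox.append(out_ev)
        return True

# PHY FSM: a recognised line pattern becomes a TOKEN event for the MAC;
# a FRAME request from the MAC is "transmitted" and acknowledged with DONE.
FSM("phy", {("idle", "pattern"): ("idle", ("mac", "TOKEN")),
            ("idle", "FRAME"):   ("idle", ("mac", "DONE"))}, "idle")
# MAC FSM: waits for the token, sends one frame, goes back to waiting.
FSM("mac", {("waiting", "TOKEN"): ("sending", ("phy", "FRAME")),
            ("sending", "DONE"):  ("waiting", None)}, "waiting")

fsms["phy"].inbox.append("pattern")          # stimulus from the wire
while any(f.step() for f in fsms.values()):  # run until no events remain
    print({n: f.state for n, f in fsms.items()})

Whether each FSM ends up in gates or in code is, as I said, a packaging decision; the specification is identical either way.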
« Last Edit: July 13, 2023, 08:38:27 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4452
  • Country: dk
Re: MCU with FPGA vs. SoC FPGA
« Reply #74 on: July 13, 2023, 08:37:15 pm »
They are not intractable. They are just difficult. There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost performance benefits over more conventional designs. What bothers many people about them is you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device it shouldn't be an issue.

"Asynchronous" means different things to different people.  I learned asynchronous state machine design, which used no FFs.  In essence, the feedback paths, created latches in the logic, but still, there was no clock or enable. 

I've also studied asynchronous processors, which have no free running master clock.  They do have FFs with clock inputs, but the clocks are generated locally, and are stopped when there is no data to process.  The clock is often generated with variable delay, corresponding to the timing of the particular circuit processing the data.  The Green Arrays GA144 has 144 such processors running at 700 MIPS peak in each processor!

The speed advantage of the async processor is in being able to take advantage of portions of the design that are faster than the remainder of the logic.  In a properly clocked design, the entire circuit runs at the speed of the slowest circuit. 

It also achieves speed gains from PVT (Process, Voltage and Temperature).  However, these gains can not be counted on, other than perhaps running at higher or lower voltage to tradeoff speed and power.

and it is much harder to analyse, because it is no longer just a matter of checking that the path from one FF to another is shorter than the clock period
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #75 on: July 13, 2023, 08:46:06 pm »
They are not intractable. They are just difficult. There are, for example, fully asynchronous DSPs. I'm not clear whether they show any cost performance benefits over more conventional designs. What bothers many people about them is you just don't know how much latency there will be between input and output, as it varies from sample to sample of the device. As long as you design your system for the specified worst case of the device it shouldn't be an issue.

"Asynchronous" means different things to different people.  I learned asynchronous state machine design, which used no FFs.  In essence, the feedback paths, created latches in the logic, but still, there was no clock or enable. 

I've also studied asynchronous processors, which have no free running master clock.  They do have FFs with clock inputs, but the clocks are generated locally, and are stopped when there is no data to process.  The clock is often generated with variable delay, corresponding to the timing of the particular circuit processing the data.  The Green Arrays GA144 has 144 such processors running at 700 MIPS peak in each processor!

The speed advantage of the async processor is in being able to take advantage of portions of the design that are faster than the remainder of the logic.  In a properly clocked design, the entire circuit runs at the speed of the slowest circuit. 

It also achieves speed gains from PVT (Process, Voltage and Temperature).  However, these gains can not be counted on, other than perhaps running at higher or lower voltage to tradeoff speed and power.

and it is much harder to analyse, because it is no longer just a matter of checking that the path from one FF to another is shorter than the clock period

and last time I looked, the precondition to analysis was that only one input changes at a time. Metastability? What's that?

In addition, while the event transition from one state to another may be well defined, the path it takes to get from one state to the next is not defined or definable. That can lead through illegal intermediate states, which can cause output glitches or even FSM lockup. Hence the need for bridging terms in asynchronous logic specified in Karnaugh maps.

Try getting someone used to HDL simulators and synchronous logic to recognise that! It is an uphill struggle, especially since they don't want to think of such possibilities.
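
For anyone who would rather see it than take my word for it, here is a unit-delay Python model of the textbook static-1 hazard in y = a·b + ¬a·c (an illustration, not any specific device), with and without the bridging (consensus) term b·c:

Code: [Select]
# Unit-delay gate model: each gate's output updates from the previous
# step's input values, so the inverter path runs one delay behind.
def simulate(a_wave, bridge=False):
    b = c = 1
    not_a, and1, and2 = 0, 1, 0          # steady state for a=1
    and3 = 1 if bridge else 0
    y = 1
    trace = []
    for a in a_wave:
        not_a, and1, and2, and3, y = (
            1 - a,                       # ~a
            a & b,                       # a & b
            not_a & c,                   # ~a & c (uses last step's ~a)
            (b & c) if bridge else 0,    # the bridging/consensus term
            and1 | and2 | and3,          # OR of last step's products
        )
        trace.append(y)
    return trace

a_wave = [1, 1, 1, 0, 0, 0, 0]
print("without bridge:", simulate(a_wave))        # y dips to 0: a glitch
print("with bridge:   ", simulate(a_wave, True))  # y stays at 1

With b=c=1 the output should never leave 1 while a falls, yet without the redundant term both product terms are momentarily 0. That one-gate-delay dip is exactly what the extra Karnaugh-map term is there to cover.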
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #76 on: July 13, 2023, 08:46:45 pm »
Do you have a reference to the DSP you mention? I'm curious to find out more.
Octasic claim to use asynchronous processing in some of their DSPs. I know another in more detail that I can't talk about.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #77 on: July 13, 2023, 08:47:18 pm »
Programming those is, in practice, difficult and error prone, largely - I believe - because softies can't think of things occurring in parallel in systems which cannot have a unique concept of time. (For those that doubt that, I refer you to Leslie Lamport's seminal and well-known works!)
'Can't think' is a bit too strong. IMHO the main problem is that parallel execution is not getting much attention during educational programs and hence people trip over something they are not familiar with. When the first multi-threading Intel processors came to market, there was quite a bit of software that couldn't deal with being executed on two CPUs in parallel. The software worked because of luck on a single core machine but fell apart on a multi core machine.
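
The classic demonstration is an unsynchronised shared counter. A Python sketch (an invented example; depending on the interpreter version it may take a few runs to misbehave):

Code: [Select]
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1      # read-modify-write: not atomic

threads = [threading.Thread(target=bump, args=(100_000,))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

# Frequently prints less than 200000: increments from the two
# threads interleave between the read and the write-back.
print(counter)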

In addition, while the event transition from one state to another may be well defined, the path it takes to get from one state to the next is not defined or definable. That can lead through illegal intermediate states, which can cause output glitches or even FSM lockup. Hence the need for bridging terms in asynchronous logic specified in Karnaugh maps.

Try getting someone used to HDL simulators and synchronous logic to recognise that! It is an uphill struggle, especially since they don't want to think of such possibilities.
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.
« Last Edit: July 13, 2023, 08:56:11 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #78 on: July 13, 2023, 09:11:35 pm »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #79 on: July 13, 2023, 09:35:16 pm »
A quick Google search seems to indicate Xilinx has a patent on a LUT structure that does not glitch when 1 input changes. However, with multiple inputs changing, there will be different delays and thus several glitches before a LUT reaches a stable state. Which is what you'd expect from a memory (which a LUT is), because the address decoder has different delays per select line.
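
A toy model makes the point. This is just the memory view of a LUT (not Xilinx's actual circuit): if two address bits flip but one path is slower, the output briefly reflects an address that is neither the old nor the new one:

Code: [Select]
# A 2-input LUT holding XNOR: the output should stay 0 for 01 -> 10.
lut = {0b00: 1, 0b01: 0, 0b10: 0, 0b11: 1}

def read(a1, a0):
    return lut[(a1 << 1) | a0]

print(read(0, 1))  # before the change: 0
print(read(1, 1))  # bit 1 arrives first, transient address 11 -> 1 (glitch)
print(read(1, 0))  # after both bits settle: 0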
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #80 on: July 13, 2023, 10:03:10 pm »
Programming those is, in practice, difficult and error prone, largely - I believe - because softies can't think of things occurring in parallel in systems which cannot have a unique concept of time. (For those that doubt that, I refer you to Leslie Lamport's seminal and well-known works!)
'Can't think' is a bit too strong. IMHO the main problem is that parallel execution is not getting much attention during educational programs and hence people trip over something they are not familiar with. When the first multi-threading Intel processors came to market, there was quite a bit of software that couldn't deal with being executed on two CPUs in parallel. The software worked because of luck on a single core machine but fell apart on a multi core machine.
 

Agreed in all respects

Quote
In addition, while the event transition from one state to another may be well defined, the path it takes to get from one state to the next is not defined or definable. That can lead through illegal intermediate states, which can cause output glitches or even FSM lockup. Hence the need for bridging terms in asynchronous logic specified in Karnaugh maps.

Try getting someone used to HDL simulators and synchronous logic to recognise that! It is an uphill struggle, especially since they don't want to think of such possibilities.
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Complex, no, and it would be perverse to try. There are potentially some strange niche cases that might benefit, but that doesn't change the point.

It might be possible to implement some asynchronous logic by devious use of the toolset. I don't know how the timing parameters and convergence would interact with the logic.

Of course there are some PL blocks that internally have asynchronous behaviour (e.g. Clock domain crossing blocks), but within a clock domain the tools and implementation are synchronous.
« Last Edit: July 13, 2023, 10:08:05 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #81 on: July 13, 2023, 10:22:19 pm »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #82 on: July 14, 2023, 03:09:29 am »
A quick Google search seems to indicate Xilinx has a patent on a LUT structure that does not glitch when 1 input changes. However, with multiple inputs changing, there will be different delays and thus several glitches before a LUT reaches a stable state. Which is what you'd expect from a memory (which a LUT is), because the address decoder has different delays per select line.

What was the date on that patent?   Patents are only good for 21 years or so. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #83 on: July 14, 2023, 03:12:10 am »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.

Not sure where you are going with this.  I was discussing asynch logic state machines which have no FFs.   Why are you talking about this in combination with metastability, which is an issue with clocked FFs?  Did I miss something about metastability?
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #84 on: July 14, 2023, 07:37:06 am »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.

Not sure where you are going with this.  I was discussing asynch logic state machines which have no FFs.   Why are you talking about this in combination with metastability, which is an issue with clocked FFs?  Did I miss something about metastability?

Nctnico's statement (which you quoted, unlike some here :) ) is about asynchronous logic in FPGAs.
Your response didn't indicate you were only considering synchronous logic.
Hence my comments. That's all.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #85 on: July 14, 2023, 07:55:28 am »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.

Not sure where you are going with this.  I was discussing asynch logic state machines which have no FFs.   Why are you talking about this in combination with metastability, which is an issue with clocked FFs?  Did I miss something about metastability?

Nctnico's statement (which you quoted, unlike some here :) ) is about asynchronous logic in FPGAs.
Your response didn't indicate you were only considering synchronous logic.
Hence my comments. That's all.

Sorry, you have stated the situation exactly backwards.  I'm talking about purely asynchronous logic, not synchronous.

How exactly do you see metastability being involved in asynchronous logic?  I'm totally lost.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #86 on: July 14, 2023, 08:06:55 am »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.

Not sure where you are going with this.  I was discussing asynch logic state machines which have no FFs.   Why are you talking about this in combination with metastability, which is an issue with clocked FFs?  Did I miss something about metastability?

Nctnico's statement (which you quoted, unlike some here :) ) is about asynchronous logic in FPGAs.
Your response didn't indicate you were only considering synchronous logic.
Hence my comments. That's all.

Sorry, you have stated the situation exactly backwards.  I'm talking about purely asynchronous logic, not synchronous.

How exactly do you see metastability being involved in asynchronous logic?  I'm totally lost.

Is there a confusion between "asynchronous" and "combinatorial"?

If you have a loop in asynchronous logic, then metastability becomes an issue. The simplest example is an RS latch formed by cross-coupled nand gates.

If you are only considering the traditional design pattern where combinatorial logic without loops has its inputs and outputs connected to registers, then metastability in that combinatorial logic cannot occur.
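
A unit-delay Python sketch of that RS latch shows the digital shadow of the problem. A real device eventually resolves one way or the other; the idealised model just oscillates:

Code: [Select]
def nand(a, b):
    return 1 - (a & b)

def settle(s_n, r_n, q, q_n, steps=6):
    """Iterate the cross-coupled NANDs; both gates update together."""
    trace = [(q, q_n)]
    for _ in range(steps):
        q, q_n = nand(s_n, q_n), nand(r_n, q)
        trace.append((q, q_n))
    return trace

# Normal use: inputs released one at a time -> Q settles cleanly.
print(settle(s_n=1, r_n=1, q=1, q_n=0))   # stays at (1, 0)
# Both inputs released at the same instant from the forbidden state
# S_n = R_n = 0 (which forces Q = Q_n = 1):
print(settle(s_n=1, r_n=1, q=1, q_n=1))   # (1,1) -> (0,0) -> (1,1) -> ...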
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #87 on: July 14, 2023, 08:32:54 am »
In theory you can have latches in asynchronous logic but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as being combinatorial logic. There are probably good books on how to tackle such a project.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #88 on: July 14, 2023, 01:07:33 pm »
In theory you can have latches in asynchronous logic but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as being combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #89 on: July 14, 2023, 02:25:17 pm »
In theory you can have latches in asynchronous logic but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as being combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without needing those signals to be synchronised to a clock first. Think of a simple D-latch as an example. Or the good old ripple counters like the 7493 (and all its later CMOS incarnations).
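
For illustration, a unit-delay Python sketch of a 7493-style ripple counter, showing the intermediate values the outputs pass through on a single input edge (a toy model, not a real device):

Code: [Select]
def clock_edge(q):
    """One falling edge at the input; q[0] is the LSB.
    Returns the sequence of transient output states."""
    q = q[:]
    states = []
    i, toggled = 0, True
    while toggled and i < len(q):
        old = q[i]
        q[i] ^= 1                  # this stage toggles
        states.append(q[:])
        toggled = (old == 1)       # only a 1 -> 0 edge ripples onward
        i += 1
    return states

def value(q):
    return sum(bit << i for i, bit in enumerate(q))

for s in clock_edge([1, 1, 1, 0]):   # counter currently holds 7
    print(value(s))                  # prints 6, 4, 0, 8

The settled value is always right, but between edges the outputs visit counts that were never "meant" to exist; harmless if you only look at the settled value, a trap if you decode the outputs with combinatorial logic.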
« Last Edit: July 14, 2023, 02:29:42 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #90 on: July 14, 2023, 02:57:54 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o
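
The idea fits in a few lines. A toy Python sketch in the spirit of the MIC-1 (the opcodes and "control store" here are invented for illustration; they are not Tanenbaum's actual encoding):

Code: [Select]
# "Macrocode" is just data in memory; the micro-sequencer (here a
# Python loop standing in for the control store and control unit)
# fetches and interprets it. The hardware/software boundary sits
# wherever this interpretation loop is implemented.
MACRO = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("HALT", None)]

def u_push(stack, arg): stack.append(arg)                    # microroutine
def u_add(stack, arg):  stack.append(stack.pop() + stack.pop())

CONTROL_STORE = {"PUSH": u_push, "ADD": u_add}

def run(macro):
    stack, pc = [], 0
    while True:
        opcode, arg = macro[pc]            # macroinstruction fetch
        pc += 1
        if opcode == "HALT":
            return stack
        CONTROL_STORE[opcode](stack, arg)  # dispatch to microroutine

print(run(MACRO))   # [5]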
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #91 on: July 14, 2023, 03:12:07 pm »
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without needing those signals to be synchronised to a clock first.

The perfect examples are
- designing a memory-mapped device in an FPGA on an asynchronous CPU bus
- trying to address an asynchronous static RAM from an FPGA

In both cases you end up with
- signals being synchronised to a clock
- latches
- proper setup and hold times as low-level, high-priority project constraints
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #92 on: July 14, 2023, 05:34:37 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o

When, in the late 70s, I was wondering how processors were implemented, I triumphantly reinvented microcoding. Didn't realise it at the time, of course.

Then at the turn of the decade I came across the AMD2900 series plus the "Mick and Brick" book.

In the mid 80s I was told to investigate the Research Machines RM1 based on the 2900. Didn't use it.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: DiTBho

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #93 on: July 14, 2023, 05:37:09 pm »
In theory you can have latches in asynchronous logic but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as being combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without needing those signals to be synchronised to a clock first. Think of a simple D-latch as an example. Or the good old ripple counters like the 7493 (and all its later CMOS incarnations).

Yes, those are advantages.

But all technologies have their disadvantages; with asynchronous logic there is metastability and dynamic hazards.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #94 on: July 14, 2023, 05:41:54 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o

When, in the late 70s, I was wondering how processors were implemented, I triumphantly reinvented microcoding. Didn't realise it at the time, of course.

Then at the turn of the decade I came across the AMD2900 series plus the "Mick and Brick" book.
I developed with both the AM2900 and Intel 3000 chipsets for building microcoded machines. In all cases sliding between pure microcode and microcode plus macrocode. Mostly for DSP and graphics applications. You worked with the right chipset. It was way ahead of the Intel one.

That Mick and Brick book was foundational in getting a lot of places started in DSP development.
 
The following users thanked this post: DiTBho

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #95 on: July 14, 2023, 05:46:46 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o

When, in the late 70s, I was wondering how processors were implemented, I triumphantly reinvented microcoding. Didn't realise it at the time, of course.

Then at the turn of the decade I came across the AMD2900 series plus the "Mick and Brick" book.
I developed with both the AM2900 and Intel 3000 chipsets for building microcoded machines. In all cases sliding between pure microcode and microcode plus macrocode. Mostly for DSP and graphics applications. You worked with the right chipset. It was way ahead of the Intel one.

That Mick and Brick book was foundational in getting a lot of places started in DSP development.

I think I still have a databook with the Intel 3000 series. Not many people believe Intel made 2 bit CPUs :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: DiTBho

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #96 on: July 14, 2023, 05:52:52 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o

When, in the late 70s, I was wondering how processors were implemented, I triumphantly reinvented microcoding. Didn't realise it at the time, of course.

Then at the turn of the decade I came across the AMD2900 series plus the "Mick and Brick" book.
I developed with both the AM2900 and Intel 3000 chipsets for building microcoded machines. In all cases sliding between pure microcode and microcode plus macrocode. Mostly for DSP and graphics applications. You worked with the right chipset. It was way ahead of the Intel one.

That Mick and Brick book was foundational in getting a lot of places started in DSP development.

I think I still have a databook with the Intel 3000 series. Not many people believe Intel made 2 bit CPUs :)
I had to develop my own line and arc drawing algorithms for an Intel 3000 vector graphics engine we developed. No rasters for us. All, pure point to point scanning across a CRT. Later I found Bresenham had developed the same algorithms, although I included tweaks to improve the smoothness that he didn't seem to document. This was how things were before the Internet let us look things up easily. :)
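
For reference, the standard first-octant form of the line algorithm Bresenham documented, as a Python sketch (the smoothness tweaks mentioned above are not shown):

Code: [Select]
def bresenham(x0, y0, x1, y1):
    """Pixels of a line with 0 <= slope <= 1, integer arithmetic only."""
    dx, dy = x1 - x0, y1 - y0
    err = 2 * dy - dx              # decision variable: no divides needed
    y = y0
    for x in range(x0, x1 + 1):
        yield x, y
        if err > 0:                # midpoint crossed: step up a scanline
            y += 1
            err -= 2 * dx
        err += 2 * dy

print(list(bresenham(0, 0, 7, 3)))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3), (7, 3)]

The whole point, and why it suited a microcoded engine, is that the inner loop is nothing but integer adds, subtracts and a sign test.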
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #97 on: July 14, 2023, 07:36:09 pm »
I'm not quite sure typical FPGA fabric (consisting of muxes and LUTs) and HDL tools are suitable to create complex pieces of asynchronous logic in the first place. Sure you can hand craft some amount of logic but for a large design, you'd need tools specially geared to avoid glitches from becoming a problem. This means that the tools need to be able to determine / specify the order in which the signals are connected to a LUT in order to manage output glitches. OR the hardware should somehow filter the glitches but that would mean slowing the logic down. The tools from Xilinx I worked with so far do optimisations at several levels that may screw things up badly at each level.

Some years back, a Xilinx representative said the Xilinx LUT (which is not exactly logic, but transmission gates) is designed to not glitch from inputs changing "at the same time" for some definition of "same time".  I don't recall the basis for this statement, but it included the impact of the output capacitance (of the LUT, not the logic block) preserving state during the transitions.  Hmmm...  it's possible he was talking about a single input changing... :(

I have no idea what it would take to prevent glitches when using more than one LUT.

In synchronous logic there is no problem with multiple combinatorial inputs changing simultaneously, since there are no internal loops (latches/metastability), provided the outputs become stable before the next clock.

Xilinx's statement may be correct for asynchronous circuits (with internal loops) with "changes at the same time", but if so there must be some combination of changes and times which will cause problems. Unless, I suppose, they have found a way of circumventing metastability in flip flops. :)

But we (well, most of us) realise why FPGAs are, for practical intents and purposes, synchronous devices.

Not sure where you are going with this.  I was discussing asynch logic state machines which have no FFs.   Why are you talking about this in combination with metastability, which is an issue with clocked FFs?  Did I miss something about metastability?

Nctnico's statement (which you quoted, unlike some here :) ) is about asynchronous logic in FPGAs.
Your response didn't indicate you were only considering synchronous logic.
Hence my comments. That's all.

Sorry, you have stated the situation exactly backwards.  I'm talking about purely asynchronous logic, not synchronous.

How exactly do you see metastability being involved in asynchronous logic?  I'm totally lost.

Is there a confusion between "asynchronous" and "combinatorial"?

If you have a loop in asynchronous logic, then metastability becomes an issue. The simplest example is an RS latch formed by cross-coupled nand gates.

If you are only considering the traditional design pattern where combinatorial logic without loops has its inputs and outputs connected to registers, then metastability in that combinatorial logic cannot occur.

I've never heard anyone refer to metastability other than in clocked FFs.  Combine that with your comment, "I suppose, they have found a way of circumventing metastability in flip flops" and I have no idea what you are talking about. 

Whatever.  This is not really important.  FPGAs are not intended to be used for asynch logic and no tools for this are provided, to the best of my knowledge.  Such a design would be very difficult to design and verify in an FPGA. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #98 on: July 14, 2023, 07:43:42 pm »
If you want to see a documented CPU architecture example of what "microcode" and "macrocode" are, and what runs on top of what (that is, the boundary between hardware and software as far as a CPU is concerned), look at the "MIC-1", a processor architecture invented by Andrew Tanenbaum as a simple but complete example for his teaching book "Structured Computer Organization". The design is 100% academic and consists of a very simple control unit that runs microcode from a 512-word store.

oh, now we have the 5th edition of the Holy Book :o :o :o

When, in the late 70s, I was wondering how processors were implemented, I triumphantly reinvented microcoding. Didn't realise it at the time, of course.

Then at the turn of the decade I came across the AMD2900 series plus the "Mick and Brick" book.
I developed with both the AM2900 and Intel 3000 chipsets for building microcoded machines. In all cases sliding between pure microcode and microcode plus macrocode. Mostly for DSP and graphics applications. You worked with the right chipset. It was way ahead of the Intel one.

That Mick and Brick book was foundational in getting a lot of places started in DSP development.

I remember using a CAD system based on a 68000 type processor (don't recall exactly which one).  It ran simulations slow as crap.  They had an upgraded system that emulated the 680xx in a bit slice design for a lot more bucks.  A year later they had a system using the next gen of 680xx which was faster than the bit slice and a lot cheaper! 

Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast, the bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high volume production, where the bit slice was the right choice. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #99 on: July 14, 2023, 07:53:13 pm »
In theory you can have latches in asynchronous logic but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as being combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without needing those signals to be synchronised to a clock first. Think of a simple D-latch as an example. Or the good old ripple counters like the 7493 (and all its later CMOS incarnations).

Yes, those are advantages.

But all technologies have their disadvantages; with asynchronous logic there is metastability and dynamic hazards.
No. Metastability only occurs when your timing analysis is incorrect and thus setup & hold times of flipflops are violated. This is equally true for synchronous and asynchronous logic. Metastability is such a fundamental problem for digital designs (*) that working around it is about the first thing they teach you, so that you create robust designs.

* Actually not only for digital designs but for any system be it software or hardware.
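
For a back-of-the-envelope feel, here is the standard synchroniser MTBF model in Python. Every device parameter below is invented for illustration, not taken from any datasheet:

Code: [Select]
import math

tau    = 0.5e-9   # resolution time constant (s), assumed
T0     = 0.5e-9   # metastability window constant (s), assumed
f_clk  = 100e6    # sampling clock (Hz)
f_data = 10e6     # mean async input transition rate (Hz)

def mtbf(t_res):
    """MTBF = exp(t_res / tau) / (T0 * f_clk * f_data), in seconds."""
    return math.exp(t_res / tau) / (T0 * f_clk * f_data)

for stages in (1, 2):
    # assume ~1 ns of each period is lost to routing and setup
    t_res = stages * (1 / f_clk) - 1e-9
    print(f"{stages} FF: {mtbf(t_res) / 3.15e7:.3g} years")
# Each extra flop multiplies MTBF by exp(T_clk / tau), which is why
# the two-flop synchroniser is the stock mitigation: it makes failure
# astronomically rare rather than impossible.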
« Last Edit: July 14, 2023, 08:26:39 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #100 on: July 14, 2023, 08:04:09 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast, the bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high volume production, where the bit slice was the right choice.
Er, very much no. Bit slice was the basis for a large part of the mini computer industry for a number of years. It was how the DSP industry got started, especially the AM2900 family. It was the basis for many microcoded systems, like graphics machines and video manipulators. It was big before the design of the MC68k family had even started.

 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #101 on: July 14, 2023, 08:10:15 pm »
CAD system based on a 68000 type processor (don't recall exactly which one)

must have been 68020 or 68030.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #102 on: July 14, 2023, 08:37:29 pm »
CAD system based on a 68000 type processor (don't recall exactly which one)

must have been 68020 or 68030.

No, that would have been much faster.  It was either a 68000 or 68010.  I can't recall which one, but either the 68010 or the 68020 was quite a leap from the previous version. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #103 on: July 14, 2023, 08:47:48 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast, the bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high volume production, where the bit slice was the right choice.
Er, very much no.

I'm not saying it was not used.  I'm saying that LSI overtook it rapidly.  Bit slice has significant limitations, such as slow carry propagation due to going off chip so much.  They had to produce ECL versions to try to maintain reasonable speeds, but these were still outclassed by LSI once the entire ALU was on one chip.


Quote
Bit slice was the basis for a large part of the mini computer industry for a number of years. It was how the DSP industry got started, especially the AM2900 family. It was the basis for many microcoded systems, like graphics machines and video manipulators. It was big before the design of the MC68k family had even started.

Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: MCU with FPGA vs. SoC FPGA
« Reply #104 on: July 14, 2023, 08:49:06 pm »
No, that would have been much faster.  It was either a 68000 or 68010.  I can't recall which one, but either the 68010 or the 68020 was quite a leap from the previous version.

Apollo?

https://en.wikipedia.org/wiki/Apollo_Computer

The dual 68000 processor configuration was designed to provide automatic page fault switching, with the main processor executing the OS and program instructions, and the "fixer" processor satisfying the page faults. When a page fault was raised, the main CPU was halted in mid (memory) cycle while the fixer CPU would bring the page into memory and then allow the main CPU to continue, unaware of the page fault.[8] Later improvements in the Motorola 68010 processor obviated the need for the dual-processor design.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #105 on: July 14, 2023, 08:52:47 pm »
In theory you can have latches in asynchronous logic, but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without those signals needing to be synchronised to a clock first. Think of a simple D-latch as an example, or the good old ripple counters like the 7493 (and all its later CMOS incarnations).

Yes, those are advantages.

But all technologies have their disadvantages; with asynchronous logic there is metastability and dynamic hazards.
No. Metastability only occurs when your timing analysis is incorrect and thus the setup & hold times of flip-flops are violated. This is equally true for synchronous and asynchronous logic. Metastability is such a fundamental problem for digital designs (*) that working around it is about the first thing they teach you, so that you can create robust designs.

* Actually not only for digital designs but for any system, be it software or hardware.


It appears that "asynchronous" is being used with two different meanings: "no clock" and "no defined timing relationship".
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #106 on: July 14, 2023, 08:53:13 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #107 on: July 14, 2023, 08:54:48 pm »
No, that would have been much faster.  It was either a 68000 or 68010.  I can't recall which one, but either the 68010 or the 68020 was quite a leap from the previous version.

Apollo?

I don't  think so, but I can't say for sure. 


Quote
https://en.wikipedia.org/wiki/Apollo_Computer

The dual 68000 processor configuration was designed to provide automatic page fault switching, with the main processor executing the OS and program instructions, and the "fixer" processor satisfying the page faults. When a page fault was raised, the main CPU was halted in mid (memory) cycle while the fixer CPU would bring the page into memory and then allow the main CPU to continue, unaware of the page fault.[8] Later improvements in the Motorola 68010 processor obviated the need for the dual-processor design.

I read the book "The Soul of a New Machine", which was pretty good.  He mentions how they designed it to handle page faults, but forgot to consider a page fault in the page fault handler.  lol  They had to wire in the pages for the page fault handler, so they were never swapped out.

This book also has one of my favorite quotes.  A guy on the team got so tired of counting nanoseconds that he quit and bought a farm, saying, "I don't want to deal with any time frames shorter than a season."  LOL  I know the feeling.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #108 on: July 14, 2023, 08:56:34 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish. 
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #109 on: July 14, 2023, 09:01:40 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
« Last Edit: July 14, 2023, 09:03:43 pm by tggzzz »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #110 on: July 14, 2023, 09:02:55 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit-slice chips went through, they were obviously selling, with customers demanding better follow-ons.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: MCU with FPGA vs. SoC FPGA
« Reply #111 on: July 14, 2023, 09:05:08 pm »
In theory you can have latches in asynchronous logic, but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without those signals needing to be synchronised to a clock first. Think of a simple D-latch as an example, or the good old ripple counters like the 7493 (and all its later CMOS incarnations).

Yes, those are advantages.

But all technologies have their disadvantages; with asynchronous logic there is metastability and dynamic hazards.
No. Metastability only occurs when your timing analysis is incorrect and thus the setup & hold times of flip-flops are violated. This is equally true for synchronous and asynchronous logic. Metastability is such a fundamental problem for digital designs (*) that working around it is about the first thing they teach you, so that you can create robust designs.

* Actually not only for digital designs but for any system, be it software or hardware.


It appears that "asynchronous" is being used with two different meanings: "no clock" and "no defined timing relationship".
As a real-world circuit can't work without a defined timing relationship, I (as someone dealing mostly with the practical side of things) am discussing circuits with no clock.  ;D
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #112 on: July 14, 2023, 09:08:53 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
The 11/730 was the biggest-selling minicomputer of all time, and it was awful. I used to use them. I was shocked when I found out how well they sold.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14536
  • Country: fr
Re: MCU with FPGA vs. SoC FPGA
« Reply #113 on: July 14, 2023, 09:16:42 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

They were used, a lot.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #114 on: July 14, 2023, 11:06:15 pm »
In theory you can have latches in asynchronous logic, but you'll need to set up delay paths to meet setup & hold to avoid metastability. I guess you'd have to analyse the whole circuit as combinatorial logic. There are probably good books on how to tackle such a project.

If you can find a way to avoid metastability without putting limitations on the external inputs, do yourself (and everybody else) a favour and write it up for publication in, say, an IEEE Journal of record :)
There are always limitations when it comes to timing. But the nice thing about asynchronous logic is that it can react to external signals without those signals needing to be synchronised to a clock first. Think of a simple D-latch as an example, or the good old ripple counters like the 7493 (and all its later CMOS incarnations).

Yes, those are advantages.

But all technologies have their disadvantages; with asynchronous logic there is metastability and dynamic hazards.
No. Metastability only occurs when your timing analysis is incorrect and thus the setup & hold times of flip-flops are violated. This is equally true for synchronous and asynchronous logic. Metastability is such a fundamental problem for digital designs (*) that working around it is about the first thing they teach you, so that you can create robust designs.

* Actually not only for digital designs but for any system, be it software or hardware.


It appears that "asynchronous" is being used with two different meanings: "no clock" and "no defined timing relationship".
As a real-world circuit can't work without a defined timing relationship, I (as someone dealing mostly with the practical side of things) am discussing circuits with no clock.  ;D

A synchroniser is an asynchronous circuit with two inputs and no defined timing relationship between them.
 

Offline chickenHeadKnob

  • Super Contributor
  • ***
  • Posts: 1056
  • Country: ca
Re: MCU with FPGA vs. SoC FPGA
« Reply #115 on: July 14, 2023, 11:08:59 pm »
No, that would have been much faster.  It was either a 68000 or 68010.  I can't recall which one, but either the 68010 or the 68020 was quite a leap from the previous version.

Apollo?

https://en.wikipedia.org/wiki/Apollo_Computer

The dual 68000 processor configuration was designed to provide automatic page fault switching, with the main processor executing the OS and program instructions, and the "fixer" processor satisfying the page faults. When a page fault was raised, the main CPU was halted in mid (memory) cycle while the fixer CPU would bring the page into memory and then allow the main CPU to continue, unaware of the page fault.[8] Later improvements in the Motorola 68010 processor obviated the need for the dual-processor design.

I bet it was Mentor Graphics software running on an Apollo DN660. I used one of those. It had a bit-sliced main CPU that went through a horrible number of design revisions. The board was covered with patch wires and was very unreliable. The repair guy admitted it was a stop-gap design because the 68020 was delayed, so somehow they were using the 2900 to emulate a 68020.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #116 on: July 14, 2023, 11:11:25 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
The 11/730 was the biggest-selling minicomputer of all time, and it was awful. I used to use them. I was shocked when I found out how well they sold.

The alternatives weren't wonderful.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #117 on: July 15, 2023, 01:59:31 am »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips

Dude, read what I write.  "bit slice would always be overrun quickly".  Every one of the examples you cite was either very low volume or quickly overrun by fast CPUs.  Why do you continue to argue about this???
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #118 on: July 15, 2023, 02:07:40 am »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit-slice chips went through, they were obviously selling, with customers demanding better follow-ons.

None of that contradicts what I said.  The volumes of bit-slice products were never large, and any given product with the potential for high volume was redesigned with custom chips, or even general-purpose CPUs as they ramped up in speed. 

BTW, the AM2900 family was not released until 1975, so "early 70s" is a bit of a stretch.

Bit-slice was always a niche, able to deliver high performance at the price of very high power consumption and high cost.  It had inherent speed limitations that ruled out continued improvement.  The entire history of electronics has been as much about economics as about technology.  Now, with the huge market for mobile products, it's as much about power. 
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #119 on: July 15, 2023, 02:09:15 am »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

They were used, a lot.

Hard to argue with such a clearly defined point.  "A lot"!  LOL
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #120 on: July 15, 2023, 02:12:01 am »
No, that would have been much faster.  It was either a 68000 or 68010.  I can't recall which one, but either the 68010 or the 68020 was quite a leap from the previous version.

Apollo?

https://en.wikipedia.org/wiki/Apollo_Computer

The dual 68000 processor configuration was designed to provide automatic page fault switching, with the main processor executing the OS and program instructions, and the "fixer" processor satisfying the page faults. When a page fault was raised, the main CPU was halted in mid (memory) cycle while the fixer CPU would bring the page into memory and then allow the main CPU to continue, unaware of the page fault.[8] Later improvements in the Motorola 68010 processor obviated the need for the dual-processor design.

I bet it was Mentor Graphics software running on an Apollo DN660. I used one of those. It had a bit-sliced main CPU that went through a horrible number of design revisions. The board was covered with patch wires and was very unreliable. The repair guy admitted it was a stop-gap design because the 68020 was delayed, so somehow they were using the 2900 to emulate a 68020.

What I used was definitely not Mentor Graphics.  This was before Mentor was much of a player.  We talked to their rep, who had a "mouse", which he described as a fantastic invention but couldn't get to work! 

No, their product was more of a toy at the time, very incomplete.  A lot of the "magic" of Mentor came a bit later, through acquisitions.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #121 on: July 15, 2023, 06:38:33 am »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips

Dude, read what I write.  "bit slice would always be overrun quickly".  Every one of the examples you cite was either very low volume or quickly overrun by fast CPUs.  Why do you continue to argue about this???

I, and other people, did read what you wrote, and have pointed out the inaccuracies.

The DEC machines were the highest volume (mini)computers of the time. Not research tools.

Arcade machines were widely distributed production machines. Not research tools.

Everything was "quickly overrun" in the 70s and 80s, from mainframes downwards. In the ARM/x86 era, it may be difficult for youngsters to imagine, but that was a "pre-Cambian" evolutionary era.

And I'm not a "dude".
« Last Edit: July 15, 2023, 06:43:05 am by tggzzz »
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #122 on: July 15, 2023, 07:59:30 am »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips

Dude, read what I write.  "bit slice would always be overrun quickly".  Every one of the examples you cite was either very low volume or quickly overrun by fast CPUs.  Why do you continue to argue about this???

I, and other people, did read what you wrote, and have pointed out the inaccuracies.

The DEC machines were the highest volume (mini)computers of the time. Not research tools.

How long were the bit-slice models sold?  Like I said, DEC had the LSI-11 in 1975, using custom LSI chips.  It was not designed to be especially fast, but it was inexpensive (relatively speaking).  I did a bit of reading and found that the VAX-11/730 and VAX-11/725 (same CPU) were the only bit-slice renditions of the VAX line.  DEC was using custom LSI for the VAX line, and used bit-slice to build less expensive and much slower versions of it.  So, in reality, these were not "high performance" machines using bit-slice, but low-end economy models!  LOL 


Quote
Arcade machines were widely distributed production machines. Not research tools.

Sorry, arcade machines were not widely distributed in any real sense.  They sold a tiny fraction compared to high-volume devices like PCs.  If you consider arcade machines to be "high volume", then you are right: bit-slice was wildly successful.  But again, how long were the bit-slice models sold before being replaced with much cheaper CPU-based designs?


Quote
Everything was "quickly overrun" in the 70s and 80s, from mainframes downwards. In the ARM/x86 era, it may be difficult for youngsters to imagine, but that was a "pre-Cambian" evolutionary era.

And I'm not a "dude".

Ok, dudette... 

Yes, every individual design is quickly obsoleted, but that's not the same as obsoleting a technology or design approach.  Once LSI started producing CPUs with significant processing power, they outpaced what could be done in bit-slice, and that technology was effectively buried.  Do you actually read what I write?  Read the next paragraph carefully. 

Bit-slice was made up of TTL logic slice chips.  That was faster than CMOS in 1975, when the devices were introduced.  But any time a signal goes off chip, it is much slower than signals remaining on the chip.  The carry chain of bit-slice had to propagate through multiple chips, putting a fundamental limit on the speed.  By 1980 or so CMOS was faster, because the entire CPU could be put on a single die.  Why didn't they use CMOS for bit-slice?  Because it was effectively dead at that point.  They tried ECL, which gave them more speed, but at a huge cost in power.  So it was used in specialized products with big price tags and lots of power. 

This is essentially the same thing that happened with the array processing business.  They were cabinet-sized machines that cost $200,000 and up, performing 100 MFLOPS (in the case of the machines I worked on).  Within 10 years, this technology was available in a chip from Intel.  Maybe not 100 MFLOPS, but a significant number.  Slave a few together and you now have a $10,000 machine with more performance.  The company is now out of business. 
« Last Edit: July 15, 2023, 08:22:06 am by gnuarm »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #123 on: July 15, 2023, 09:45:09 am »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips

Dude, read what I write.  "bit slice would always be overrun quickly".  Every one of the examples you cite was either very low volume or quickly overrun by fast CPUs.  Why do you continue to argue about this???

I, and other people, did read what you wrote, and have pointed out the inaccuracies.

The DEC machines were the highest volume (mini)computers of the time. Not research tools.

How long were the bit-slice models sold?  Like I said, DEC had the LSI-11 in 1975, using custom LSI chips.  It was not designed to be especially fast, but it was inexpensive (relatively speaking).  I did a bit of reading and found that the VAX-11/730 and VAX-11/725 (same CPU) were the only bit-slice renditions of the VAX line.  DEC was using custom LSI for the VAX line, and used bit-slice to build less expensive and much slower versions of it.  So, in reality, these were not "high performance" machines using bit-slice, but low-end economy models!  LOL 


Quote
Arcade machines were widely distributed production machines. Not research tools.

Sorry, arcade machines were not widely distributed in any real sense.  They sold a tiny fraction compared to high-volume devices like PCs.  If you consider arcade machines to be "high volume", then you are right: bit-slice was wildly successful.  But again, how long were the bit-slice models sold before being replaced with much cheaper CPU-based designs?


Quote
Everything was "quickly overrun" in the 70s and 80s, from mainframes downwards. In the ARM/x86 era, it may be difficult for youngsters to imagine, but that was a "pre-Cambian" evolutionary era.

And I'm not a "dude".

Ok, dudette... 

Yes, every individual design is quickly obsoleted, but that's not the same as obsoleting a technology or design approach.  Once LSI started producing CPUs with significant processing power, they outpaced what could be done in bit-slice, and that technology was effectively buried.  Do you actually read what I write?  Read the next paragraph carefully. 

Bit-slice was made up of TTL logic slice chips.  That was faster than CMOS in 1975, when the devices were introduced.  But any time a signal goes off chip, it is much slower than signals remaining on the chip.  The carry chain of bit-slice had to propagate through multiple chips, putting a fundamental limit on the speed.  By 1980 or so CMOS was faster, because the entire CPU could be put on a single die.  Why didn't they use CMOS for bit-slice?  Because it was effectively dead at that point.  They tried ECL, which gave them more speed, but at a huge cost in power.  So it was used in specialized products with big price tags and lots of power. 

This is essentially the same thing that happened with the array processing business.  They were cabinet-sized machines that cost $200,000 and up, performing 100 MFLOPS (in the case of the machines I worked on).  Within 10 years, this technology was available in a chip from Intel.  Maybe not 100 MFLOPS, but a significant number.  Slave a few together and you now have a $10,000 machine with more performance.  The company is now out of business.

Some of those points have some validity, but mostly they miss the context of the times.

They completely fail to support your false contention that "Bit slice was always, in essence, a research tool".
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #124 on: July 15, 2023, 03:49:32 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
The 11/730 was the biggest-selling minicomputer of all time, and it was awful. I used to use them. I was shocked when I found out how well they sold.

The alternatives weren't wonderful.
A modestly configured 11/730 was 100k pounds. There were a lot of things you could buy for that much which performed far better. We used them because of software and hardware issues that locked us into "needing a VAX". VMS was the problem. It was a dog on most VAX machines because of its weird file system, which required a super-complex disc controller to recover some of the performance the design cost you.

 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #125 on: July 15, 2023, 04:05:19 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit-slice chips went through, they were obviously selling, with customers demanding better follow-ons.

None of that contradicts what I said.  The volumes of bit-slice products were never large, and any given product with the potential for high volume was redesigned with custom chips, or even general-purpose CPUs as they ramped up in speed. 

BTW, the AM2900 family was not released until 1975, so "early 70s" is a bit of a stretch.

Bit-slice was always a niche, able to deliver high performance at the price of very high power consumption and high cost.  It had inherent speed limitations that ruled out continued improvement.  The entire history of electronics has been as much about economics as about technology.  Now, with the huge market for mobile products, it's as much about power.
You seem very confused. The subject was bit-slice chips, and now you talk about the Am2900 family. The Am2900 series appeared quite late in the day. By its launch, numerous minicomputers were already using things like the TI TTL or Motorola ECL bit-slice families. The Intel 3000 family was from 1973, but I'm not sure that ever got into any high-volume minicomputers. All the places I saw it were niche things, like defence projects.

Various companies, especially ones with good internal silicon processes (e.g. DEC) or close ties to a silicon vendor who regularly produced custom parts for them, had in-house bit-slice chip sets that you won't find much information about now. DEC's first move in single-chip processors was the Western Digital/DEC LSI11 in 1976, but it struggled for several years. You saw a lot of them around in 81 or 82, but very few before that.

Of course bit slice didn't last for decades. It was big for just one decade. However, one decade is a long time in electronics. In the early 70s you'd find a bit-slice-based, vector-scanning arcade machine in every pub, but you'd probably never see a microcomputer of any kind. By the end of the 70s things like the Apple ][ were all over the place. By the end of the 80s the Apple ][ was also dead.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9893
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #126 on: July 15, 2023, 04:54:20 pm »
I like the concept of microcoding processors - perhaps more than actually writing the code.  The AMD 2900 was terrific for this and at least one early adopter used them to create disk drive controllers in the early days of hard drives for PCs.  The SCSI interface comes to mind.

What might be fun is to design FPGA components to match the original 2900 series devices and then create a project using the components and a chunk of BlockRAM to hold the microcode.

The LC3 project is intended to use microcoding, although I built it using a conventional FPGA-style FSM.  A microcode worksheet is provided.  At least one version is missing a signal.  The newer LC3b with byte addressing may be a more popular project.

https://people.cs.georgetown.edu/~squier/Teaching/HardwareFundamentals/LC3-trunk/docs/LC3-uArch-PPappendC.pdf
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #127 on: July 15, 2023, 05:32:58 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not, strictly speaking, talking about RISC.

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), while trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in an FPGA, it makes a non-educational mess.

LC3 made sense (note the past tense) in university courses. As did MIC1.

(deprecated) LC3 ----> registers-load/store CPU, RISC approach
(still used) MIC1 ----> stack-CPU, micro/macro code approach
(deprecated) MIPS R2K ----> registers-load/store CPU, pure-RISC approach
(currently in use) RISC-V ----> registers-load/store CPU, pure-RISC approach

Both MIPS R2K and RISC-V also offer the possibility to study the difference between "multi-cycle" and "pipelined", with the possibility of running a "serious" program (like a scheduler or a minimal kernel) at the end of the HDL implementation!

LC3 ... is a toy; the purpose is only to implement something simple that takes students three weeks of university laboratory to complete, to finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?

All you do is repeat the same things and the same links like an old turntable that has jammed, and you do nothing, absolutely nothing, to actually understand the ISA or to make a critical analysis of it, even when people here have already pointed this out to you.

Why insist on promoting it in every single discussion, and why don't you mention RISC-V instead?
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #128 on: July 15, 2023, 05:44:23 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high-volume production for which bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
The 11/730 was the biggest-selling minicomputer of all time, and it was awful. I used to use them. I was shocked when I found out how well they sold.

The alternatives weren't wonderful.
A modestly configured 11/730 was 100k pounds. There were a lot of things you could buy for that much which performed far better. We used them because of software and hardware issues that locked us into "needing a VAX". VMS was the problem. It was a dog on most VAX machines because of its weird file system, which required a super-complex disc controller to recover some of the performance the design cost you.

The ecosystem has always been more important than the processor.

I recognised that while an undergrad, and mentioned it to an interviewer on the milkround. They were impressed that I looked beyond whether an 8080/6800/1802 was best :)

Beyond that, I'll mention the Intel blue box, the IBM 360 etc, the x86....

One of the reasons the XMOS devices have impressed me is their hardware+software+toolset. The processor is the least important aspect, and I believe they will replace their processor ISA with RISC-V.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #129 on: July 15, 2023, 05:54:34 pm »
A modestly configured 11/730 was 100k pounds. There were a lot of things you could buy for that much which performed far better. We used them because of software and hardware issues that locked us into "needing a VAX". VMS was the problem. It was a dog on most VAX machines because of its weird file system, which required a super-complex disc controller to recover some of the performance the design cost you.
The ecosystem has always been more important than the processor.
In our case it wasn't the general ecosystem. There were plenty of options which offered a decent ecosystem. The VAXes were there to control systems which only had interfaces to VAXes. Our only option was how big a VAX to choose, and the system vendor just warned us not to choose the very smallest VAXes, as they were too slow even to keep up with controlling something.

I recognised that while an undergrad, and mentioned to an interviewer on the milkround. They were impressed that I looked beyond whether a 8080/6800/1802 was best :)
They would have been even more impressed if you'd answered "I'd use the one with the highest chance of actually being available when we need them". We must have had 30 or more EVMs for the various MPUs in the 70s, and most of them didn't survive long enough to be available when a product was finally developed. :)
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #130 on: July 15, 2023, 06:21:13 pm »
I did a bit of reading and found the VAX-11/730 and VAX-11/725 (same CPU) were the only bit-slice renditions of the VAX line.
As far as I know all the earlier VAXes were bit-slice-based designs. The 11/725 and 11/730 were probably the only ones using the Am2900 family. The original 11/780 must have been started before the Am2900 family was launched. The 11/780 used TI 74-family parts, as did many of the PDP11s, starting with the TI 74181 in 1970 and using more of the 74 family as it was fleshed out and improved with S and AS parts. Eventually DEC expanded the low end with the micro-VAX single-chip processor, and the high end with the 8000 series based on ECL cell arrays.



« Last Edit: July 15, 2023, 06:26:51 pm by coppice »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9893
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #131 on: July 15, 2023, 06:40:29 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not, strictly speaking, talking about RISC.

If a CPU is created with specific features for the application, RISC itself may not be a goal.
Quote

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), while trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in an FPGA, it makes a non-educational mess.

None of which matters for domain-specific CPUs.  There is no point in implementing an ISA that swamps the needs of the project.
Quote

LC3 ... is a toy; the purpose is only to implement something simple that takes students three weeks of university laboratory to complete, to finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?
Because it is a simple, fully documented CPU capable of handling many tasks.  The key being simple.  An undergrad student can easily implement it in a couple of days of diligent effort; a week at the absolute outside, including the time to learn VHDL.

For a first project it is kind of nice.  Multicore ARM processors will need a separate course, although I have a book that goes into them.  MIPS is the subject of another book by the same authors.  Pipelining is fully discussed, although code isn't provided for that case; code is provided for the non-pipelined case.

I can't get my head around trying to implement a pipelined processor (RISC or ARM) as a first project.
Quote

why insist on sponsoring it in every single discussion, and why don't you correctly mention RISC-V, instead?
It's a fully documented CPU with adequate features for many purposes, and odd-ball peripherals can easily be added.

I don't mention RISC-V because I don't know anything about it.  I still wouldn't recommend it to a newcomer still struggling to grasp FSMs.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #132 on: July 15, 2023, 07:53:36 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other minicomputer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit-slice chips went through, they were obviously selling, with customers demanding better follow-ons.

None of that contradicts what I said.  The volumes of bit-slice products were never large, and any given product with the potential for high volume was redesigned with custom chips, or even general-purpose CPUs as they ramped up in speed. 

BTW, the AM2900 family was not released until 1975, so "early 70s" is a bit of a stretch.

Bit-slice was always a niche, able to deliver high performance at the price of very high power consumption and high cost.  It had inherent speed limitations that ruled out continued improvement.  The entire history of electronics has been as much about economics as about technology.  Now, with the huge market for mobile products, it's as much about power.
You seem very confused. The subject was bit-slice chips, and now you talk about the Am2900 family. The Am2900 series appeared quite late in the day. By its launch, numerous minicomputers were already using things like the TI TTL or Motorola ECL bit-slice families. The Intel 3000 family was from 1973, but I'm not sure that ever got into any high-volume minicomputers. All the places I saw it were niche things, like defence projects.

Various companies, especially ones with good internal silicon processes (e.g. DEC) or close ties to a silicon vendor who regularly produced custom parts for them, had in-house bit-slice chip sets that you won't find much information about now. DEC's first move in single-chip processors was the Western Digital/DEC LSI11 in 1976, but it struggled for several years. You saw a lot of them around in 81 or 82, but very few before that.

Of course bit slice didn't last for decades. It was big for just one decade. However, one decade is a long time in electronics. In the early 70s you'd find a bit-slice-based, vector-scanning arcade machine in every pub, but you'd probably never see a microcomputer of any kind. By the end of the 70s things like the Apple ][ were all over the place. By the end of the 80s the Apple ][ was also dead.

You can't seem to follow my train of thought.  I said the utility of bit-slice was never for mainstream products, other than for short times or for low-volume, high-priced devices.  All the examples you've posted support this.  Then you talk about one specific product, the Apple II, becoming obsolete, as if to indicate this is a universal principle. 

It is universal for any one product.  Of course products become obsolete.  But bit-slice is a design technology, a paradigm.  It has specific limitations, which allowed it to become obsolete in just a few years.  In the time it was used, it appeared in a few products, which quickly became obsolete as newer technology made bit-slice too expensive, too large, too power-hungry and too complex. 

I don't know what you are trying to say.  I've never said bit-slice was not used.  What I am saying is very clear, if you pay attention to what I actually say, rather than what you seem to want to hear. 
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #133 on: July 15, 2023, 08:03:30 pm »
I like the concept of microcoding processors - perhaps more than actually writing the code.  The AMD 2900 was terrific for this and at least one early adopter used them to create disk drive controllers in the early days of hard drives for PCs.  The SCSI interface comes to mind.

What might be fun is to design FPGA components to match the original 2900 series devices and then create a project using the components and a chunk of BlockRAM to hold the microcode.

The LC3 project is intended to use microcoding, although I built it using a conventional FPGA-style FSM.  A microcode worksheet is provided.  At least one version is missing a signal.  The newer LC3b with byte addressing may be a more popular project.

https://people.cs.georgetown.edu/~squier/Teaching/HardwareFundamentals/LC3-trunk/docs/LC3-uArch-PPappendC.pdf

While we are free to do anything we want in a hobby project, microcoding is mostly obsolete other than in custom silicon CPU designs.  The large word width consumes memory rapidly, making it a questionable choice for use in FPGAs.  But it can provide speed advantages if combined with a clean architecture. 

I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was larger than the chip!  So they broke it down a bit to create nanocoding, where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.  This is why Forth code tends to be small: it makes subroutines easier and simpler to use, facilitating many small routines that are used in many places in the code.  The resulting hierarchy has less repetition, so it is smaller, often significantly so.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #134 on: July 15, 2023, 08:17:47 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not, strictly speaking, talking about RISC.

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), while trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in an FPGA, it makes a non-educational mess.

LC3 made sense (note the past tense) in university courses. As did MIC1.

(deprecated) LC3 ----> registers-load/store CPU, RISC approach
(still used) MIC1 ----> stack-CPU, micro/macro code approach
(deprecated) MIPS R2K ----> registers-load/store CPU, pure-RISC approach
(currently in use) RISC-V ----> registers-load/store CPU, pure-RISC approach

Both MIPS R2K and RISC-V also offer the possibility to study the difference between "multi-cycle" and "pipelined", with the possibility of running a "serious" program (like a scheduler or a minimal kernel) at the end of the HDL implementation!

LC3 ... is a toy; the purpose is only to implement something simple that takes students three weeks of university laboratory to complete, to finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?

All you do is repeat the same things and the same links like an old turntable that has jammed, and you do nothing, absolutely nothing, to actually understand the ISA or to make a critical analysis of it, even when people here have already pointed this out to you.

Why insist on promoting it in every single discussion, and why don't you mention RISC-V instead?

You missed the fact that, by having only the memory delay, a microcoded architecture can be faster than a logic-based design.  I believe it was Federico Faggin who, after designing the Z8000, said he would never do another CPU design without microcode.  The logic design was just too labor-intensive.  I guess CAD was not so big back then.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14536
  • Country: fr
Re: MCU with FPGA vs. SoC FPGA
« Reply #135 on: July 15, 2023, 08:22:47 pm »
Of course the tools were not what they are now.
No one in their right mind would use anything other than an HDL for designing a CPU core these days, except as a hobby project or a silly challenge.

But I've witnessed the logic part of ASICs designed with logic gates rather than Verilog or VHDL as late as the early 2000s at some companies, probably because the engineers didn't master any HDL. And this was simpler stuff than CPUs.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9893
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #136 on: July 15, 2023, 08:43:58 pm »

While we are free to do anything we want in a hobby project, microcoding is mostly obsolete other than in custom silicon CPU designs.  The large word width consumes memory rapidly, making it a questionable choice for use in FPGAs.  But it can provide speed advantages if combined with a clean architecture. 


One-hot encoding of the state variables of an FSM can be very wide, but we still use one-hot to avoid decoding the state.  But there may be just two variables, so it isn't really that much of a burden unless certain flops need to be duplicated by the router.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #137 on: July 16, 2023, 08:08:55 am »
I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was larger than the chip!  So they broke it down a bit to create nanocoding, where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.

Ironically, they had to do this to shorten the 68020's time to market.
I was there when it happened, I still have their brochure  :o :o :o
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #138 on: July 16, 2023, 08:30:01 am »
Because it is a simple fully documented CPU capable of handling many tasks
...
I don't mention RISC-V because I don't know anything about it.  I still wouldn't recommend it to the newcomer still struggling to grasp FSMs.

so, you know nothing about it, but you still wouldn't recommend RISC-V to the newcomer struggling to grasp FSMs.
And you suggest the worst approach ever instead.
Makes sense.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #139 on: July 16, 2023, 09:11:13 am »
There is no point in implementing an ISA that swamps the needs of the project.

Projects? LC3 comes with a barely adequate C compiler at best, so ... even on the toolchain side that thing is totally useless.

LC3 has been around for a while (10 years?) and had time to become deprecated in every university course before anyone tried to extend its ISA. You, who keep blabbering about it: have you ever done it? Obviously not; people just played with some simple demos to fill their mouths and do nothing but repeat the same usual PDF that no one cares about ...

RISC-V, instead, was born with the precise intention of giving anyone the possibility not only to study the ISA and its possible implementations { multi-cycle, pipelined }, but also to extend it without any legal repercussions, which was not possible with MIPS.


And here we are to better understand what really bothers me:

In 2005, I received a legal letter from SONY for posting on my website an attempt to decode the BIOS of their PlayStation 1 (MIPS R3000/LE), and a few months later a second legal letter from MIPS Inc. for modifying the MIPS R2K ISA as "MIPS++": on my website I suggested implementing the whole PS-ONE in an FPGA, but with more RAM and a couple of extensions to the ISA to make it like what today is known as "MIPS32".

Briefly, the first letter said something like "blablabla, the BIOS is the intellectual property of SONY, you are only authorized to use the console to run genuine gaming software, you are not authorized to post any reverse engineering as it may be used to hack third party software affiliated with SONY, blablabla" (I think they meant affiliated software houses); while the second letter said that you can do whatever you want, but it shouldn't be called "MIPS" or "MIPS-compliant" without contacting their legal department and paying royalties.

With RISC-V there is no "cease and desist or pay a hefty fine", and nobody, ___ nobody ___, will ever get a legal letter for publishing an extension of the ISA, say, adding x86-style SIMD.

Perhaps it's only a small step forward for open source/hardware (how many people will actually add something really useful?), but it's potentially an insanely great step forward for freedom: so don't throw it in the corner just because you assume "LC3" is easier!
« Last Edit: July 16, 2023, 09:13:15 am by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #140 on: July 16, 2023, 10:53:10 am »
I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was longer than the chip!  So they broke it down a bit to create nanocoding where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.

Ironically, they had to do this to shorten the 68020's time to market.
I was there when it happened, I still have their brochure  :o :o :o

I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #141 on: July 16, 2023, 03:47:07 pm »
I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.

68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors

The 68030 is essentially a 68020 with a memory management unit and instruction and data caches of 256 bytes each. The main idea presented in my MC68020 brochure is a traditional microcode memory divided into two parts:
  • a microcode part, which controls the microaddress sequencer
  • a nanocode part, which controls the execution unit
The tricky bit with the 020 is that the nanocode is stored in a nanoROM whose address decoder does not fully decode the address. That is, different microcode addresses can select the same row of the nanoROM, which is quite useful when different microcode instructions want to send identical control signals to the execution unit: it allows precious silicon area to be saved wherever there is redundancy in the nanocode ROM, provided the optimization is actually performed.

But they didn't care too much about redundancy in the nanocode ROM, and this was done to save development time, to reduce the chip's time-to-market and counter the competition!
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #142 on: July 16, 2023, 04:29:56 pm »
e.g. the 68000 has 544 x 17-bit "microcode words" which dispatch into 366 x 68-bit "nanocode words". It doesn't support the 2 bits for scale, so they are fixed at 0b00; this addressing possibility was left open during the MACSS/68k design and was actually supported from the 68020 up, but on the 68020 the redundancy of this extra EA mode was not optimized in the nanocode ROM, and it was finally fixed in the 68030.

The main disadvantage of a microcoded circuit lies in its generality. If you don't find the repeating patterns and optimize them, you are wasting silicon, especially because the 68k ISA is orthogonal: that EA extension is used by many CPU instructions, and if it's not optimized, the microcode is repeated at the expense of more transistors, making a bigger circuit because it requires a bigger nanoROM.

They didn't have enough time to optimize the nanoROM in the 68020; they did it some years later in the 68030, and it was vastly better.
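
To make the structure concrete, here is a hypothetical Verilog sketch of such a two-level control store, using the 68000-style dimensions from the post above; the mapping table and ROM contents are purely illustrative, not Motorola's actual design.

Code: [Select]
// Two-level control store: wide nanocode rows shared between microwords.
module two_level_ctrl_store (
    input  wire        clk,
    input  wire [9:0]  uaddr,     // microcode address from the sequencer
    output reg  [16:0] uword,     // 544 x 17-bit microcode: sequencing control
    output reg  [67:0] nword      // 366 x 68-bit nanocode: datapath control
);
    reg [16:0] urom [0:543];      // microcode ROM
    reg [8:0]  umap [0:543];      // many-to-one map into the nanoROM, so
                                  // microwords needing identical datapath
                                  // controls share a single wide row
    reg [67:0] nrom [0:365];      // nanoROM

    always @(posedge clk) begin
        uword <= urom[uaddr];
        nword <= nrom[umap[uaddr]];
    end
endmodule

In the real chip the sharing came from a row decoder that deliberately left some address bits undecoded, rather than from an explicit lookup table, and the area saving only materializes if someone spends the time to find the shareable rows, which is exactly the step that was skipped on the 020.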
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9893
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #143 on: July 16, 2023, 05:20:35 pm »
In terms of RISC-V, is there a hardware diagram similar to that of LC3 where each major block is shown along with the inputs, outputs and control signals?

Is there a state diagram for a prototypical design variant?  Something that can be reduced to HDL with little effort?

Is there sufficient documentation that a first semester student can get the device to work?

I haven't researched RISC-V enough to know the answers to any of those questions but I also haven't stumbled over documentation at the level discussed above.

Copy and paste from somebody else's design doesn't count.  Otherwise, why not use the CPU designs provided by the FPGA vendors?  They're pretty well understood.  How about the cores at OpenCores.org?  The T80 core works well for CP/M and various arcade games based on the original Z80, like PacMan.  It's pretty flexible in terms of adding peripherals.

There are a bunch of RISC-V boards at Amazon.  Should be easy to get started.  Some will run Linux...

« Last Edit: July 16, 2023, 05:22:24 pm by rstofer »
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #144 on: July 16, 2023, 05:39:14 pm »
I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.

68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors

The 68030 is essentially a 68020 with a memory management unit and instruction and data caches of 256 bytes each. The main idea presented in my MC68020 brochure is a traditional microcode memory divided into two parts:
  • a microcode part, which controls the microaddress sequencer
  • a nanocode part, which controls the execution unit
The tricky bit with the 020 is that the nanocode is stored in a nanoROM whose address decoder does not fully decode the address. That is, different microcode addresses can select the same row of the nanoROM, which is quite useful when different microcode instructions want to send identical control signals to the execution unit: it allows precious silicon area to be saved wherever there is redundancy in the nanocode ROM, provided the optimization is actually performed.

But they didn't care too much about redundancy in the nanocode ROM, and this was done to save development time, to reduce the chip's time-to-market and counter the competition!

I'm not following your logic.  Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time? 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #145 on: July 16, 2023, 07:01:23 pm »
68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors
68040: 1990 ~1,200,000 transistors.

Quite a jump with a very different implementation. The first pass of the 68040 was brilliant, but mismanagement meant they struggled to crank it to more than 40MHz, while the 80486 went 25MHz to 33MHz to 66MHz to 100MHz, and greatly outperformed it.
 
The following users thanked this post: DiTBho

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #146 on: July 16, 2023, 07:35:28 pm »
I'm not following your logic

I only told you that, when it is not optimized, microcoding can contribute to wasted area, and I gave you a real example, as I have the full documentation about it.

Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time?

I think I clearly pointed to the 68020 as an example where non-optimization of the nanoROM was accepted, at the expense of wasted silicon area, to reduce "time-to-market", because the Motorola guys had the competition breathing down their necks.

That's all! And it means only one thing: although it happened only with the 68020 and no other CPU of the 68k family, it's not theoretical; it can happen in a company, and the reason is "strategic business".
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #147 on: July 16, 2023, 07:46:20 pm »
RISC-V

In terms of feasibility, several members of this forum have managed to implement their own version of RISC-V in HDL.
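
As a feasibility data point, even a toy subset fits in a screenful of Verilog. A hypothetical sketch that decodes and executes only ADDI from RV32I (fetch logic, the other opcodes and all hazards deliberately omitted):

Code: [Select]
// RV32I fragment: decode and execute ADDI only, as a feasibility sketch.
module rv32i_addi_only (
    input  wire        clk,
    input  wire [31:0] instr                // fetched instruction (fetch not shown)
);
    reg [31:0] x [0:31];                    // register file, x0 kept at zero
    integer i;
    initial for (i = 0; i < 32; i = i + 1) x[i] = 32'h0;

    wire [6:0]  opcode = instr[6:0];
    wire [4:0]  rd     = instr[11:7];
    wire [2:0]  funct3 = instr[14:12];
    wire [4:0]  rs1    = instr[19:15];
    wire [31:0] imm    = {{20{instr[31]}}, instr[31:20]};  // sign-extended I-imm

    always @(posedge clk)
        if (opcode == 7'b0010011 && funct3 == 3'b000 && rd != 5'd0)
            x[rd] <= x[rs1] + imm;          // ADDI rd, rs1, imm
endmodule

And the full unprivileged spec, plus assembler, compiler and simulator support, is freely available, which is the whole point compared to LC3 or MIPS.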

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #148 on: July 16, 2023, 08:06:52 pm »
68040: 1990 ~1,200,000 transistors.

Quite a jump with a very different implementation. The first pass of the 68040 was brilliant, but mismanagement meant they struggled to crank it to more than 40MHz, while the 80486 went 25MHz to 33MHz to 66MHz to 100MHz, and greatly outperformed it.

I never worked with the 68040; the closest I ever came to touching an 040 was when I replaced the 68LC040 CPU in my Apple LC475 with a 68060 via a SmartSocket plus complete ROM hacking, as the 060 lacks some instructions which need to be emulated in software. And even when it never throws an unimplemented-opcode exception, I found the floating point unit of a FULL 68060 (with MMU and FPU) to be several orders of magnitude slower than the FPU of a Pentium 1!

On the contrary, however, the integer unit of a 68060@100MHz is 3x faster than a Pentium 1@100MHz.

That's why the CyberStorm060 (CPU accelerator for the Amiga 2000, 3000, 4000) was mostly used for "fixed point" rather than "floating point" processing, and why my customers' VME industrial embroidery machine controllers, based on a pair of 060 CPUs @100MHz in SMP configuration, use a pair of SHARC DSP units, attached to a crossbar matrix, for floating point calculations.

I vaguely know that the problem was with the implementation: Intel and AMD were heavily pipelining the FPUs of their 5th-generation 32-bit x86 CPUs, while Motorola must have reused a non-pipelined floating point unit for the 060 ...

... but I have no idea of the reason  :-//
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #149 on: July 16, 2023, 08:23:08 pm »
I'm not following your logic

I only told you that, when not optimized, microcoding can contribute to wasting area, and I gave you a real example as I have the full documentation about it.

Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time?

I think I clearly pointed to you as an example the 68020 where non-optimization of the nanoROM was accepted, at the expense of having wasted area of silicon, to reduce "time-to-market" because Motorola guys had the competitors breathing down their necks.

That's all! and it means only one thing: although it happened only with the 68020 and with no other CPU of the 68k family, it's not theoretical, it can happen in a Company, and the reason is "strategic business".

OK, thanks for the update.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8700
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #150 on: July 16, 2023, 09:36:12 pm »
68040: 1990 ~1,200,000 transistors.

Quite a jump with a very different implementation. The first pass of the 68040 was brilliant, but mismanagement meant they struggled to crank it to more than 40MHz, while the 80486 went 25MHz to 33MHz to 66MHz to 100MHz, and greatly outperformed it.

I never worked with the 68040; the closest I ever came to touching an 040 was when I replaced the 68LC040 CPU in my Apple LC475 with a 68060 via a SmartSocket plus complete ROM hacking, as the 060 lacks some instructions which need to be emulated in software. And even when it never throws an unimplemented-opcode exception, I found the floating point unit of a FULL 68060 (with MMU and FPU) to be several orders of magnitude slower than the FPU of a Pentium 1!

On the contrary, however, the integer unit of a 68060@100MHz is 3x faster than a Pentium 1@100MHz.

That's why the CyberStorm060 (CPU accelerator for the Amiga 2000, 3000, 4000) was mostly used for "fixed point" rather than "floating point" processing, and why my customers' VME industrial embroidery machine controllers, based on a pair of 060 CPUs @100MHz in SMP configuration, use a pair of SHARC DSP units, attached to a crossbar matrix, for floating point calculations.

I vaguely know that the problem was with the implementation: Intel and AMD were heavily pipelining the FPUs of their 5th-generation 32-bit x86 CPUs, while Motorola must have reused a non-pipelined floating point unit for the 060 ...

... but I have no idea of the reason  :-//
I was surprised they ever finished the 68060. Motorola was so committed to Power PC at that point, even though their strategy for it was brain dead. Floating point performance wasn't amazing on any chip at that time. People had problems deciding how much of their transistor budget to commit to the FPU, as they were mostly judged on integer performance. Intel and AMD performance was crippled by emulating an 8087. x86 floating point performance didn't start to take off until SSE was introduced.
 
The following users thanked this post: DiTBho

Offline glenenglish

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr.
Re: MCU with FPGA vs. SoC FPGA
« Reply #151 on: September 09, 2023, 04:52:19 am »
I would suggest a fabric (soft)  microcontroller over a SoC UNLESS you really really need the horsepower.

The SoC is an order of magnitude more complex to learn and use compared to a Microblaze  or RISC-V etc.

like 10x the documentation at least.... I would spend my whole life learning everything about Xilinx MPSoC, but Microblaze is a cinch

But, if you need MIPS, there is no substitute for the hard processor (dual core at 800 MHz executing 2 DMIPS/MHz per core, with 128-bit wide buses) versus a 200 MHz core with a 32-bit bus and 1 DMIPS/MHz.  Programming the fabric processor is also easier: none of the whole boot sequence stuff. Depends on your use case.....

glen.
Xilinx Alliance Partner.
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #152 on: September 09, 2023, 09:28:08 am »
I would suggest a fabric (soft)  microcontroller over a SoC UNLESS you really really need the horsepower.

The SoC is an order of magnitude more complex to learn and use compared to a Microblaze  or RISC-V etc.

like 10x the documentation at least.... I would spend my whole life learning everything about Xilinx MPSoC, but Microblaze is a cinch

But, if you need MIPS, there is no substitute for the hard processor (dual core at 800 MHz executing 2 DMIPS/MHz per core, with 128-bit wide buses) versus a 200 MHz core with a 32-bit bus and 1 DMIPS/MHz.  Programming the fabric processor is also easier: none of the whole boot sequence stuff. Depends on your use case.....

glen.
Xilinx Alliance Partner.

Interesting perspective.  I like the assertion that it is much more complex to learn to use a "SoC" than to build your own SoC in the FPGA.  How does that work exactly???
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19616
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #153 on: September 09, 2023, 10:03:23 am »
I would suggest a fabric (soft)  microcontroller over a SoC UNLESS you really really need the horsepower.

The SoC is an order of magnitude more complex to learn and use compared to a Microblaze  or RISC-V etc.

like 10x the documentation at least.... I would spend my whole life learning everything about Xilinx MPSoC, but Microblaze is a cinch

But, if you need MIPS, there is no substitute for the hard processor (dual core at 800 MHz executing 2 DMIPS/MHz per core, with 128-bit wide buses) versus a 200 MHz core with a 32-bit bus and 1 DMIPS/MHz.  Programming the fabric processor is also easier: none of the whole boot sequence stuff. Depends on your use case.....

glen.
Xilinx Alliance Partner.

Interesting perspective.  I like the assertion that it is much more complex to learn to use a "SoC" than to build your own SoC in the FPGA.  How does that work exactly???

A combination of
  • more complex CPU
    • longer datasheet, more errata and more possibility of surprises :(
    • more possibility of surprises when using C without realising what C avoids guaranteeing :(
  • different tooling for interconnecting the logic with CPU
  • optionally having an RTOS or full-blown operating system in the CPU
  • tying all the radically different development tools together in a coherent whole

That can be mitigated by better tooling and libraries, but the underlying issues persist.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online gnuarm

  • Super Contributor
  • ***
  • Posts: 2226
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #154 on: September 09, 2023, 10:41:26 am »
I would suggest a fabric (soft)  microcontroller over a SoC UNLESS you really really need the horsepower.

The SoC is an order of magnitude more complex to learn and use compared to a Microblaze  or RISC-V etc.

like 10x the documentation at least.... I would spend my whole life learning everything about Xilinx MPSoC, but Microblaze is a cinch

But, if you need MIPS, there is no substitute for the hard processor (dual core at 800 MHz executing 2 DMIPS/MHz per core, with 128-bit wide buses) versus a 200 MHz core with a 32-bit bus and 1 DMIPS/MHz.  Programming the fabric processor is also easier: none of the whole boot sequence stuff. Depends on your use case.....

glen.
Xilinx Alliance Partner.

Interesting perspective.  I like the assertion that it is much more complex to learn to use a "SoC" than to build your own SoC in the FPGA.  How does that work exactly???

Quote
A combination of
  • more complex CPU

A RISC-V is more complex than a RISC-V???

Quote
  • longer datasheet, more errata and more possibility of surprises :(
  • more possibility of surprises when using C without realising what C avoids guaranteeing :(

You seem to be reaching pretty far for this one.

Quote
  • different tooling for interconnecting the logic with CPU

"Tooling"???  Yes, a huge reach.

Quote
  • optionally having an RTOS or full-blown operating system in the CPU

Sorry... is that a plus or a minus???

Quote
  • tying all the radically different development tools together in a coherent whole

Wow!  Yes, you are really working to prove your case.

Quote
That can be mitigated by better tooling and libraries, but the underlying issues persist.

If you say so.
    Rick C.  --  Puerto Rico is not a country... It's part of the USA
      - Get 1,000 miles of free Supercharging
      - Tesla referral code - https://ts.la/richard11209
     

    Online dietert1

    • Super Contributor
    • ***
    • Posts: 2091
    • Country: br
      • CADT Homepage
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #155 on: September 09, 2023, 11:10:40 am »
Many years ago I designed a digital audio project on a Spartan50AN. It included a picoblaze for running the user interface, with an LCD display, two rotary encoders, an IR receiver and a UART. The SPDIF receiver was a helper chip, used to implement the input MUX and the PLL and to upsample to 96 kHz. Then I implemented a minimal DSP for a stereo crossover, as far as I remember running at 50 MHz. The channel program implementing the various filters needed to run twice within the ~10 usec sample interval. The filter coefficients could be configured on the fly, and the device had a multichannel audio DAC to implement the analog audio outputs.
Implementing this required some months. The picoblaze embedded processor was IP provided by Xilinx, including the assembler. Some years later the same application was implemented with a Kinetis MCU instead of the FPGA. An Arm Cortex-M4 was strong enough to implement the DSP stuff. Of course the FPU helped a lot for filter configuration.
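
For context, the per-channel filters in such a crossover are typically biquad sections. A hypothetical fixed-point Verilog sketch of one section (Direct Form I with Q2.14 coefficients; all widths and the scaling are illustrative assumptions, not the original design):

Code: [Select]
// One biquad section: y = b0*x + b1*x1 + b2*x2 - a1*y1 - a2*y2
module biquad_df1 (
    input  wire               clk,
    input  wire               sample_tick,             // ~96 kHz strobe
    input  wire signed [15:0] x_in,
    input  wire signed [17:0] b0, b1, b2, a1, a2,      // Q2.14 coefficients
    output reg  signed [15:0] y_out
);
    reg signed [15:0] x1, x2, y1, y2;

    // Full-precision accumulate; saturation omitted for brevity.
    wire signed [36:0] acc = b0 * x_in + b1 * x1 + b2 * x2
                           - a1 * y1   - a2 * y2;
    wire signed [15:0] y_new = acc[29:14];             // drop the Q2.14 fraction

    always @(posedge clk)
        if (sample_tick) begin
            x2 <= x1;  x1 <= x_in;
            y2 <= y1;  y1 <= y_new;
            y_out <= y_new;
        end
endmodule

At 50 MHz and a ~10 usec sample interval there are hundreds of spare cycles per sample, which is presumably why a small channel program time-sharing one multiplier was enough for all the filters.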

    I would recommend learning Zynq. It will take some months, but not your whole life.

    Regards, Dieter

    Edit: At the time it wasn't a microblaze but a picoblaze.
    « Last Edit: September 09, 2023, 12:50:16 pm by dietert1 »
     

    Offline nctnico

    • Super Contributor
    • ***
    • Posts: 27003
    • Country: nl
      • NCT Developments
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #156 on: September 12, 2023, 12:16:41 am »
    I would suggest a fabric (soft)  microcontroller over a SoC UNLESS you really really need the horsepower.

    The SoC is an order of magnitude more complex to learn and use compared to a Microblaze  or RISC-V etc.

    like 10x the documentation at least.... I would spend my whole life learning everything about Xilinx MPSoC, but Microblaze is a cinch
A really hard no to that. A SoC comes with libraries / OS / SDK that allow you to implement a system quickly. Just take a look at how much functionality there is in a Linux distribution; you'd be a fool to discard that. Using an FPGA only adds complexity, by requiring you to create a logic design for what you could buy off-the-shelf. I have a decent number of SoC designs under my belt, and none of them required reading a significant portion of the documentation, because the software is already provided by the vendor.
    There are small lies, big lies and then there is what is on the screen of your oscilloscope.
     

    Offline Berni

    • Super Contributor
    • ***
    • Posts: 4961
    • Country: si
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #157 on: September 12, 2023, 06:18:25 am »
But all that stuff the SoC comes with is only useful if you actually need it.

Yes, if you have to do things that are a solved problem in Linux, like networking, scriptability, media encode/decode etc., then a SoC is an obvious choice. You slap on the Linux image they give you and you are up and running.

But if you just need a more microcontroller level of functionality, then it makes no sense to have a built-in SoC. In those cases you are not gaining any useful functionality from having Linux running on it. You can have some serial peripherals and GPIO just as well in bare metal C++, and if you want multithreading, slap on an RTOS (which you usually also get ready-made for these premade soft cores). It boots within a millisecond too, and you get tighter control over timing.

Not to mention that FPGAs with SoCs built in tend to be a fair bit more expensive than generic ones.

Also, the thing with making people not deeply familiar with Linux use embedded Linux is that you get a lot of hacky ways of making Linux do things, since they don't have the knowledge to do it properly. As a result you get products that take 1 minute to boot and sometimes randomly break, or need a reboot for some reason. I have seen some horrifically bad implementations of embedded Linux, even from big companies (like Philips smart TVs; they are a mess).
     

    Online tggzzz

    • Super Contributor
    • ***
    • Posts: 19616
    • Country: gb
    • Numbers, not adjectives
      • Having fun doing more, with less
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #158 on: September 12, 2023, 08:44:00 am »
But all that stuff the SoC comes with is only useful if you actually need it.

Yes, if you have to do things that are a solved problem in Linux, like networking, scriptability, media encode/decode etc., then a SoC is an obvious choice. You slap on the Linux image they give you and you are up and running.

But if you just need a more microcontroller level of functionality, then it makes no sense to have a built-in SoC. In those cases you are not gaining any useful functionality from having Linux running on it. You can have some serial peripherals and GPIO just as well in bare metal C++, and if you want multithreading, slap on an RTOS (which you usually also get ready-made for these premade soft cores). It boots within a millisecond too, and you get tighter control over timing.

Not to mention that FPGAs with SoCs built in tend to be a fair bit more expensive than generic ones.

Also, the thing with making people not deeply familiar with Linux use embedded Linux is that you get a lot of hacky ways of making Linux do things, since they don't have the knowledge to do it properly. As a result you get products that take 1 minute to boot and sometimes randomly break, or need a reboot for some reason. I have seen some horrifically bad implementations of embedded Linux, even from big companies (like Philips smart TVs; they are a mess).

    Very true, especially the comment about inexperience/ignorance leading to hacky ways of making Linux do something.

    That, of course, is true in other fields; see too many examples in https://thedailywtf.com/

    At a place where I once worked, someone had the job of getting a datastream from one Unix box to another. The obvious solution would be based on TCP, sockets, and two tiny processes. But they  knew SQL, so they wrote the data to a database, and the database(?) on the other box polled it occasionally.
    There are lies, damned lies, statistics - and ADC/DAC specs.
    Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
    Having fun doing more, with less
     

    Offline nctnico

    • Super Contributor
    • ***
    • Posts: 27003
    • Country: nl
      • NCT Developments
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #159 on: September 12, 2023, 09:07:21 am »
But all that stuff the SoC comes with is only useful if you actually need it.

Yes, if you have to do things that are a solved problem in Linux, like networking, scriptability, media encode/decode etc., then a SoC is an obvious choice. You slap on the Linux image they give you and you are up and running.

But if you just need a more microcontroller level of functionality, then it makes no sense to have a built-in SoC. In those cases you are not gaining any useful functionality from having Linux running on it. You can have some serial peripherals and GPIO just as well in bare metal C++, and if you want multithreading, slap on an RTOS (which you usually also get ready-made for these premade soft cores). It boots within a millisecond too, and you get tighter control over timing.
You are making the exact same argument: use an off-the-shelf CPU + peripherals plus an existing OS / driver package as an ecosystem. In the end, using FreeRTOS requires knowledge as well.

    An FPGA + CPU (either softcore or as IP block) only serves one niche: the one where you need to process a lot of data coming from a source that isn't supported by generic peripherals.
    There are small lies, big lies and then there is what is on the screen of your oscilloscope.
     

    Offline Berni

    • Super Contributor
    • ***
    • Posts: 4961
    • Country: si
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #160 on: September 12, 2023, 10:15:05 am »
    True that depends on what RTOS you end up using.

The kind of RTOS I was talking about is more the FreeRTOS kind, where the whole thing is basically just a glorified multithreading library for C, rather than an actual OS where the OS manages drivers, services, filesystems etc. Sure, you still need to know how to use something like FreeRTOS in order to make it work, but it is something you can figure out in an afternoon of reading the documentation.

When it comes to the big boy RTOSes with all that OS stuff included, they are more of a niche. If you are using one, you usually have some good, specific reason for using it. Those are indeed complex beasts.

Personally I am more of a fan of using an external MCU rather than a softcore, but some might prefer the latter.
     

    Online tggzzz

    • Super Contributor
    • ***
    • Posts: 19616
    • Country: gb
    • Numbers, not adjectives
      • Having fun doing more, with less
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #161 on: September 12, 2023, 10:26:32 am »
    True that depends on what RTOS you end up using.

The kind of RTOS I was talking about is more the FreeRTOS kind, where the whole thing is basically just a glorified multithreading library for C, rather than an actual OS where the OS manages drivers, services, filesystems etc. Sure, you still need to know how to use something like FreeRTOS in order to make it work, but it is something you can figure out in an afternoon of reading the documentation.

When it comes to the big boy RTOSes with all that OS stuff included, they are more of a niche. If you are using one, you usually have some good, specific reason for using it. Those are indeed complex beasts.

Personally I am more of a fan of using an external MCU rather than a softcore, but some might prefer the latter.

    I'm always concerned about the availability and quality of the peripheral libraries, particularly for more complex peripherals such as ethernet or USB.

Personally I am more of a fan of MCUs where the toolsets guarantee system performance/timing. I too like a simple RTOS, preferably one implemented in silicon.
    There are lies, damned lies, statistics - and ADC/DAC specs.
    Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
    Having fun doing more, with less
     

    Offline nctnico

    • Super Contributor
    • ***
    • Posts: 27003
    • Country: nl
      • NCT Developments
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #162 on: September 12, 2023, 12:40:42 pm »
    True that depends on what RTOS you end up using.

The kind of RTOS I was talking about is more the FreeRTOS kind, where the whole thing is basically just a glorified multithreading library for C, rather than an actual OS where the OS manages drivers, services, filesystems etc. Sure, you still need to know how to use something like FreeRTOS in order to make it work, but it is something you can figure out in an afternoon of reading the documentation.

When it comes to the big boy RTOSes with all that OS stuff included, they are more of a niche. If you are using one, you usually have some good, specific reason for using it. Those are indeed complex beasts.
In such cases you typically end up with BSD-style OSes like QNX and so on. Still, FreeRTOS has enough pitfalls to get yourself sucked into, especially when it comes to (IP) networking.
    There are small lies, big lies and then there is what is on the screen of your oscilloscope.
     

    Offline jars121Topic starter

    • Regular Contributor
    • *
    • Posts: 51
    • Country: 00
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #163 on: October 05, 2023, 05:20:20 am »
    Quick update on this thread from me (the OP).

    I've spent the last few weeks diving into this topic a bit further. I have an application in mind which I'd like to use as an introduction to FPGAs, for which I'll also need a software component. I'm currently leaning towards a Zynq SoC, as I can build (using existing IP where possible) the data path in hardware, and use one of the Cortex A9 cores to implement the control/configuration path with FreeRTOS. I don't have an immediate need for the second Cortex A9 core, but the option of implementing an SMP FreeRTOS configuration across both cores, or a Linux OS on the second core at some point in the future is quite appealing.

    The possible pitfall I have with the above at the moment is the implementation of certain protocols/interfaces in PL, namely CAN (2.0B and ideally FD) and possibly RS-232/RS-485 and Ethernet, as I'd likely have to try my luck with open source IP cores (e.g. OpenCores.org) where native Xilinx cores aren't available. I could easily implement these in software, but then I'd have some of the data path in PL and some in PS which isn't exactly what I'm after.

    I think I need to get a Zynq development board and make a start one way or the other!
     

    Offline Berni

    • Super Contributor
    • ***
    • Posts: 4961
    • Country: si
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #164 on: October 05, 2023, 05:39:30 am »
Low speed peripherals like UART or CAN are easy to implement in the FPGA, since they are so slow that no fancy tricks or special care are needed to make them run fast enough on any modern FPGA.

    Things like Ethernet get more complicated since they go into high speed territory, you need a fast way to get data in/out of memory, and you need to interface to a TCP/IP stack on the CPU, so you want something that you have drivers for.

But in general you can usually stick open source IPs into your FPGA fairly easily, as long as the bus they use as the interface is somewhat similar; sometimes you might have to write bus adapters to adapt your memory bus to it. Other times you might have issues if the IPs use vendor-specific blocks inside them. Those might need some porting.
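
As an illustration of how little an FPGA UART actually takes, here is a hypothetical minimal 8N1 transmitter (parameters and naming are just for the sketch; a receiver plus a proper bus interface is where the remaining work is):

Code: [Select]
// Minimal 8N1 UART transmitter sketch.
module uart_tx #(
    parameter integer CLK_HZ = 100_000_000,
    parameter integer BAUD   = 115_200
) (
    input  wire       clk,
    input  wire       send,          // pulse to start a transmission
    input  wire [7:0] data,
    output reg        txd = 1'b1,    // line idles high
    output wire       busy
);
    localparam integer DIV = CLK_HZ / BAUD;

    reg [3:0]  bits_left = 0;        // 10 = start + 8 data + stop
    reg [31:0] cnt       = 0;
    reg [9:0]  shreg     = 10'h3FF;

    assign busy = (bits_left != 0);

    always @(posedge clk) begin
        if (!busy) begin
            if (send) begin
                shreg     <= {1'b1, data, 1'b0};  // stop, data (LSB first), start
                bits_left <= 4'd10;
                cnt       <= 0;
            end
        end else if (cnt == DIV - 1) begin
            cnt       <= 0;
            txd       <= shreg[0];
            shreg     <= {1'b1, shreg[9:1]};
            bits_left <= bits_left - 1;
        end else begin
            cnt <= cnt + 1;
        end
    end
endmodule

CAN is more work (bit stuffing, CRC, arbitration and the tighter bit-timing segments), but it still fits comfortably in a small corner of any modern device.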
     

    Offline jars121Topic starter

    • Regular Contributor
    • *
    • Posts: 51
    • Country: 00
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #165 on: October 05, 2023, 09:07:08 am »
    Nobody has mentioned LiteX yet? https://github.com/enjoy-digital/litex

    It's funny you mention that, I was actually reading about LiteX yesterday. I haven't gone into any detail with it just yet so am not really sure what it has to offer, but at face value it looks quite interesting.
     

    Offline betocool

    • Regular Contributor
    • *
    • Posts: 102
    • Country: au
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #166 on: October 05, 2023, 12:34:57 pm »
    Litex has a steep learning curve, from my experience with it. I do like it, mind, but you pretty much write your hardware in Python to convert to verilog and then use the FPGA's IDE to compile. It's tricky sometimes to get hold of good information, but the Litex discord channel is pretty helpful.

    I've done a few simple things on Litex, super easy to build a SoC from scratch, but when you start looking under the hood, it's a rabbit hole.

    My 2c's.

    Cheers,

    Alberto
     

    Offline SiliconWizard

    • Super Contributor
    • ***
    • Posts: 14536
    • Country: fr
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #167 on: October 05, 2023, 07:00:36 pm »
    Probably already mentioned, but a soft core on FPGA will never remotely reach the processing power of a dedicated hardware core made on a similar process node, while drawing a LOT more power for a given frequency.
    OTOH, a soft core will save you the hassle of having an external bus between a FPGA and external CPU, and that's where hybrid FPGAs (FPGA fabric + hard CPU core on the same chip) can be useful if you need the processing power and don't want to deal with an external high-speed bus.

    So only your specific requirements can tell whether a soft core, a hybrid FPGA or a FPGA + external CPU is a proper fit.
     

    Offline jars121Topic starter

    • Regular Contributor
    • *
    • Posts: 51
    • Country: 00
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #168 on: October 05, 2023, 07:56:00 pm »
    Probably already mentioned, but a soft core on FPGA will never remotely reach the processing power of a dedicated hardware core made on a similar process node, while drawing a LOT more power for a given frequency.
    OTOH, a soft core will save you the hassle of having an external bus between a FPGA and external CPU, and that's where hybrid FPGAs (FPGA fabric + hard CPU core on the same chip) can be useful if you need the processing power and don't want to deal with an external high-speed bus.

    So only your specific requirements can tell whether a soft core, a hybrid FPGA or a FPGA + external CPU is a proper fit.

Thank you, this is pretty much exactly what I've surmised from my research over the last couple of weeks. I've successfully implemented a heterogeneous multiprocessor design in the past, but as you've identified, the external high-speed bus between the two processors quickly became a bottleneck. The 32/64-bit wide AXI interfaces built directly into the fabric between the FPGA and the hard core, while not without some complexity, certainly resolve any throughput/bottlenecking issues.
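
For anyone curious what the fabric side of that can look like: the streaming handshake itself is tiny, and the real work sits in the DMA and interconnect configuration. A hypothetical AXI4-Stream source that could feed an AXI DMA in front of one of the PS high-performance ports (all names illustrative):

Code: [Select]
// AXI4-Stream master: pushes packed measurement words toward a DMA.
module axis_source (
    input  wire        aclk,
    input  wire        aresetn,
    // producer side (e.g. the measurement/timestamp logic)
    input  wire [31:0] sample,
    input  wire        sample_valid,
    output wire        sample_ready,
    // AXI4-Stream master toward the AXI DMA / PS
    output wire [31:0] m_axis_tdata,
    output wire        m_axis_tvalid,
    input  wire        m_axis_tready
);
    // The stream protocol is just a valid/ready handshake per beat,
    // so a pass-through is already a legal AXI4-Stream master.
    assign m_axis_tdata  = sample;
    assign m_axis_tvalid = sample_valid && aresetn;
    assign sample_ready  = m_axis_tready && aresetn;
endmodule

A real design would also drive m_axis_tlast to frame the transfers for the DMA, and would put a FIFO on the producer side to ride out back-pressure.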
     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #169 on: October 06, 2023, 02:21:06 am »
    Probably already mentioned, but a soft core on FPGA will never remotely reach the processing power of a dedicated hardware core made on a similar process node, while drawing a LOT more power for a given frequency.
    OTOH, a soft core will save you the hassle of having an external bus between a FPGA and external CPU, and that's where hybrid FPGAs (FPGA fabric + hard CPU core on the same chip) can be useful if you need the processing power and don't want to deal with an external high-speed bus.

    So only your specific requirements can tell whether a soft core, a hybrid FPGA or a FPGA + external CPU is a proper fit.
The problem with the hybrid solution is that very few modern SoCs have a high-speed communication channel which can be used to talk to an FPGA. The best you can find is PCIe 2.0 x1, which would have been fine in 2010, but not in 2023. And even that is usually available only on higher-end SKUs, which are loaded with a bazillion peripherals, most of which are going to be useless for many designs. Even the brand new RaspPi 5 SoC only has a single PCIe 2.0 lane! :palm:
     
    The following users thanked this post: karpouzi9

    Offline Berni

    • Super Contributor
    • ***
    • Posts: 4961
    • Country: si
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #170 on: October 06, 2023, 05:35:27 am »
Yeah, I'd say USB 3.0 is the only commonly available high speed interface on modern SoCs, but that thing brings a lot of extra baggage when all you want to do is send a bunch of raw data across. FTDI does make some high performance USB 3.0 to parallel bus chips, letting a more pedestrian FPGA with no fancy-pants SERDES talk fast.

But yeah, even PCIe is slow compared to the speed you can get in monolithic FPGA+SoC chips. Not only do you get the whole AXI bus across; they will often also give you a dedicated bus to the DDR memory controller, so the FPGA fabric can directly access memory at sizable GB/s speeds.
     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #171 on: October 06, 2023, 01:14:43 pm »
Yeah, I'd say USB 3.0 is the only commonly available high speed interface on modern SoCs, but that thing brings a lot of extra baggage when all you want to do is send a bunch of raw data across. FTDI does make some high performance USB 3.0 to parallel bus chips, letting a more pedestrian FPGA with no fancy-pants SERDES talk fast.
Those FT60x chips eat a lot of IO pins, which can be a problem for smaller FPGAs, as they tend to have less IO as well. They also require a software layer to be implemented on top to facilitate the actual communication, and you pretty much have to use Linux, because rolling out your own USB 3 stack is just not realistic.

In that sense a PCIe link is far superior, as it provides a direct path straight into the host's memory if the endpoint supports bus mastering, so it's as close to having an AXI interface as it can be; and it's usually not that complicated to get PCIe going even in bare-metal mode, because PCIe by design is fundamentally little more than a DMA engine. Another big advantage of PCIe is that it's only two differential pairs per lane, so it's MUCH easier to route than a typical parallel bus.

The only problem is, like I said, the pathetic, obsolete PCIe IP which SoC vendors use, supporting only a single lane of PCIe 2.0. I would be much happier with an x4 link, as even low-end Artix-7 chips have at least a full quad of transceivers, so there is no reason not to implement an x4 link if it were supported by the host side. There is a bit of a catch here, as PCIe 2.0 requires using at least speed grade 2 devices, but in practice it's usually not that big of a deal, as long as you actually remember to specify the SG2 SKU in the BOM.

    Online Tation

    • Contributor
    • Posts: 39
    • Country: pt
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #172 on: October 06, 2023, 03:05:43 pm »
In my experience (I must admit, based on products now obsolete), FPGAs including hard cores are (were?) buggy as hell, sometimes simply unusable. Thus I think a safer path is going through any of the available soft cores (MicroBlaze, NiosII/V, LatticeMico, though I didn't use that last one, ...). Or use an external MCU, provided that you can locate a fast enough I/O channel between the MCU and the FPGA.

    Regards.
     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #173 on: October 06, 2023, 03:46:15 pm »
In my experience (I must admit, based on products now obsolete), FPGAs including hard cores are (were?) buggy as hell, sometimes simply unusable. Thus I think a safer path is going through any of the available soft cores (MicroBlaze, NiosII/V, LatticeMico, though I didn't use that last one, ...). Or use an external MCU, provided that you can locate a fast enough I/O channel between the MCU and the FPGA.

    Regards.
Never heard of anyone having trouble with silicon bugs on the Zynq 7000 platform. Softcores, on the other hand, can be very buggy: for example, see my thread about the 64-bit Microblaze. It feels like a beta version at best. The 32-bit version is very stable though; I never had any problems with it.

    Offline SiliconWizard

    • Super Contributor
    • ***
    • Posts: 14536
    • Country: fr
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #174 on: October 06, 2023, 07:57:02 pm »
In my experience (I must admit, based on products now obsolete), FPGAs including hard cores are (were?) buggy as hell, sometimes simply unusable. Thus I think a safer path is going through any of the available soft cores (MicroBlaze, NiosII/V, LatticeMico, though I didn't use that last one, ...). Or use an external MCU, provided that you can locate a fast enough I/O channel between the MCU and the FPGA.

    Regards.
Never heard of anyone having trouble with silicon bugs on the Zynq 7000 platform. Softcores, on the other hand, can be very buggy: for example, see my thread about the 64-bit Microblaze. It feels like a beta version at best. The 32-bit version is very stable though; I never had any problems with it.

    Indeed.
    Almost every entry-level (and even many higher-level) scope these days uses a Zynq, for instance. That's probably because these are so buggy.

Anyway, the soft vs hard core question, as I mentioned earlier, usually makes no sense for the use cases where you really need a hard core anyway; no need to repeat why.

     
    The following users thanked this post: nctnico

    Online DiTBho

    • Super Contributor
    • ***
    • Posts: 3918
    • Country: gb
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #175 on: October 08, 2023, 01:15:48 pm »
Even the brand new RaspPi 5 SoC only has a single PCIe 2.0 lane! :palm:

    well, so imagine when you have a single PCI bus 32bit/5V, exported as "ISA bus", with a bandwidth of 5Mbyte/sec, then "voltage level adapted" to 3.3V. That's what you get with several cheap PCI to fpga development boards!

    oh, and cheap means > 90 euro
    The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #176 on: October 08, 2023, 01:53:53 pm »
    well, so imagine when you have a single PCI bus 32bit/5V, exported as "ISA bus", with a bandwidth of 5Mbyte/sec, then "voltage level adapted" to 3.3V. That's what you get with several cheap PCI to fpga development boards!

    oh, and cheap means > 90 euro
    Sorry, but I live in 2023, not 1993.

    Online DiTBho

    • Super Contributor
    • ***
    • Posts: 3918
    • Country: gb
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #177 on: October 08, 2023, 02:16:55 pm »
    well, so imagine when you have a single PCI bus 32bit/5V, exported as "ISA bus", with a bandwidth of 5Mbyte/sec, then "voltage level adapted" to 3.3V. That's what you get with several cheap PCI to fpga development boards!

    oh, and cheap means > 90 euro
    Sorry, but I live in 2023, not 1993.

    The board was designed in ~2008, commercialized since 2010.
    The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #178 on: October 08, 2023, 02:27:58 pm »
    The board was designed in ~2008, commercialized since 2010.
    It doesn't matter now, in 2023.

    Online DiTBho

    • Super Contributor
    • ***
    • Posts: 3918
    • Country: gb
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #179 on: October 08, 2023, 02:30:21 pm »
my two OW-boards are based on a Mediatek SoC designed in 2019; they are premium SBCs (150 euro each); they only have two miniPCIe slots, and one is shared with the sATA lane, mutually exclusive: either you use the miniPCIe slot or you use the sATA lane, you can't have both.

Another (cheaper, ~60 euro) example? The RBM33G, designed in December 2017: it has three miniPCIe lanes, two miniPCIe slots and an M.2 slot, but that's because it's a router!

In my opinion it's already a good thing that the RPI-v5 has an exported PCIe lane; it's a great thing, with which you can do great things.

(and by the way, what do you need much more bandwidth for?)
    The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
     

    Online langwadt

    • Super Contributor
    • ***
    • Posts: 4452
    • Country: dk
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #180 on: October 08, 2023, 02:59:39 pm »
my two OW-boards are based on a Mediatek SoC designed in 2019; they are premium SBCs (150 euro each); they only have two miniPCIe slots, and one is shared with the sATA lane, mutually exclusive: either you use the miniPCIe slot or you use the sATA lane, you can't have both.

Another (cheaper, ~60 euro) example? The RBM33G, designed in December 2017: it has three miniPCIe lanes, two miniPCIe slots and an M.2 slot, but that's because it's a router!

In my opinion it's already a good thing that the RPI-v5 has an exported PCIe lane; it's a great thing, with which you can do great things.

(and by the way, what do you need much more bandwidth for?)

yeah, a lane of PCIe Gen 2.0 is ~500 MB/s, and AFAIU the RPi 5 is capable of doubling that by forcing the (unsupported) Gen 3.0
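
for reference, the raw arithmetic behind those numbers (line rates and encodings per the PCIe specs):

Gen 2: 5 GT/s x 8/10 (8b/10b encoding)       = 4.0 Gbit/s  = 500 MB/s per lane
Gen 3: 8 GT/s x 128/130 (128b/130b encoding) ≈ 7.88 Gbit/s ≈ 985 MB/s per lane

so forcing Gen 3 on a single lane does roughly double the raw rate, before protocol overhead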


     

    Offline asmi

    • Super Contributor
    • ***
    • Posts: 2733
    • Country: ca
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #181 on: October 08, 2023, 05:05:08 pm »
    they are premium SBCs (150 euro each)
    :-DD

    In my opinion it's already a good thing that the RPI-v5 has an exported PCIe lane, and it's a great thing, with which you can widely do great things.
    And there are many great things which you can not do...

    (and by the way what do you have to do with much more bandwidth?)
Video processing. PCIe 2.0 x1 is not enough even for 1080p@60. But that is just one example; there are more: high-speed ADCs/DACs come to mind.
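
To put a number on it (assuming uncompressed 24-bit RGB; the overhead figure is a rough estimate):

1920 x 1080 x 60 fps x 3 B ≈ 373 MB/s of raw pixels

against ~500 MB/s of raw Gen 2 x1 bandwidth, of which typically only around 400 MB/s survives TLP/protocol overhead. One stream barely fits with nothing to spare, and 32-bit pixels (≈498 MB/s) don't fit at all.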
    « Last Edit: October 08, 2023, 05:07:43 pm by asmi »
     

    Online DiTBho

    • Super Contributor
    • ***
    • Posts: 3918
    • Country: gb
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #182 on: October 09, 2023, 12:32:20 pm »
    they are premium SBCs (150 euro each)
    :-DD

why are you laughing? It is not an SBC that is purchased in bulk by the masses, precisely because it has a lot of things that common users (usually of RPis & co.) do not need, and the two miniPCIe slots are what make it "premium" compared to the version without them, which costs a lot less.

I don't see what's funny about it, but if you assume that "premium" means "industrial grade", then we don't understand each other.

high-speed ADCs/DACs come to mind.

    great point, here  :-+
    The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
     

    Online gnuarm

    • Super Contributor
    • ***
    • Posts: 2226
    • Country: pr
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #183 on: December 14, 2023, 06:15:07 pm »
    Thanks for your input, it's greatly appreciated.

    The option of having additional 'front-end' MCUs, feeding into the larger, more capable MCU is actually quite an interesting one.

    Or you can do the opposite, putting the entire design in a single FPGA. 

Most people don't think of using FPGAs because they are so used to the sequential mindset of CPUs.  They forget that FPGAs can do all the same things.

    If you need something more complex than you are willing to roll on your own, like an Ethernet interface, then plop down a small MCU with Ethernet. 
    Rick C.  --  Puerto Rico is not a country... It's part of the USA
      - Get 1,000 miles of free Supercharging
      - Tesla referral code - https://ts.la/richard11209
     

    Offline bitslip

    • Contributor
    • Posts: 12
    • Country: us
    Re: MCU with FPGA vs. SoC FPGA
    « Reply #184 on: January 02, 2024, 05:20:20 am »
Even though the OP had indicated using a Zynq, I'm going to put in another vote for a softcore processor + your logic, even if it's on a Zynq physical device.  The VexRiscv open-source RISC-V softcore processor looks ideal, but Microblaze (the 32-bit one, MIPS-like) is pretty easy to work with.  The 64-bit Microblaze was really only introduced to enable addressing memory > 4GB, and yes, it's basically a beta design.

Be warned, you'll likely end up fighting the FPGA tools and Xilinx's weird ideas about how they should work rather than writing code for the Zynq and/or creating your actual design.

I also suggest using Vivado 2018, maybe Vivado 2019; all the newer versions have muddled the flow from "front-end" schematics to the software IDE, because Xilinx is chasing other goals.  I don't think these versions of Vivado are "perfect" (far from it) but the basics work.  You can embed a Microblaze into a part and have "Hello World" running on it in less than 20 minutes.
     

