@Ian.M
Thanks! You make a good point, and the scheme you detail certainly would solve that problem.
* FPGAs aren't very efficient at implementing complex but slow logic directly, as it typically requires as many resources as full-speed logic of comparable complexity. Once the complexity goes over a certain threshold, an 8-bit soft-core MCU to manage supervisory tasks may well be the best option.
By that point, I have previously sprung for the on-FPGA resources for a swapforth/j1a core (or my personal j4a core): an almost fully featured, ANS-Forth-compatible 16-bit core that implements the Forth stack machine in hardware, not as 'stack is in RAM, pointer in a register', but as 'both stacks exist in place of registers'.
Runs easily up to about 60MHz, and that's one Forth word call per clock, most of the time. Every indirect call costs you a clock, but in general it executes standard 16-bit Forth hilariously well. The hard-limited stack depths hurt less than you'd think, and it's a joy to use. (See Jonathon's demo of swapforth j1a on a 1k-gate ice40 chip.)
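For anyone unfamiliar with the model, here's a toy Python sketch of the 'both stacks in hardware' idea. The opcodes and names are my invention for illustration, not the real J1 encoding; the point is that call, return, and ALU ops each touch only fixed-depth stacks, never a stack pointer into RAM, so each is a single step:

```python
# Toy two-stack machine sketch (illustrative, NOT the real J1 design):
# both stacks are small fixed-depth structures, standing in for the
# 'stacks in place of registers' hardware arrangement.
class TwoStackMachine:
    DEPTH = 16  # hard-limited stack depth, as on the real core

    def __init__(self):
        self.data = []  # data (parameter) stack
        self.ret = []   # return stack

    def push(self, v):
        if len(self.data) >= self.DEPTH:
            raise OverflowError("data stack full")
        self.data.append(v)

    def add(self):
        # '+' : one word, one step; operands come straight off the stack
        b, a = self.data.pop(), self.data.pop()
        self.push(a + b)

    def call(self, pc, target):
        # a word call pushes the return address onto the return stack...
        self.ret.append(pc + 1)
        return target

    def exit(self):
        # ...and 'exit' pops it back; no RAM traffic either way
        return self.ret.pop()
```

In hardware both stacks are register files, so all of this happens in one clock; the Python is just to show the data movement.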
More recently, I just co-install an ARM SBC of some kind running some variety of Debian, then usually communicate by serial or USB (or both). Plain Python is fine. You can then have stuff like TensorFlow / OpenCL if really heavy lifting is needed, or just talk over the internet to a supercomputer.
Certainly USB 2.0 Hi-Speed does up to 30 MB/s reliably, so long as you have at least 8 MiB of SDRAM on the FPGA side, at any rate. Through an SBC that drops to only around 18 MB/s or so, but I've found 16 MB/s to be enough for my needs. This gets punted over the network, just using netcat on the SBC, and is 'caught' the same way elsewhere on a server with the space to put it to disk.
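As a rough sketch of the 'catch' end: the receiver is nothing more than accept-a-connection-and-write-to-disk. This Python stand-in for `nc -l > file` (port number and filename are illustrative, not anything from my actual setup) shows how little there is to it:

```python
import socket

# Minimal 'netcat catch' sketch: listen on a port and stream whatever
# arrives straight to disk, returning the byte count when the sender
# closes the connection.
def catch(port, path, chunk=1 << 16):
    total = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, open(path, "wb") as f:
            while True:
                data = conn.recv(chunk)
                if not data:  # sender closed: end of capture
                    break
                f.write(data)
                total += len(data)
    return total
```

In practice netcat itself is fine; the sketch just shows why there's nothing fragile in this part of the pipe.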
I haven't looked into it deeply, but I do have a $20 FPGA board on my desk: a Colorlight 5A-75B, with an LFE5U-25F (25k gate) FPGA and 2x gigabit Ethernet interfaces. There's already an open-source RISC-V soft-core SoC project for this that works (including gigabit Ethernet!), but I have yet to check what *sustainable* dataflow it can reliably manage, and whether the couple of MB of SRAM it has is enough.
Either way, I tend to keep the data-capture pipe completely separate from the 'control' FPGA. My business is basically getting evidence from running new equipment, and if it fails, we need to be able to learn from it. When it finally all works, the reports look totally boring, but that's what you want to see: boring reliability. A whole bunch of pulses that look identical, apart from the natural noise. Better not to mess with what works and has proven reliable so far. So the control FPGA is generally on a completely separate board, let alone chip. It's just easier that way.
At most there's an additional link between them, which lets the control FPGA 'snoop' on selected live data.
Most of the stream is somewhat of a black box: 8 channels of 12-bit, 0 to 3.3V data at 1MS/s, mostly watching hydraulic pressure sensors. It acts somewhat as a cross between an oscilloscope and a strip-chart recorder, but one more like a high-speed camera that just runs all day long. Really good for catching things breaking, and it makes it easy to see what went wrong, since any 'triggering' can be done entirely post-process.
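Back-of-envelope on that stream (assuming each 12-bit sample is padded to a 16-bit word, a common packing choice since 12 bits don't byte-align; the actual packing may differ):

```python
# 8 channels x 12-bit samples at 1 MS/s, stored as 16-bit words.
CHANNELS = 8
BYTES_PER_SAMPLE = 2   # 12-bit value padded to 16 bits (assumption)
RATE = 1_000_000       # samples per second, per channel

bytes_per_sec = CHANNELS * BYTES_PER_SAMPLE * RATE
bytes_per_day = bytes_per_sec * 24 * 3600

print(bytes_per_sec)          # 16_000_000 -> 16 MB/s
print(bytes_per_day / 1e12)   # ~1.38 TB per 24 h of recording
```

Which is why 16 MB/s of sustained throughput is the figure that matters, and why the files get big when it runs all day.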
You end up with some big files, but hard drives are really cheap these days. I use the 'snd' program to deal with them: it works really well with arbitrary sample rates, bit depths, channel counts and file sizes. Never crashes, just takes a while to read over the file sometimes, especially if you zoom out too far. But it comes back with a really neat overview, where you can find any glitch, no matter how small, and zoom in on it. It feels a bit like using Google Earth, because you can go from 'all day' to '100 microseconds' or so.
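Since every sample is on disk, a 'trigger' is just a search you run afterwards. A minimal sketch of the idea (names invented; real data would be scanned in chunks off disk rather than held in a list):

```python
# Post-process triggering: find the rising threshold crossings in a
# recorded channel, the way a scope trigger would, but after the fact.
def rising_crossings(samples, threshold):
    hits = []
    for i in range(1, len(samples)):
        # trigger where the previous sample was below threshold
        # and this one is at or above it
        if samples[i - 1] < threshold <= samples[i]:
            hits.append(i)
    return hits

# e.g. a pressure trace with two events crossing a threshold of 4:
print(rising_crossings([0, 1, 5, 2, 0, 6], 4))  # -> [2, 5]
```

The nice part is you can re-run it with a different threshold, channel, or window as many times as you like, which a hardware trigger can never offer.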
But all the above is the recording / science side of the job. For controls, just putting the stuff that needs precision timing in the FPGA, with some dumb state machines, and setting up a low-latency link to where more complex code can run works well enough. The job is basically a DI engine ECU, where I do DI injection and ignition timing, plus some other things as needed. The FPGA is just doing the job that the mechanical fuel system does, whilst being able to be a bit smarter about dealing with known injection latency, for example. It also does some correlation of deskewed pressure data with engine position; having all of that corrected for latency, so it's collated in terms of observations at a given engine position, makes analysis much easier. There's a separate data link and save flow for that.
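The latency correction amounts to pairing each pressure sample with the engine position from a known number of samples earlier. A minimal sketch with invented names and a fixed integer latency (real sensor latency may need fractional-sample interpolation):

```python
# Deskew sketch: pressure[i] was measured `latency` samples after the
# physical event, so pair it with the engine position recorded
# `latency` samples before it.
def deskew(pressure, positions, latency):
    out = []
    for i in range(latency, len(pressure)):
        out.append((positions[i - latency], pressure[i]))
    return out

# e.g. a 1-sample sensor latency: pressure reading 11 actually belongs
# to engine position 0, not 90.
print(deskew([10, 11, 12, 13], [0, 90, 180, 270], 1))
# -> [(0, 11), (90, 12), (180, 13)]
```

Once the data is keyed by engine position like this, comparing cycle against cycle is trivial, which is the whole point of doing the correction before saving.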
What really impresses me is how good the open-source-only FPGA tools are getting, at least for the chips they support well.
The main thing, apart from being able to keep all the toolchain as well as source code within the same box, is that it eliminates all the silly 'IT' failure modes. I've found such systems far more stable to bring out of mothballs after a couple of years. With everything else, there are extra battles: reinstalling the software, getting it relicensed (possibly repurchasing a time-limited license), and then fixing any breakage that occurs because of the new versions.
With the open source box, you can just turn it off, walk away, and let it sit for years. When you dust it off and turn it on, sure, the OS is out of date, but it all just works, exactly as it was. Updating it is easy, quick, and usually painless.
The OSS FPGA tools on a fast new PC are breathtakingly fast, especially compared to the older vendor tools. See for instance icestudio.io: it's more or less the Arduino 'just an app' setup experience, except going from changed Verilog code to a configured FPGA with a blinking LED takes literally seconds. Faster even than the same wait for changed code on an Arduino platform design, or so it seems to me.
But even on an embedded SBC, they're still about as fast as the vendor tools are on said fast PC: maybe 15 to 20 minutes, but doable, for that 8k-gate embedded Forth soft-SoC FPGA recompile.
With quicker turnaround for development than anything but a live Forth system, along with easy immunity to interprocess interlocks (which can be nasty intermittent problems to solve), I think even quite low-end control cases should move from microcontrollers to FPGAs, especially where serviceability matters at all.
It's now quite feasible to put the whole dev toolchain onto an SD card, and have that card served in usable state via a web page running from a $4 ESP microcontroller, where the user BYOs some WiFi-connectable smart device which then provides all the RAM and compute needed to run not only a GUI, but the whole toolchain, including recompilation! I haven't done it yet, but it would be a good student project: esp-link, maybe some use of emscripten, etc.
That would save the cost of the SBC. Just stick a QR code in there to let whoever opens the door connect their smart device and point it at the entry web page. It can work entirely without internet access. Could use something like the Cloud9 web-hosted IDE to make that quite comfortable too (I have hosted that from an embedded system, and it's fairly nice), or just use JupyterLab or so. IDK: not a web dev.
These days the FPGA tools are already available as architecture-portable executables which run using WebAssembly (not in a browser).
You can be up and running in a minute or two, on any platform (the YoWASP project). I do this using Microsoft's WSL2 and it has been good. A little lower level / more flexible than icestudio.io, but very comfortable if you're comfortable in Linux.
As a side note: I've found dumb relay ladder logic and dumb analog alarm relays (and a pulse-presence alarm relay) are all worthwhile for anything safety-critical (especially safety of people/environment, but also safety of plant). It should all be fail-safe and 'programmable' by turning a trimpot, with readily available E-stop buttons. No matter how 'smart' the programmable stuff works day to day, safety is just a whole 'nother ball game! A couple of threshold/alarm units, maybe some interlock switches and physical access locks, go a long way to keeping absent-minded people safe.
I did at first feel like I was going out on a bit of a limb, using an all-open-source toolchain, but I've had excellent results doing so, and it's proven itself time and time again on some older apparatus that gets only intermittent use. It's easy to recommission, tailor to the new job, and be up and running in less time than it takes to just install/update the heavyweight Windows software otherwise needed.
Projects like SymbiFlow, Yosys, IceStorm, Trellis, apycula, YoWASP, etc. are all very much 'gcc for FPGAs', and only improving with time. Probably you'll still get better design performance with vendor tools, especially if you're chasing maximum clock rates or minimum space usage, but for what is typically 'Arduino' territory otherwise, I think there's a very strong case for skipping microcontrollers (including RTOSes) entirely these days, in favour of a hybrid FPGA digital logic / Linux OS SBC approach.
I expect eventually there'll be even smaller, cheaper, lower-end FPGA platforms, and there kind of already is: cf. the Fomu FPGA.
Support for some of the nicer older FPGAs (Xilinx Spartans, MachXO, etc.) is slowly appearing as well.
Were I Intel, I'd be trying to get on top of it.
Anyway, I've got stuff to do. Sorry for the rant; it's probably just that I have to deal with some colleagues stuck in the '90s sometimes.