Questionnaire for those interested in the project.
I'd appreciate any responses; they'll help me understand which features are a priority and where I should focus.
https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/formResponse
All responses will be anonymised and a summary of the results will be posted here (when sufficient data exists).
Introduction

You may prefer to watch the video I have made:
Over the past year and a half I have been working on a little hobby project to develop a decent high-performance oscilloscope, with the intention from the start that it be open source. By 'decent' I mean something that could compete with the likes of the lower-end digital phosphor/intensity-graded scopes, e.g. Rigol DS1000Z, Siglent SDS1104X-E, Keysight DSOX1000, and so on. In other words: an 8-bit ADC, a 1 gigasample per second sampling rate on at least one channel, 200Mpt of waveform memory, and rendering of at least 25,000 waveforms/second.
The project began for a number of reasons. The first was that I wanted to learn and understand more about FPGAs; having previously done little more than blink an LED on an FPGA dev kit, implementing an oscilloscope seemed like a suitably validating challenge. Secondly, I wasn't aware of any high-performance open source oscilloscopes, ones that an engineer could use every day on their desk. I've since become aware of ScopeFun, but this project is a little different: ScopeFun does the data processing on a PC, whereas I intended to create a self-contained instrument with data capture and display in one device. For the display/user interface I utilise a Raspberry Pi Compute Module 3. This is a decent little device, but crucially it has a camera interface port capable of receiving 1080p30 video, which works out to about 2Gbit/s of raw bandwidth. While this isn't enough to stream raw samples from an oscilloscope, it's sufficient once you have a trigger criterion and an FPGA in the loop to capture the raw data.
At the heart of the oscilloscope is a Xilinx Zynq 7014S system-on-chip on a custom PCB, connected to 256MB of DDR3 memory clocked at 533MHz. With the 16-bit memory interface this gives us a usable memory bandwidth of ~1.8GB/s. The Zynq is essentially an ARM Cortex-A9 with an Artix-7-class FPGA on the same die, with a number of high-performance memory interfaces between the two. Crucially, it has a hard on-silicon memory controller, unlike the regular Artix-7, which means you don't use up 20% of the logic area implementing that controller. The FPGA acquires data using an HMCAD1511 ADC, which is the same ADC used in the Rigol and Siglent budget offerings. This ADC is inexpensive for its performance grade (~$70) and available from Digi-Key. A variant, the HMCAD1520, offers 12-bit and 14-bit capability, with 12-bit at 500MSa/s. The ADC needs a stable 1GHz clock, which in this case is provided by an ADF4351 PLL.
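For anyone wondering where that ~1.8GB/s comes from, here's the back-of-envelope arithmetic. The ~85% bus efficiency is my assumption for a hard controller handling mostly sequential bursts; the exact figure depends on the access pattern:

    533MHz clock x 2 (DDR) x 16 bits / 8 = 2.13GB/s peak
    2.13GB/s peak x ~0.85 efficiency     = ~1.8GB/s usable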
Data is captured from the ADC front end and packed into RAM by a custom acquisition engine on the FPGA. The acquisition engine works together with a trigger block, which watches the raw ADC stream to decide when to generate a trigger event and therefore when to start recording the post-trigger portion of the capture. Both pre- and post-trigger capture are implemented, each with a configurable size, from just a few pre-trigger samples up to the full memory buffer. The data is streamed over an AXI DMA peripheral into blocks defined by software running on the Zynq. The blocks are then streamed out of memory into a custom CSI-2 peripheral, also using a DMA block (with a large scatter-gather list created by the ARM). The CSI-2 data bus interface was reverse-engineered from documentation publicly available on the internet and by analysing the data bus of an official Pi camera, slowed down with a modified PLL configuration so it could be captured on my trusty Rigol DS1000Z. I have a working HDL and hardware implementation that reliably runs at >1.6Gbit/s, and application software on the Pi then renders the data transmitted over this interface. Most application software on the Pi is written in Python, with a small bit of C to interface with MMAL and to render the waveforms. The Zynq software is raw embedded C, running on the bare-metal/standalone platform. All Zynq software and HDL was developed with the Vivado and Vitis toolkits from Xilinx.
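To give a flavour of the Zynq software side, here is a minimal sketch of a per-waveform capture using the stock Xilinx standalone AXI DMA driver. This is not the project's actual code (the real design queues many blocks from a scatter-gather list, and the buffer/function names here are illustrative), but it shows the shape of the work the ARM does for every waveform:

#include "xaxidma.h"      /* Xilinx standalone AXI DMA driver */
#include "xil_cache.h"
#include "xparameters.h"

static XAxiDma axi_dma;

/* One-time driver initialisation. */
int acq_dma_init(void)
{
    XAxiDma_Config *cfg = XAxiDma_LookupConfig(XPAR_AXIDMA_0_DEVICE_ID);
    if (cfg == NULL)
        return XST_FAILURE;
    return XAxiDma_CfgInitialize(&axi_dma, cfg);
}

/* Queue one waveform block: the acquisition engine streams samples in
 * over AXI-Stream and the DMA writes them into a buffer in DDR3. */
int acq_capture_block(u8 *buf, u32 len)
{
    /* The ARM reads this buffer after the transfer completes, so any
     * stale cache lines must be invalidated first. */
    Xil_DCacheInvalidateRange((UINTPTR)buf, len);
    return XAxiDma_SimpleTransfer(&axi_dma, (UINTPTR)buf, len,
                                  XAXIDMA_DEVICE_TO_DMA);
}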
Now, the caveats: only edge triggers (rising/falling/either) are currently supported, and only a 1ch acquisition mode is currently implemented; 2ch and 4ch modes are mostly a data decimation problem, but this has not been implemented for the prototype. All rendering is presently done in software on the Pi, as there were some difficulties keeping a prototype GPU renderer stable. The rendering task uses 100% of one ARM core on the Pi (there is almost certainly a threading benefit available, but that is unimplemented at present due to Python GIL nonsense); the ideal goal would be to do the rendering on the Pi's GPU or on the FPGA. A fair bit of the ARM on the Zynq is busy just managing system tasks, like setting up AXI DMA transactions for every waveform, which could probably be sped up if this were all done on the FPGA.
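For anyone unfamiliar with intensity-graded rendering, the expensive inner loop conceptually boils down to histogram accumulation: every captured waveform bumps per-pixel hit counts, and the counts are later mapped to brightness, which is what produces the "digital phosphor" look. A simplified sketch, with arbitrary dimensions and no claim to be the project's actual renderer (a real one would also draw the vertical excursion between adjacent samples, not just one dot per column):

#include <stdint.h>

enum { PLOT_W = 1000, PLOT_H = 256 };      /* 256 rows maps 1:1 to 8-bit ADC codes */
static uint16_t intensity[PLOT_H][PLOT_W]; /* per-pixel waveform hit counts */

/* Fold one captured waveform of n samples into the intensity map. */
void accumulate_waveform(const uint8_t *samples, int n)
{
    for (int x = 0; x < PLOT_W; x++) {
        int idx = (int)((int64_t)x * n / PLOT_W); /* decimate: one sample per column */
        int y = (PLOT_H - 1) - samples[idx];      /* ADC code 0..255 -> screen row */
        if (intensity[y][x] < UINT16_MAX)         /* saturate rather than wrap */
            intensity[y][x]++;
    }
}

Doing this for 22,000+ waveforms a second is why a full ARM core disappears, and why moving it to the GPU or FPGA is attractive.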
The analog front end for now is just AC coupled. I have a prototype AFE designed in LTspice, but I haven't put any proper hardware together yet.
The first custom PCB (the "Minimum Viable Product") was funded by myself and a generous American friend who was interested in the concept. It cost about £1,500 (~$2,000 USD or €1,700) to develop in total, including two prototypes (one with a 7014S and one with a 7020; the 7020 prototype has never been used). This was helped in part by a manufacturer in Sweden, SVENSK Elektronikproduktion, who provided their services at a great price due to their interest in the project (particular thanks to Fredrik H. for arranging this). It is a six-layer board, which made the DDR3 memory interface tricky to implement (eight to ten layers would be ideal), but the overall results were very positive and the interface runs at 533MHz just fine.
The first revision of the board worked, with only minor alterations required. I've nothing but good words to say about SVENSK Elektronikproduktion, who helped bring this prototype to fruition very quickly, even accommodating a last-minute change and resolving a minor error on my part. The board was mostly assembled by pick-and-place, including the Zynq's BGA package and the DDR3 memory, with some parts hand-placed later. The first prototypes were delivered in late November 2019 and I had one up and running by early March 2020; the pandemic meant I had a lot more time at home, so development continued at a rapid pace from then on. The plan was to demonstrate the prototype in person at EMFCamp 2020, but for obvious reasons that event was cancelled.
(Prototype above is the unused 7020 variant.)
Results

I have a working 1GSa/s oscilloscope that can acquire and display >22,000 wfm/s. There is more work to be done, but at this stage the prototype demonstrates that the hardware can meet most of the demands placed on the acquisition system of a modern digital oscilloscope.
The attached waveform images show:
1. 5MHz sine wave, amplitude modulated with a 10kHz sine wave
2. 5MHz sine wave, frequency modulated with a 10kHz sine wave (+4MHz bias)
3. 5MHz positive ramp wave
4. Pseudorandom noise
5. Chirp waveform (~1.83MHz)
6. Sinc pulse
The video also shows a live preview of the instrument in action.
Where next?

Now I'm at a turning point with this project. I had to change job and location for personal reasons, so I took a two-month break from the project while starting at my new employer and moving house. But I'm back to looking at this project, still in my spare time. And, having reflected a bit ...
A couple of weeks ago the Raspberry Pi CM4 was released. It's not pin-compatible with the CM3, which is of course expected, as the Pi 4 has a PCI Express interface and an additional HDMI port. It would make sense to migrate this project to the CM4; the faster processor and GPU present an advantage here. (I have already tested the CSI-2 implementation with a Pi 4 and no major compatibility issues were noted.)
There are also a lot of other things I want to experiment with. For instance, I want to move to a dual-channel DDR3 memory interface on the Zynq, with 1GB of total memory space available. This would quadruple the sampling memory and more than double the memory bandwidth (>3.8GB/s usable), which is beneficial when it comes to trying to do some level of rendering on the FPGA. The PCIe interface on the CM4 is worth looking at for data transfer, but CSI-2 still offers some advantages, namely that it wouldn't be competing for bandwidth with the USB 3.0 or Ethernet peripherals if those are used in a scope product. PCIe would also require a higher grade of Zynq with a hard PCIe core, or a slower HDL implementation of PCIe, which might present other difficulties.
I'm also considering completely ripping up the Pi CM4 concept and going for a powerful SoC+FPGA like a Zynq UltraScale+, but that would be a considerably more expensive part to utilise, and would perhaps change the goal of this project from developing an inexpensive open-source oscilloscope to developing a higher-performance oscilloscope platform for enthusiasts. The cheapest UltraScale+ parts are around $250 USD, but feature a dual ARM Cortex-A53 complex (a considerable upgrade over the ARM Cortex-A9 in the Zynq 7014S), a Mali-400 GPU and DDR4 memory controllers; this would allow for, e.g., an oscilloscope capture engine with gigabytes of sample memory (up to 32GB in the largest parts!), and we'd no longer be restricted to a limited-bandwidth camera interface, which would improve performance considerably.
I think there's a great deal of capability here when it comes to supporting modularity. What I'd like to offer is something along the lines of the original Tek mainframes, where you can swap an acquisition module in and out to change the function of the whole device. A small EEPROM on each module would identify the right software package and bitstream to load, so you could convert your oscilloscope on the fly into e.g. a small VNA, a spectrum analyser, or a CAN/OBDII module with analog channels for automotive work; a sketch of what that ID record might look like is below.
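Nothing here is designed yet, but a module ID record along these lines is what I have in mind; every field name and size below is hypothetical:

#include <stdint.h>

/* Hypothetical ID record stored in each module's EEPROM. On boot the
 * scope would read this, verify it, then load the matching bitstream
 * and software package. All fields are illustrative, not a spec. */
typedef struct __attribute__((packed)) {
    uint32_t magic;          /* marks a valid/initialised record */
    uint16_t module_id;      /* which personality: AFE, VNA, spectrum analyser, CAN/OBDII, ... */
    uint16_t hw_revision;    /* lets software work around board spins */
    char     bitstream[32];  /* FPGA bitstream to load for this module */
    char     sw_package[32]; /* application package to load on the Pi */
    uint32_t crc32;          /* integrity check over the fields above */
} module_id_t;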
The end goal is a handheld, mains- and/or battery-powered oscilloscope with a capacitive 1280x800 touchscreen (plus optional HDMI output), 4 channels at 100MHz bandwidth and 1GSa/s multiplexed, a minimum of 500MSa of acquisition memory, and at least a 30,000 waveforms/second display rate (with a goal of 100k waves/sec rendered and 200k waves/sec captured in segmented memory modes). I also intend to offer a two-channel arbitrary signal generator output on the product, utilising the same FPGA as the acquisition. The product is intended to be open source in its entirety, including the FPGA design and schematics, the firmware on the Zynq, and the application software on the main processor. I'll publish details on these in short order, provided there's sufficient interest.
Full disclosure - I have some commercial interest in the project. It started as just a hobby project, but I've done everything through my personal contracting company, and have been in discussions with a few individuals and companies regarding possible commercialisation. No decisions have been made yet, and I intend for the project to be FOSHW regardless of the commercial aspects.
My questions for everyone here are:
- Does a project like this interest you? If so, why? If not, why not?
- What would you like to see from a Mk2 development, if anything: a more expensive oscilloscope that competes with e.g. the 2000-series from many manufacturers and aims more towards the professional engineer, or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.? (We are talking about a $500 USD difference in pricing. An UltraScale+ part makes this a >$800 USD product, which almost certainly changes the marketability.)
- Would you consider contributing to the development of an oscilloscope? It is a big project for just one guy to complete. There are DSP, trigger engines, an AFE, modules, casing design and many more areas to be completed; hardware design is just a small part of the product. Bugs also need to be found and squashed, and there is documentation to be written. I'm envisioning the capability to add modules to the software, and the hardware interfaces will be documented so that third-party modules can be developed and used.
- I'm terrible at naming products. "BluePulse" is very unlikely to be the long-term name. I'd welcome any suggestions.