A High-Performance Open Source Oscilloscope: development log & future ideas
Marco:
All the FPGA should be doing is digital phosphor accumulation.
nctnico:
--- Quote from: Marco on December 05, 2020, 04:33:43 pm ---All the FPGA should be doing is digital phosphor accumulation.
--- End quote ---
No, not at this point in the project. This can be done in software just fine.
If you look at Siglent's history you'll notice they have rewritten their oscilloscope firmware at least 3 times from scratch before getting where they are now. Creating oscilloscope firmware is hard and it is super easy to paint yourself into a corner. The right approach is to get the basic framework set up first (going through several iterations for sure) and then optimise. IMHO the value of this project is going to be in the flexibility to make changes / add new features. If people want crazy high update rates they can buy an existing scope and be done with it.
For example: if the open source platform allows you to add a Python or C/C++ based protocol decoder in a couple of hours, then that is a killer feature. Especially if the development environment already runs on the oscilloscope, so no software installation for cross compiling or whatever is needed. If OTOH you'd need to get a Vivado license first and spend a couple of days understanding the FPGA code, then nobody will want to do this.
A good example is how the Tektronix logic analyser software can be extended by decoders: https://xdevs.com/guide/tla_spi/
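To make that concrete: a complete (toy) 8N1 UART decoder is a screenful of plain Python. This is just a sketch I typed up; the sample format and the way it would plug into the scope software are invented, but it shows the level of effort I'm talking about:

Code:
def decode_uart(samples, sample_rate, baud=115200):
    """Toy 8N1 UART decoder. 'samples' is a list of 0/1 logic levels.
    Returns the decoded bytes. No resynchronisation or error checking."""
    spb = sample_rate / baud                 # samples per bit
    out, i = [], 0
    while i < len(samples):
        if samples[i] == 0:                  # low level = start bit (line idles high)
            byte = 0
            for bit in range(8):             # sample mid-bit, LSB first
                pos = int(i + spb * (1.5 + bit))
                if pos >= len(samples):
                    return out
                byte |= samples[pos] << bit
            out.append(byte)
            i = int(i + spb * 10)            # skip start + 8 data + stop bits
        else:
            i += 1
    return out

A real plug-in would get its samples from the acquisition system and report the decoded bytes to the UI, but the decoding itself really is this simple.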
tom66:
--- Quote from: nctnico on December 05, 2020, 12:18:50 pm ---
--- Quote from: tom66 on November 29, 2020, 06:49:24 pm ---It would of course be very interesting to see what you come up with, nctnico. In the meantime I am focused on the digital systems engineering parts of this project. I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
--- End quote ---
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised where, with the FPGA as the very, very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will fixate a lot of functionality and lock many people out of being able to help improve this design.
--- End quote ---
The problem with software rendering is that you can't do as much in software as you can with dedicated hardware blocks. The present rendering engine achieves ~23k wfm/s and is about as optimised as it can get on a Raspberry Pi ARM processor, taking maximum advantage of the cache design and hardware hacks. And that is without vector rendering, which currently roughly halves performance.
An FPGA rendering engine should easily be able to achieve over 200k wfm/s. While raw waveforms rendered per second is a case of diminishing returns (there is probably not much benefit to the 1 million wfm/s Keysight scopes - feel free to disagree with me here), there is still some advantage to achieving e.g. 100k wfm/s, which is where many US$900 - 1500 oscilloscopes seem to be benchmarking.
This also frees the ARM on the Pi to be used for more useful things. While 100k wfm/s might theoretically be possible if all four ARM cores were kept busy, would that be a good thing? The UI would become sluggish, and features like serial decode would in all likelihood depend on the ARM processor too, and would therefore suffer in performance.
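Some rough numbers on that (assuming ~1000 plotted points per waveform; all figures approximate):

Code:
points = 1000        # samples mapped to screen per waveform (assumed)
target = 100_000     # desired wfm/s
cores, clock = 4, 1.5e9              # Pi 4: four ARM cores at ~1.5 GHz
budget = cores * clock / (points * target)
print(budget)        # -> 60.0 cycles per plotted point, for *everything*

With a cache miss costing on the order of 100 cycles, a 60-cycle budget per point leaves nothing over for the UI or decode, which is exactly the problem.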
As for maintainability, that shouldn't be as much of a concern. Sure, it is true that the raw waveform engine may not be maintained as much (it is a 'get it right and ship it' thing in my mind), but the rest of the UI and application will be in userspace, including cursors, the graticule, that sort of thing. In fact, it is likely that all the FPGA renderer will do is pass out a rendered image of the waveform for a given channel, which the Pi or other applications processor can plot at any desired location. Essentially, as Marco states, the FPGA is doing the digital phosphor part, which is the thing that needs to be fast. The applications software will always have access to the waveform data too.
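To be clear about what that part is: digital phosphor is just hit-count accumulation per pixel. A simplified numpy sketch of the software equivalent (no interpolation between samples, purely to show the principle):

Code:
import numpy as np

H, W = 256, 1024                           # intensity map: 256 vertical bins x 1024 columns
accum = np.zeros((H, W), dtype=np.uint32)  # hit counts = "phosphor" brightness

def accumulate(wave):
    """Add one waveform (W samples, values 0..H-1) to the map."""
    accum[wave, np.arange(W)] += 1         # one hit per screen column

def render():
    """Log-grade the hit counts into an 8-bit image for display."""
    img = np.log1p(accum.astype(np.float32))
    return (255 * img / img.max()).astype(np.uint8)

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, W)
for _ in range(2000):                      # simulate 2000 noisy acquisitions
    wave = 127 + 100 * np.sin(t) + rng.normal(0, 4, W)
    accumulate(np.clip(wave, 0, H - 1).astype(np.intp))

The FPGA engine is the same idea, just with the increments done in dedicated logic against block RAM instead of fighting the CPU cache, which is why it can be so much faster.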
nctnico:
Trust me, nobody cares about waveforms per second! It is not a good idea to pursue a crazy high number just for the sake of achieving it. There are enough ready-made products out there for sale for the waveforms/s aficionado. IIRC the Lecroy Wavepro 7k series tops out at a couple of thousand without any analysis functions enabled.
You have to define the target audience. What if someone has a great idea on how to do color grading differently? If that is 'fixed' inside the FPGA there is no way to change it. Also, with rendering fixed inside the FPGA you basically end up with math traces for anything else, and you can't easily build a waveform processing pipeline (like GStreamer does, for example).
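What I mean by a pipeline: being able to compose stages freely, as in this toy Python sketch (nothing to do with any existing code, just the shape of the idea):

Code:
from functools import reduce

def pipeline(*stages):
    """Chain waveform-processing stages left to right, GStreamer-style."""
    return lambda wave: reduce(lambda acc, stage: stage(acc), stages, wave)

# example stages: each takes a list of samples and returns a new list
def dc_remove(wave):
    mean = sum(wave) / len(wave)
    return [s - mean for s in wave]

def gain(k):
    return lambda wave: [k * s for s in wave]

process = pipeline(dc_remove, gain(2.0))
print(process([1.0, 2.0, 3.0]))   # -> [-2.0, 0.0, 2.0]

You could drop a decoder or a different grading stage anywhere into such a chain; with the pixels generated inside the FPGA that composability is gone.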
I'm 100% sure that the software and GPU approach offers the best flexibility and is the way of the future (also for oscilloscope manufacturers in general). A high-end version could have a PCI Express slot which accepts a high-end video card to do the display processing. The waveforms/s rate goes up immediately and it doesn't take any extra development effort. Again, look at the people upgrading their Lecroy Wavepro 7k series with high-end video cards.
Marco:
--- Quote from: nctnico on December 05, 2020, 04:38:52 pm ---No, not at this point in the project.
--- End quote ---
For a minimum functional prototype to get some hype going that makes sense; high-capture-rate digital phosphor and event detection are high-end features. Budgeting some room/memory for them in the FPGA costs very little time though.