A High-Performance Open Source Oscilloscope: development log & future ideas
2N3055:

--- Quote from: james_s on November 19, 2020, 04:04:39 am ---What I'd love to see is this kind of talent go into a really top-notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers, it was a game changer, and I'm betting the same thing could be done with a scope.

--- End quote ---

 :horse:

This has been discussed to death. It cannot be done in any reasonable time frame. Reverse engineering an existing scope to the point where you understand everything about its architecture is much harder than simply designing one from scratch. There have been dozens of attempts, and they ended with loading a custom Linux onto the scope and no way to talk to the acquisition engine. Mainly running Doom on the scope.
A scope architecture is an interleaved design: hardware acquisition, hardware acceleration, system software, scope software and app-side acceleration (GPU etc.).
The manufacturers have already made the design decisions about how to implement all of that. You end up with something that, in the end, won't have many more capabilities than it had at the start. You just spent 20 engineer-years to get the same capabilities and different fonts...
The only way it might make sense to try would be if an existing manufacturer opened their design (published all the internal details) as a starting point. Which will happen, well, never.

A FOSS scope for the sake of FOSS is a waste of time. A GOOD FOSS scope is not... If a good FOSS scope existed, it would start to spread through academia, hobby and industry, and if it were really used, that would fuel its progress. Remember Linux and KiCad... When did they take off? When they became useful and people started using them, not when they became free. They existed and were free for many years, happily ignored by most.

And then you would have many manufacturers making them for a nominal price. It would be easy for them, like Arduino and its clones: someone did the hard work of hardware design, and someone else will take care of the software... That is a dream for manufacturing... Very low cost... I bet that what we estimate now at 600 USD could be had for 300 USD (or less) if mass manufacturing happens...
tom66:

--- Quote from: Someone on November 19, 2020, 12:17:01 am ---Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. This requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, it could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.
--- End quote ---

There is no need to use the fractional position in the situation where one wave point plots across one pixel or less.  At that point any fractional offset is lost to the user unless you do antialiasing.  I prototyped antialiasing in the GPU renderer, and the results were poor because the waveform never looked "sharp".  I suspect no scope manufacturer ships a true antialiased renderer; it just doesn't add anything.

With that in mind, your sin(x)/x problem only becomes an issue at the faster timebases (<50 ns/div), and that is where you start plotting more pixels than you have points, so you read input data more slowly than you write output data.  A sin(x)/x filter is essentially a FIR filter with sinc coefficients loaded in and every Nth coefficient set to zero (if I understand it correctly!).  That makes it prime territory for the DSP blocks on the FPGA.
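
To make the structure concrete, here is a minimal Python sketch of that idea: a windowed-sinc interpolating FIR whose taps aligned with the original samples are exactly zero (except the centre tap). The interpolation factor and tap counts are made-up illustration values, not anything from the actual design:

--- Code: ---
import numpy as np

UPSAMPLE = 4     # interpolation factor L (illustrative)
HALF_SPAN = 16   # half-length of the prototype filter, in upsampled taps

# Windowed-sinc prototype with cutoff at the original Nyquist rate.
# sinc(n/L) is exactly zero whenever n is a nonzero multiple of L,
# which is the "every Nth coefficient is zero" property: original
# samples pass through untouched, only the in-between points are computed.
n = np.arange(-HALF_SPAN, HALF_SPAN + 1)
h = np.sinc(n / UPSAMPLE) * np.hamming(len(n))

def sinx_x_interpolate(samples):
    """Zero-stuff by L, then low-pass filter: classic sinc interpolation."""
    up = np.zeros(len(samples) * UPSAMPLE)
    up[::UPSAMPLE] = samples
    return np.convolve(up, h, mode="same")

# e.g. reconstruct a sparsely sampled sine at 4x the point density
t = np.arange(32)
dense = sinx_x_interpolate(np.sin(2 * np.pi * t / 10))
--- End code ---

On the FPGA the same structure maps to one DSP slice per nonzero tap per phase, which is why it suits the DSP blocks so well.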

There isn't any reason why, with an architecture like the Zynq, you can't do a hybrid of the two: the software renderer configures a hardware rendering engine, for instance by loading a list of pointers and pixel offsets for each waveform.  While the ARM side would probably still end up being the ultimate bottleneck, the present system is doing over 80,000 DMA transfers per second to achieve the waveform update rate, and that is ALL controlled by the ARM.  So if that part is eliminated and driven entirely by the FPGA logic, with the ARM just processing the sin(x)/x offset and trigger position, then the performance probably won't be so bad.
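
As a rough illustration of what "loading a list of pointers and pixel offsets" could look like from the ARM side, here is a hypothetical descriptor table; the field names and widths are invented for the sketch, not the project's actual register layout:

--- Code: ---
import ctypes

class WaveDescriptor(ctypes.Structure):
    """One waveform record for the hardware render engine (hypothetical)."""
    _pack_ = 1
    _fields_ = [
        ("sample_addr",  ctypes.c_uint32),  # physical address of first sample
        ("sample_count", ctypes.c_uint32),  # samples in this record
        ("pixel_offset", ctypes.c_uint16),  # sub-pixel trigger shift, Q8.8
        ("flags",        ctypes.c_uint16),  # bit 0 = last descriptor in batch
    ]

def build_descriptor_list(records):
    """records: iterable of (phys_addr, count, frac_offset in [0,1))."""
    table = (WaveDescriptor * len(records))()
    for i, (addr, count, frac) in enumerate(records):
        table[i].sample_addr = addr
        table[i].sample_count = count
        table[i].pixel_offset = int(frac * 256)   # Q8.8 fixed point
        table[i].flags = 1 if i == len(records) - 1 else 0
    return table

# The ARM would write one table per batch into shared memory and kick
# the engine with a single register write, instead of itself issuing
# one DMA transfer per waveform.
--- End code ---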
sb42:

--- Quote from: james_s on November 19, 2020, 04:04:39 am ---What I'd love to see is this kind of talent go into a really top-notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers, it was a game changer, and I'm betting the same thing could be done with a scope.

--- End quote ---

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.
tom66:

--- Quote from: nctnico on November 18, 2020, 11:21:28 pm ---I think the same design philosophy can support both options. Maybe even on one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500 MHz, 1.25 GSa/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin(x)/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4-channel AFE board for standard high-Z (1 MΩ) probes with a bandwidth of 200 MHz.

--- End quote ---

The present Zynq solution is scalable to a maximum of about 2.5 GSa/s across all active channels.  That would use the bandwidth of two 64-bit AXI master ports and require a complex memory gearbox to ensure the data gets assembled correctly in RAM.  It would also require at least a dual-channel memory controller: the present memory bandwidth is ~1.8 GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing some.

You can't get to 5 GSa/s across all active channels (i.e. 1.25 GSa/s per channel in 4-channel mode, or 2.5 GSa/s per channel in ch1/ch3 mode) without a faster memory bus, AXI bus and fabric, so this platform can't make a 4-channel 500 MHz scope.  That pushes the platform towards the UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi), if you want to be able to do that.
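
The back-of-envelope numbers, assuming 8-bit samples at one byte per point (the real sample width may differ):

--- Code: ---
total_rate = 2.5e9           # Sa/s across all active channels
bytes_per_sample = 1         # assuming 8-bit ADC samples
needed = total_rate * bytes_per_sample   # write bandwidth required
available = 1.8e9                        # ~1.8 GB/s present memory bandwidth

print(f"need {needed/1e9:.1f} GB/s, have {available/1e9:.1f} GB/s")
# need 2.5 GB/s, have 1.8 GB/s -> samples get dropped, hence the
# dual-channel memory controller; 5 GSa/s doubles the deficit again.
--- End code ---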

If the platform is modular, though, then there is an option.  The product could "launch" with a smaller FPGA/SoC solution, and the motherboard could be replaced at a later date with the faster SoC solution.  There would be software compatibility headaches, as the platforms would differ considerably, but it would be possible to share most of the GUI/control/DSP stuff, I think.

The really big advantage of keeping it all within something like an UltraScale is that everything becomes memory-addressable.  If that can also be done with PCI-e then that could be a winner.  It seems that the Nvidia card doesn't expose PCI-e ports, though, which is a shame.  You'd have to do it over USB 3.0.

With the UltraScale, though, it would be possible to fit an 8 GB SODIMM memory module and have ~8 Gpts of waveform memory available for a long-record mode.

That would be a pretty killer feature.  You could also upgrade it at any time (it ships with a 2 GB DDR4 module; install any laptop DDR4 RAM that's fast enough).
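
For a sense of scale, the arithmetic behind the ~8 Gpts figure, again assuming one byte per point (8-bit samples):

--- Code: ---
module_bytes = 8 * 2**30        # 8 GB SODIMM
bytes_per_point = 1             # assuming 8-bit samples
points = module_bytes // bytes_per_point
seconds = points / 2.5e9        # record length at 2.5 GSa/s total
print(f"{points/1e9:.1f} Gpts, {seconds:.1f} s at 2.5 GSa/s")  # 8.6 Gpts, 3.4 s
--- End code ---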
Someone:

--- Quote from: tom66 on November 19, 2020, 08:26:21 am ---
--- Quote from: Someone on November 19, 2020, 12:17:01 am ---Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. This requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, it could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.
--- End quote ---
There is no need to use the fractional position in the situation where one wave point plots across one pixel or less.
--- End quote ---
I disagree: if you zoom in, it will become visible (even before display "fill-in" interpolation appears).

Between the ADC sample rate, the acquisition memory sample rate, and the display points per pixel, there may be differences at any of those boundaries, and opportunities for aliasing and jitter to appear.
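
For what it's worth, the fractional-trigger idea itself is simple to state. A minimal sketch, assuming a rising-edge trigger and plain linear interpolation (a real scope may well use sinc interpolation at this step):

--- Code: ---
import numpy as np

def fractional_trigger(samples, level):
    """Trigger position in sample units, including the fractional part."""
    samples = np.asarray(samples, dtype=float)
    below = samples[:-1] < level
    above = samples[1:] >= level
    edges = np.flatnonzero(below & above)
    if edges.size == 0:
        raise ValueError("no rising edge through the trigger level")
    i = edges[0]
    # How far between sample i and i+1 the waveform crosses 'level'.
    # Shifting each record by this fraction when plotting is what
    # aligns repeated acquisitions to better than one sample period.
    frac = (level - samples[i]) / (samples[i + 1] - samples[i])
    return i + frac

t = np.arange(200)
pos = fractional_trigger(np.sin(2 * np.pi * t / 37), level=0.2)
--- End code ---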

While the rendering etc. is all the exciting/pretty stuff, I agree with many posters above that the starting point for getting people working on the project would be a viable AFE and ADC capture system.  There are many ways to use that practically, and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.