
A High-Performance Open Source Oscilloscope: development log & future ideas


nctnico:

--- Quote from: tom66 on November 18, 2020, 10:50:09 pm ---It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for single-channel operation (625MSa/s on 4ch).  Maximum bandwidth is around 300MHz per channel.

If you want a 500MHz 4ch scope, the ADC requirements would be ~2.5GSa/s per channel pair (dropping to 1.25GSa/s with a channel pair activated), which implies about a 5GB/s data rate into RAM.  The 64-bit AXI slave HPn ports in the Zynq, clocked at 200MHz, max out around 1.6GB/s each, and four ports get you up to 6.4GB/s, but that would completely saturate the AXI buses, leaving no free slots for readback from RAM or for code/data accesses.

The fastest RAM configuration supported would be dual-channel DDR3 at 800MHz, requiring the fastest speed grade, which gets the total memory bandwidth to just under 6GB/s.

Bottom line: the platform caps out around 2.5GSa/s in total with the present Zynq 7000 architecture, and would need to move towards an UltraScale part or a dedicated FPGA capture engine for a faster capture rate.

So I suppose the question is -- if you were to buy an open-source oscilloscope -- which would you prefer?

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else

--- End quote ---
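
To put rough numbers on the bus budget quoted above (a back-of-envelope Python check; the figures come straight from the quote and are not new measurements):

--- Code: ---
# Back-of-envelope check of the figures quoted above
axi_port_bw  = (64 // 8) * 200e6       # 64-bit HP port @ 200 MHz  -> 1.6 GB/s
total_axi_bw = 4 * axi_port_bw         # four HP ports             -> 6.4 GB/s

capture_rate = 2.5e9 * 1               # 2.5 GSa/s at 1 byte/sample -> 2.5 GB/s
ddr_bw       = 6e9                     # "just under 6 GB/s" DDR3 figure quoted

# The capture must be written once and read back at least once for display,
# before counting any CPU code/data traffic:
round_trip = 2 * capture_rate          # -> 5 GB/s
print(round_trip / ddr_bw)             # ~0.83 of total DDR bandwidth
--- End code ---
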
I think the same design philosophy can support both options. Maybe even one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25GSa/s per channel is enough to meet Nyquist, as long as the anti-aliasing filter is steep enough and sin(x)/x interpolation is implemented properly. Neither is rocket science. But a good first step would probably be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.
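
To illustrate the sin(x)/x point, here is a toy numpy sketch of Whittaker-Shannon reconstruction of a tone near the band edge at 1.25GSa/s (the tone frequency, record length and fine grid are arbitrary; this is not proposed firmware):

--- Code: ---
import numpy as np

fs = 1.25e9                          # 1.25 GSa/s per channel
t_s = np.arange(64) / fs             # 64 raw sample instants
f_in = 490e6                         # test tone just below a 500 MHz band edge
x = np.sin(2 * np.pi * f_in * t_s)   # assumes the anti-alias filter removed everything above fs/2

# sin(x)/x (Whittaker-Shannon) reconstruction on a 10x finer time grid
t_fine = np.linspace(t_s[0], t_s[-1], 10 * len(t_s))
x_fine = np.array([np.sum(x * np.sinc((t - t_s) * fs)) for t in t_fine])
--- End code ---

Away from the ends of the record, the fine grid follows the tone rather than just the sparse dots.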

@Tautech: an instrument like this is interesting for people who like to extend its functionality. Another fact is that no oscilloscope currently on the market is perfect. You always end up with a compromise, even if you spend US$10k. I'm not talking about bandwidth, just basic features.

Someone:

--- Quote from: rhb on November 18, 2020, 02:45:19 pm ---
--- Quote from: Someone on November 17, 2020, 09:55:24 pm ---
--- Quote from: tom66 on November 17, 2020, 12:12:48 pm ---
--- Quote from: Someone on November 16, 2020, 10:16:31 pm ---
--- Quote from: tom66 on November 16, 2020, 08:13:55 pm ---Shifting the dots is computationally simple even with sin(x)/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple - practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
--- End quote ---
Note that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front-end characteristics. Expect the rendering speed to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
--- End quote ---
As I understand it (and DSP gurus, please correct me if I am wrong), if the front-end has a fixed response to an impulse (which it should if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the actual time offset from the difference between these samples, which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).  It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters will vary slightly in bandwidth), but this could be part of the calibration process, with the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sin(x)/x interpolation on that data and more complex reconstruction on trigger points to reduce jitter, a sin(x)/x interpolator has most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.  I've yet to decide whether the sin(x)/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth-constrained, although the overhead is not particularly large at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction on the digital side on the actual samples.  While there are DSP blocks in the Zynq, they are limited to an Fmax of around 300MHz, which would require a fairly complex multiplexing scheme to run a filter at the full 1GSa/s.  And that would only give you ~60 taps, which isn't hugely useful except for a very gentle roll-off.
--- End quote ---
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample points before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase-shifted somewhere, or the trigger will jitter 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always much greater than the bandwidth.

Similarly, if you think a 60-tap filter isn't very useful, recall that LeCroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. Don't restrict the thinking to DSP blocks as 18x18 multipliers, or assume they are the only way to implement FIR filters. Likewise, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (LeCroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).

If memory is your cheap resource then some of the conventional assumptions are thrown out.

--- End quote ---

Interpolation for sample alignment and interpolation for display infill serve fundamentally different purposes, even if the operation is identical.

High-order analog filters have serious problems with tolerance spreads.  So, sensibly, the interpolation to align the samples and the correction operator should be combined, using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8-point operators.  The work required is 8 multiply-adds per output sample, which is tractable in an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version, as the footprints are the same but it has lots more resources.  I am still nervous about the 7014 not having sufficient DSP blocks.

Interpolation for display can be done in the GPU, where the screen update rate provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.

As for all the other features, such as spectrum and vector analysis, we have discussed those at great length.
The to-do list is daunting.

Reg

--- End quote ---
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture; it could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is quite significant in the architecture and can't easily be bolted on later.
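
As a rough illustration of those two steps (the fractional trigger estimate, then the phase shift), here is a toy numpy sketch; the linear crossing fit and the windowed-sinc filter bank are purely illustrative assumptions, not anyone's proposed implementation:

--- Code: ---
import numpy as np

def fractional_trigger(samples, level, i):
    # Sub-sample crossing position between samples[i] and samples[i+1].
    # Plain linear fit; a real design might use a fit matched to the
    # front-end response, as discussed above.
    a, b = samples[i], samples[i + 1]
    return (level - a) / (b - a)            # 0.0 .. 1.0 of a sample period

# Precomputed bank of fractional-delay operators: 64 phases x 8 taps,
# windowed sinc, normalised to unity DC gain per phase.
PHASES, TAPS = 64, 8
n = np.arange(TAPS) - TAPS // 2
bank = np.array([np.sinc(n - p / PHASES) * np.hamming(TAPS)
                 for p in range(PHASES)])
bank /= bank.sum(axis=1, keepdims=True)

def align(samples, frac):
    # Shift the whole record by 'frac' of a sample using the nearest phase:
    # 8 multiply-adds per output point.  A frac of ~1.0 wraps to phase 0;
    # the whole-sample part would be handled by the read pointer.
    h = bank[int(round(frac * PHASES)) % PHASES]
    return np.convolve(samples, h, mode='same')
--- End code ---

Where that shift is applied (at storage or at plotting) is exactly the architectural question above.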

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, sinc interpolation has a significant processing cost (a time/area trade-off), and it is just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable on its own, the challenge is balancing all those demands within the limited resources, hence the suggestion that software-driven rendering is probably a good balance for the project as described.
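
On the cost side, the earlier observation that most of the input is zero during 8x upsampling is exactly what a polyphase structure exploits. A minimal numpy sketch follows (the 8x factor, 64-tap prototype and Hamming window are arbitrary choices for illustration):

--- Code: ---
import numpy as np

def upsample_polyphase(x, proto, L=8):
    # Conceptually: zero-stuff by L, then low-pass with 'proto'.
    # Since (L-1) of every L stuffed inputs are zero, splitting the
    # prototype into L sub-filters needs only len(proto)/L multiply-adds
    # per output sample instead of len(proto).
    phases = [proto[p::L] for p in range(L)]
    return np.column_stack(
        [np.convolve(x, h, mode='same') for h in phases]).ravel()

# Prototype: windowed sinc cut off at the original Nyquist, gain L.
L, N = 8, 64
n = np.arange(N) - (N - 1) / 2
proto = L * np.sinc(n / L) * np.hamming(N)

fine = upsample_polyphase(np.random.randn(1024), proto, L)   # 8192 display points
--- End code ---

With these numbers that is 8 multiply-adds per output point, which is where the time/area trade-off shows up.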

james_s:
What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse-engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.

shaunakde:

--- Quote ---- Does a project like this interest you?   If so, why?   If not, why not?
--- End quote ---
Yes. This is interesting because I don't think there is a product out there that has been able to take advantage of the recent advances in embedded computing. I would love to see a "USB-C" scope that is implemented correctly, with excellent open-source software and firmware (maybe PicoScopes come close). Bench scopes are amazing, but PC scopes that are Python-scriptable are just more interesting. Maybe not so much for pure electronics, but for science experiments it could be a game-changer.


--- Quote ---- What would you like to see from a Mk2 development - if anything: a more expensive oscilloscope to compete with e.g. the 2000-series of many manufacturers, aimed more towards the professional engineer, or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about a $500USD difference in pricing.  An UltraScale part makes this a >$800USD product - which almost certainly changes the marketability.)
--- End quote ---
This is a tough one. I personally would like it to remain as it is: a low-cost device, perhaps a better version of the "Analog Discovery", just so that it can enable applications in the applied sciences etc. But logically, it feels like this should be a higher-end scope that has all the bells and whistles; however, that limits the audience (and hence community interest), which is not exactly great for an open-source project.


--- Quote --- Would you consider contributing to the development of an oscilloscope?  It is a big project for just one guy to complete.  There are DSP, trigger engines, an AFE, modules, casing design and so many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed, and there is documentation to be written.  I'm envisioning the capability to add modules to the software, and the hardware interfaces will be documented so 3rd-party modules can be developed and used.
--- End quote ---
Yes. My GitHub handle is shaunakde - I would LOVE to help in any way I can (though I will probably be most useful on documentation for now).



--- Quote ---I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer-term name.  I'll welcome any suggestions.
--- End quote ---
apertumScope - coz latin :P

tom66:

--- Quote from: james_s on November 19, 2020, 04:04:39 am ---What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse-engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.

--- End quote ---

I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.  I'd also be stuck with whatever the manufacturer decided in their hardware design, which might limit future options.  And you're stuck if that scope gets discontinued, or has a hardware revision which breaks things.  They might patch the software route that you use to load your unsigned binary (if they do implement that), or change the hardware in a subtle and hard-to-determine way (swap a couple of pairs on the FPGA layout).

rhb also discovered the hard way that poking around a Zynq FPGA with 12V DC on the same connector as WE# is not conducive to the long-term function of the instrument.

It's a different story with something like a wireless router because those devices are usually running Linux with standard peripherals - an FPGA is definitely not a standard peripheral.
