Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #75 on: November 18, 2020, 11:21:28 pm »
It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for single-channel operation (625MSa/s on 4ch), with a maximum bandwidth of around 300MHz per channel.

If you want a 500MHz 4ch scope, the ADC requirement would be ~2.5GSa/s per channel pair (dropping to 1.25GSa/s per channel with both channels of a pair active), which implies about a 5GB/s data rate into RAM.  A 64-bit AXI Slave HPn port in the Zynq clocked at 200MHz maxes out around 1.6GB/s, and using all four HP ports gets you up to 6.4GB/s, but that would completely saturate the AXI buses, leaving no free slots for readback from RAM or for code/data accesses.

The fastest RAM configuration supported would be dual-channel DDR3 at 800MHz (requiring the fastest speed grade), which would bring the total memory bandwidth to just under 6GB/s.

Bottom line: the platform caps out around 2.5GSa/s in total with the present Zynq 7000 architecture, and would need to move to an UltraScale part or a dedicated FPGA capture engine for a faster capture rate.

So I suppose the question is -- if you were to buy an open-source oscilloscope -- which would you prefer?

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
I think the same design philosophy can support both options, maybe even one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25GSa/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin(x)/x interpolation is implemented properly. Neither is rocket science. But a good first step would probably be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.
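
To illustrate, here's a quick numpy sketch I put together (illustrative only, not project code): a tone just below Nyquist, sampled at 1.25GSa/s, reconstructed on an 8x finer grid with a plain truncated sinc sum. Away from the record edges the error stays small, provided the input really is band-limited below 625MHz.

Code: [Select]
import numpy as np

fs = 1.25e9            # sample rate (Sa/s)
f0 = 480e6             # test tone, below the 625MHz Nyquist limit
n = np.arange(256)
x = np.sin(2 * np.pi * f0 * n / fs)              # the captured samples

# Whittaker-Shannon reconstruction on an 8x finer grid:
#   y(t) = sum_k x[k] * sinc((t - k*T) / T)
t = np.arange(0, 256, 1 / 8)                     # time in units of the sample period
y = np.array([np.sum(x * np.sinc(tt - n)) for tt in t])

ideal = np.sin(2 * np.pi * f0 * t / fs)
mid = slice(400, -400)                           # ignore the edges, where the sum is truncated
print("max error:", float(np.max(np.abs(y[mid] - ideal[mid]))))  # small vs. an amplitude of 1.0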

@Tautech: an instrument like this is interesting for people who like to extend the functionality. Another fact is that no oscilloscope currently on the market is perfect; you always end up with a compromise even if you spend US$10k. I'm not talking about bandwidth, just basic features.
« Last Edit: November 18, 2020, 11:41:16 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #76 on: November 19, 2020, 12:17:01 am »
Shifting the dots is computationally simple even with sin(x)/x (which is not yet implemented).  It's just offsetting a read pointer and doing a 64-bit rotate by a multiple of 8 bits, which is practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong: if the front end has a fixed response to an impulse (which it should have if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples, which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters will vary slightly in bandwidth), but this could be part of the calibration process, with the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sin(x)/x interpolation on that data and more complex reconstruction of trigger points to reduce jitter, a sin(x)/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've yet to decide whether the sin(x)/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth-constrained, although not particularly so at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction on the digital side on the actual samples.  While there are DSP blocks in the Zynq, they are limited to an Fmax of around 300MHz, which would require a considerably complex multiplexing scheme to run a filter at the full 1GSa/s.  And that would only give you ~60 taps, which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample points before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase-shifted somewhere or the trigger will jitter one sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.
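
To make that concrete, here's a minimal sketch of the simplest fractional-trigger estimator (my own illustration, nothing to do with the project's trigger engine). It shows the dependence on both neighbouring samples explicitly, which is exactly why a single 8-bit lookup only holds if you assume a fixed slope, i.e. a fixed front-end response:

Code: [Select]
def fractional_trigger(sample_before: int, sample_after: int, level: int) -> float:
    """Estimate where the signal crossed `level` between two adjacent samples.

    Returns the fraction of a sample period after `sample_before` at which the
    crossing occurred (0.0 = exactly on sample_before, 0.5 = halfway to the
    next sample).  Linear interpolation is the simplest estimator; a real scope
    would use something matched to the front-end's step response.
    """
    rise = sample_after - sample_before
    if rise == 0:
        return 0.0                      # flat segment, no meaningful crossing
    return (level - sample_before) / rise

# Example: 8-bit samples 100 and 140 straddle a trigger level of 110.
print(fractional_trigger(100, 140, 110))    # 0.25 -> trigger is 1/4 sample after the first sample

# Note: a single 8-bit LUT indexed only by (level - sample_before) implicitly
# assumes a fixed slope; in general the fraction depends on both neighbouring
# samples, which is the point being debated above.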

Similarly, if you think a 60-tap filter isn't very useful, recall that LeCroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict your thinking to DSP blocks as 18x18 multipliers, or assume they are the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architectural concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (LeCroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).
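
For a feel of what even a short FIR buys you, here's a crude sketch (a plain 16-tap boxcar, not LeCroy's actual ERES coefficients): averaging 16 noisy samples cuts random noise by roughly 4x, i.e. about 2 extra bits, at the cost of bandwidth.

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)

fs = 1.0e9
t = np.arange(20000) / fs
clean = 100 * np.sin(2 * np.pi * 1e6 * t)              # slow 1MHz signal, 8-bit-ish scale
raw = np.round(clean + rng.normal(0, 4.0, t.size))     # ADC samples with ~4 LSB of noise

taps = 16                                              # plain boxcar; ERES uses a shaped FIR
eres = np.convolve(raw, np.ones(taps) / taps, mode="same")

print("raw noise (LSB):     ", np.std(raw - clean).round(2))
print("filtered noise (LSB):", np.std(eres - clean).round(2))   # ~4x lower -> ~2 extra bits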

If memory is your cheap resource then some of the conventional assumptions are thrown out.

Interpolation for sample alignment and interpolation for display infill serve fundamentally different purposes, even if the operation is identical.

High-order analog filters have serious problems with tolerance spreads.  So, sensibly, the interpolation to align the samples and the correction operator should be combined, using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8-point operators.  The work required is 8 multiply-adds per output sample, which is tractable in an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version, as the footprints are the same but the 7020 has lots more resources.  I am still nervous about the 7014 not having sufficient DSP blocks.
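
In other words a small polyphase filter bank: one short operator per fractional offset, selected by the measured trigger fraction, 8 multiply-adds per output point. A sketch of that lookup-and-MAC step (illustrative Python with windowed-sinc rows; the real table would come from the production calibration data mentioned above):

Code: [Select]
import numpy as np

PHASES = 64          # fractional-offset resolution of the table
TAPS = 8             # 8 multiply-adds per output sample

def build_operator_table(phases=PHASES, taps=TAPS):
    """Precompute `phases` interpolators, each `taps` long.

    Row p delays the signal by p/phases of a sample period.  Here the rows are
    windowed-sinc; in the instrument they would be derived from the measured
    front-end response during calibration.
    """
    table = np.zeros((phases, taps))
    k = np.arange(taps) - (taps // 2 - 1)            # tap positions -3 .. +4
    for p in range(phases):
        frac = p / phases
        h = np.sinc(k - frac) * np.hamming(taps)     # crude window to tame truncation
        table[p] = h / h.sum()                       # keep the DC gain at 1
    return table

OPS = build_operator_table()

def align(samples, frac):
    """Delay `samples` by `frac` (0..1) of a sample period using the table."""
    op = OPS[int(frac * PHASES) % PHASES]
    # 8 MACs per output point; the few edge samples are inaccurate (zero padding)
    return np.convolve(samples, op, mode="same")

x = np.sin(2 * np.pi * 0.05 * np.arange(64))
print(np.round(align(x, 0.25)[10:14], 3))   # mid-record values of x delayed by 0.25 samples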

Interpolation for display can be done in the GPU, where the screen update rate limitation provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.
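
For what it's worth, such an FFT interpolator amounts to zero-padding the spectrum of the short on-screen record and transforming back; cheap when it runs once per screen update on a few thousand points. A sketch (illustrative only, assumes negligible energy at the original Nyquist bin):

Code: [Select]
import numpy as np

def fft_interpolate(x, factor):
    """Upsample a real record by `factor` via zero-padding in the frequency domain.

    Only sensible for short, display-length records: O(N log N) on a few
    thousand points, done once per screen update rather than per sample.
    """
    n = len(x)
    X = np.fft.rfft(x)
    X_pad = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_pad[: len(X)] = X                               # original spectrum, zeros above old Nyquist
    return np.fft.irfft(X_pad, n * factor) * factor   # rescale for the longer inverse transform

x = np.sin(2 * np.pi * 5 * np.arange(64) / 64)        # 5 cycles in a 64-point record
y = fft_interpolate(x, 8)                             # 512 display points
print(len(y), round(float(np.max(np.abs(y))), 3))     # 512 points, peak ~1.0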

As for all the other features such as spectrum and vector analysis, we have discussed those at great length.
The To Do list is daunting.

Reg
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But some (not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, possibly at sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, doing sinc interpolation is a significant processing cost (a time/area tradeoff), and just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable alone, it's how to balance all those demands within the limited resources; hence the suggestion that software-driven rendering is probably a good balance for the project as described.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #77 on: November 19, 2020, 04:04:39 am »
What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.
 
The following users thanked this post: splin

Offline shaunakde

  • Contributor
  • Posts: 11
  • Country: us
    • Shaunak's Lab
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #78 on: November 19, 2020, 04:19:53 am »
Quote
- Does a project like this interest you?   If so, why?   If not, why not?
Yes. This is interesting because I don't think there is a product out there that has been able to take advantage of the recent advances in embedded computing. I would love to see a "USB-C" scope that is implemented correctly, with excellent open-source software and firmware (maybe the PicoScopes come close). Bench scopes are amazing, but PC scopes that are Python-scriptable are just more interesting. Maybe not so much for pure electronics, but for science experiments it could be a game-changer.

Quote
- What would you like to see from a Mk2 development - if anything:  a more expensive oscilloscope to compete with e.g. the 2000-series of many manufacturers that aims more towards the professional engineer,  or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about $500USD difference in pricing.  An UltraScale part makes this a >$800USD product - which almost certainly changes the marketability.)
This is a tough one. I personally would like it to remain as it is, a low-cost device and perhaps a better version of the "Analog Discovery", just so that it can enable applications in the applied sciences etc. But logically, it feels like this should be a higher-end scope that has all the bells and whistles; however, that does limit the audience (and hence community interest), which is not exactly great for an open-source project.

Quote
Would you consider contributing in the development of an oscilloscope?  It is a big project for just one guy to complete.  There is DSP, trigger engines, an AFE, modules, casing design and so many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed,  and there is documentation to be written.  I'm envisioning the capability to add modules to the software and the hardware interfaces will be documented so 3rd party modules could be developed and used.
Yes. My github handle is shaunakde - I would LOVE to help in any way I can (but probably I will be most useful in the documentation for now)


Quote
I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer term name.  I'll welcome any suggestions.
apertumScope - coz latin :P
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #79 on: November 19, 2020, 08:20:25 am »
What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.

I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.  I'd also be stuck with whatever the manufacturer decided in their hardware design, which might limit future options.  And you're stuck if that scope gets discontinued, or has a hardware revision which breaks things.  They might patch the software route that you use to load your unsigned binary (if they do implement that) or change the hardware in a subtle and hard-to-determine way (swap a couple of pairs on the FPGA layout).

rhb also discovered the hard way that poking around a Zynq FPGA with 12V DC on the same connector as WE# is not conducive to the long-term function of the instrument.

It's a different story with something like a wireless router because those devices are usually running Linux with standard peripherals - an FPGA is definitely not a standard peripheral.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6600
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #80 on: November 19, 2020, 08:22:49 am »
What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.

 :horse:

This has been discussed to death. It cannot be done in the time frame needed. Reverse engineering an existing scope to the point that you understand everything about its architecture is much harder than simply designing one from scratch. There have been dozens of attempts that ended up with loading a custom Linux on the scope and no way to talk to the acquisition engine - mainly running Doom on a scope.
A scope's architecture is an interleaved design: hardware acquisition, hardware acceleration, system software, scope software and app-side acceleration (GPU etc.).
Design decisions about how to implement it have already been made by the manufacturer. You end up with something that, in the end, won't have many more capabilities than it had at the start. You just spent 20 engineer-years to get the same capabilities and different fonts...
The only way it might make sense to try would be if an existing manufacturer opened their design (published all internal details) as a starting point. Which will happen, well, never.

A FOSS scope for the sake of FOSS is a waste of time. A GOOD FOSS scope is not... If a good FOSS scope existed, it would start to spread through academia, hobby and industry, and if it were really used, that would fuel its progress. Remember Linux and KiCad... When did they take off? When they became useful and people started using them, not when they became free. They existed and were free for many years and were happily ignored by most.

And then you would have many manufacturers that would make them for a nominal price. It would be easy for them, like Arduino and its clones: someone did the hard work of hardware design, and someone else will take care of the software... That is a dream for manufacturing. Very low cost... I bet you that what we estimate now to be 600 USD could be had for 300 USD (or less) if mass manufacturing happens...
 
The following users thanked this post: Someone, Kean

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #81 on: November 19, 2020, 08:26:21 am »
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But some (not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, possibly at sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.

There is no need to use the fractional position when one wave point plots across one pixel or less.  At that point any fractionality is lost to the user unless you do antialiasing.  I prototyped antialiasing on the GPU renderer, and the results were poor because a waveform never looked "sharp".  I suspect no scope manufacturer makes a true antialiased renderer; it just doesn't add anything.

With that in mind, the sin(x)/x problem only becomes an issue at faster timebases (<50ns/div), and that is where you start plotting more pixels than you have points, so your input read rate is lower than your output write rate.  The sin(x)/x filter is essentially a FIR filter with sinc coefficients loaded in and every nth coefficient set to zero (if I understand it correctly!)  That makes it prime territory for the DSP blocks on the FPGA.

There isn't any reason why, with an architecture like the Zynq, you can't do a hybrid of the two - the software renderer configures a hardware rendering engine, for instance by loading a list of pointers and pixel offsets for each waveform.  While it would probably end up being the ultimate bottleneck, the present system is doing over 80,000 DMA transfers per second to achieve the waveform update rate, and that is ALL controlled by the ARM.  So if that part is eliminated and driven entirely by the FPGA logic, and the ARM is just processing the sin(x)/x offset and trigger position, then the performance probably won't be so bad.
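
To make that hybrid concrete, the per-waveform descriptor the ARM would hand to the FPGA render engine might look something like this (purely hypothetical field names, sketching the idea rather than any actual register map):

Code: [Select]
from dataclasses import dataclass

@dataclass
class WaveDescriptor:
    """One entry in a (hypothetical) descriptor list built by the ARM.

    The idea is that the FPGA render engine walks the list autonomously,
    instead of the ARM issuing ~80k individual DMA transfers per second.
    """
    buffer_addr: int        # physical address of the waveform in acquisition RAM
    sample_count: int       # number of samples to render for this waveform
    pixel_offset: int       # horizontal start position on screen, in pixels
    frac_shift_q8: int      # sub-sample trigger alignment, 8-bit fraction (0..255)
    intensity: int          # per-waveform weighting for intensity grading

# The ARM computes the trigger / sin(x)/x offsets and appends descriptors;
# the render engine consumes them with no further CPU involvement.
descriptors = [
    WaveDescriptor(0x1000_0000, 2048, 0, 64, 1),
    WaveDescriptor(0x1000_0800, 2048, 0, 192, 1),
]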
« Last Edit: November 19, 2020, 08:28:03 am by tom66 »
 

Offline sb42

  • Contributor
  • Posts: 42
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #82 on: November 19, 2020, 08:33:22 am »
What I'd love to see is this kind of talent go into a really top-notch open-source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things; of course, the problem is the difficulty of reverse engineering someone else's hardware, which invariably has no documentation whatsoever these days. When aftermarket firmware came out for consumer internet routers it was a game changer, and I'm betting the same thing could be done with a scope.

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #83 on: November 19, 2020, 09:12:04 am »
I think the same design philosophy can support both options, maybe even one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25GSa/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin(x)/x interpolation is implemented properly. Neither is rocket science. But a good first step would probably be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to a maximum of about 2.5GSa/s across all active channels.   That would use the bandwidth of two 64-bit AXI master ports and require a complex memory gearbox to ensure the data gets assembled correctly in RAM.  It would also require at minimum a dual-channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing some.

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1/ch3 mode) without a faster memory bus, AXI bus and fabric, so this platform can't make a 4ch 500MHz scope.    That pushes the platform towards an UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi), if you want to be able to do that.
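
As a back-of-the-envelope check on those numbers (illustrative arithmetic only, using the figures quoted in this thread and assuming 8-bit samples):

Code: [Select]
bytes_per_sample = 1                 # 8-bit ADC
per_channel_rate = 1.25e9            # Sa/s with all four channels active
channels = 4

adc_write_rate = per_channel_rate * channels * bytes_per_sample
print(f"ADC data into RAM:     {adc_write_rate / 1e9:.1f} GB/s")     # 5.0 GB/s

present_ddr_bw = 1.8e9               # ~1.8 GB/s, present memory configuration
axi_hp_port_bw = 1.6e9               # ~1.6 GB/s per 64-bit HP port at 200MHz

print(f"Present DDR bandwidth: {present_ddr_bw / 1e9:.1f} GB/s")     # can't even absorb the writes
print(f"HP ports needed:       {adc_write_rate / axi_hp_port_bw:.1f} of 4")  # ~3.1 before any readback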

If the platform is modular though then there is an option.  The product could "launch" with a smaller FPGA/SoC solution and the motherboard could be replaced at a later date with the faster SoC solution.  There would be software compatibility headaches, as the platforms would differ considerably, but it would be possible to share most of the GUI/Control/DSP stuff, I think.

The really big advantage with keeping it all within something like an UltraScale is that things become memory addressable.  If that can also be done with PCI-e then that could be a winner.  It seems that the Nvidia card doesn't expose PCI-e ports though which is a shame.  You'd need to do it over USB3.0.

With UltraScale though, it would be possible to fit an 8GB SODIMM memory module and have ~8Gpts of waveform memory available for long record mode. 

That would be a pretty killer feature.  You could also upgrade it at any time (ships with a 2GB DDR4 module, install any laptop DDR4 RAM that's fast enough.)
« Last Edit: November 19, 2020, 09:13:59 am by tom66 »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #84 on: November 19, 2020, 09:35:06 am »
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But some (not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, possibly at sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.
There is no need to use the fractional position when one wave point plots across one pixel or less.
I disagree: if you zoom in then it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #85 on: November 19, 2020, 09:38:54 am »
I disagree: if you zoom in then it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.

Well yes, but at that point you are no longer plotting one or more wave points per pixel.   So you can recompute the trigger position, if needed, when zooming in.  The fractional trigger point can then be used, if needed.  In the present implementation, the trigger point is supplied as a fixed-point value with a 24-bit integer component and an 8-bit fractional component.
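
For clarity, that format just packs the integer sample index and the sub-sample fraction into one word; a trivial sketch of the convention (my illustration of the format described, not the actual register layout):

Code: [Select]
FRAC_BITS = 8                      # 8-bit fraction -> 1/256th of a sample resolution

def pack_trigger(sample_index: int, fraction: float) -> int:
    """Pack a trigger position into 24.8 fixed point, as described above."""
    return (sample_index << FRAC_BITS) | (int(fraction * (1 << FRAC_BITS)) & 0xFF)

def unpack_trigger(word: int):
    return word >> FRAC_BITS, (word & 0xFF) / (1 << FRAC_BITS)

w = pack_trigger(1000, 0.25)
print(hex(w), unpack_trigger(w))   # 0x3e840 (1000, 0.25)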

Reinterpreting data when zooming in seems to be pretty common amongst scopes.  When I test-drove the SDS5104X, it had a sin(x)/x interpolation 'bug' which only became visible at certain timebases once stopped.  So the scope would be reinterpreting the data in RAM as needed.

« Last Edit: November 19, 2020, 09:41:46 am by tom66 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #86 on: November 19, 2020, 10:18:02 am »
I think the same design philosophy can support both options, maybe even one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25GSa/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin(x)/x interpolation is implemented properly. Neither is rocket science. But a good first step would probably be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to a maximum of about 2.5GSa/s across all active channels.   That would use the bandwidth of two 64-bit AXI master ports and require a complex memory gearbox to ensure the data gets assembled correctly in RAM.  It would also require at minimum a dual-channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing some.

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1/ch3 mode) without a faster memory bus, AXI bus and fabric, so this platform can't make a 4ch 500MHz scope.    That pushes the platform towards an UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi), if you want to be able to do that.

I've also done some number crunching on bandwidth. A cheap solution to get to higher bandwidths is likely to use multiple (low-cost) FPGAs with memory attached and use PCI Express to transfer the data (at a lower speed) to the processor module. With PCI Express you can likely get rid of the processor inside the Zynq as well. Multiple small FPGAs are always cheaper than one big FPGA.

The Intel (formerly Altera) Cyclone 10 FPGAs look like a better fit than Xilinx's offerings when it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64-bit wide, but 72-bit is possible), there is a hard PCI Express Gen2 IP block (x2 on the smallest device and x4 on the larger ones), and the 12.5Gb/s transceivers also support the JESD204B ADC interface. On top of that, it seems Intel supports OpenCL for FPGA development, which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing, though). The prices look doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
« Last Edit: November 19, 2020, 12:17:55 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, ogden

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #87 on: November 19, 2020, 12:34:38 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.

There are enough FSMs in the present design to deal with acquisition, stream out, DMA, trigger, etc. and getting those to behave in a stable fashion was quite the task.  Keeping a small real-time processor on the FPGA makes a lot of sense. Of course there are options with soft-cores here but a hard core is preferred due to performance and it doesn't eat into your logic area.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #88 on: November 19, 2020, 01:11:18 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.
For that stuff a simple softcore will do just fine (for example the LM32 from Lattice), but it is also doable from Linux. Remember that a lot of hardware devices on a Linux system have real-time requirements too (a UART for example), so using an interrupt is perfectly fine. A system is much easier to debug if there are no CPUs scattered all over the place. One of my customers' projects involves a system which has a softcore inside an FPGA and a processor running Linux; in order to improve and simplify the system, a lot of functionality is being moved from the softcore to the main processor.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2730
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #89 on: November 19, 2020, 02:31:54 pm »
The Intel (formerly Altera) Cyclone 10 FPGAs look like a better fit than Xilinx's offerings when it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64-bit wide, but 72-bit is possible), there is a hard PCI Express Gen2 IP block (x2 on the smallest device and x4 on the larger ones), and the 12.5Gb/s transceivers also support the JESD204B ADC interface. On top of that, it seems Intel supports OpenCL for FPGA development, which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing, though). The prices look doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
You forgot to mention that you will have to pay 4k$ per year in order to be able to use them :palm:

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2730
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #90 on: November 19, 2020, 02:32:56 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #91 on: November 19, 2020, 02:33:32 pm »
Sigh.  At least I can build for Zynq using Vivado WebPACK.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #92 on: November 19, 2020, 02:38:22 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.
I'll take your word for it, but still I wonder if you can elaborate a bit more about why these devices are a bad choice.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2730
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #93 on: November 19, 2020, 02:54:39 pm »
I'll take your word for it, but still I wonder if you can elaborate a bit more about why these devices are a bad choice.
The LP subfamily, which can be used with the free version of their tools, doesn't even have memory controllers  :palm:
The GX subfamily is behind a heavy paywall ($4k a year).
You are better off using Kintex-7 devices from Xilinx. The lower-end ones (70T and 160T) can be used with the free tools, and a license for the 325T can be purchased with a devboard and subsequently used for your own designs (a Xilinx device-locked license allows using that part in any package and speed grade, not necessarily the one that's on the devboard, and it's a permanent license, not a subscription). The cheapest Kintex-7 devboard I know of that ships with a license is Digilent's Genesys 2 board at $1k, and you can find 325T devices in China for $200-300 a pop, as opposed to Digikey prices of $1-1.5k a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts - it won't be as cheap as China, but they will be fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad hoc: they appear on the market, sell their stock, and disappear forever. These devices provide up to a 64/72-bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5Gbps (depending on package and speed grade), and all of that in convenient 1mm-pitch BGA packages with 400 or 500 user I/O balls, so you can connect a lot of stuff to them.
But most importantly, the Kintex-7 fabric is significantly faster than even the Artix-7/Spartan-7 fabric, which in turn is faster than anything Intel offers in the Cyclone family.
« Last Edit: November 19, 2020, 03:00:37 pm by asmi »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #94 on: November 19, 2020, 03:01:06 pm »
I'll take your word for it, but still I wonder if you can elaborate a bit more about why these devices are a bad choice.
The LP subfamily, which can be used with the free version of their tools, doesn't even have memory controllers  :palm:
The GX subfamily is behind a heavy paywall ($4k a year).
You are better off using Kintex-7 devices from Xilinx. The lower-end ones (70T and 160T) can be used with the free tools, and a license for the 325T can be purchased with a devboard and subsequently used for your own designs (a Xilinx device-locked license allows using that part in any package and speed grade, not necessarily the one that's on the devboard, and it's a permanent license, not a subscription). The cheapest Kintex-7 devboard I know of that ships with a license is Digilent's Genesys 2 board at $1k, and you can find 325T devices in China for $200-300 a pop, as opposed to Digikey prices of $1-1.5k a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts - it won't be as cheap as China, but they will be fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad hoc: they appear on the market, sell their stock, and disappear forever. These devices provide up to a 64/72-bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5Gbps (depending on package and speed grade), and all of that in convenient 1mm-pitch BGA packages with 400 or 500 user I/O balls, so you can connect a lot of stuff to them.

For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices). Even with the $4k subscription it doesn't take selling a lot of boards to break even.
« Last Edit: November 19, 2020, 03:04:11 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2730
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #95 on: November 19, 2020, 03:18:44 pm »
For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices).
I'm not interested in discussing spherical horses in a vacuum; I prefer a practical approach. And it tells me that for $4k a year I can buy quite a few Kintex devices, which can be used with the free tools. Besides, even the top-of-the-line GX device falls short of what the 325T offers, while being priced similarly to a 325T from China, with its relatively obtainable license. If we stick to the free versions of the tools, the 160T offers quite a bit of resources - up to eight 12.5Gbps MGTs, the same 64/72-bit 933MHz DDR3 interface, up to an x8 PCI Express Gen2 link (Gen3 is possible, but you have to roll your own or buy commercial IP) and all the other goodies of the 7-series family.

Even with the $4k subscription it doesn't take selling a lot of boards to break even.
Did you forget the part that it's an open source project? That means tools must be free.

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16642
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #96 on: November 19, 2020, 03:46:27 pm »
I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.

And the manufacturer could decide to stop production of that model at any moment and you'll be back to square one.
« Last Edit: November 19, 2020, 03:55:10 pm by Fungus »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26896
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #97 on: November 19, 2020, 08:33:07 pm »
Did you forget the part that it's an open source project? That means tools must be free.
Some tradeoff needs to be made here. If requiring free tools increases the price of each unit by several hundred US dollars, then that will be an issue for adoption of the platform in general. For example, one of my customers has invested nearly 20k euro in tooling to be able to work on an open-source project. Something else to consider is risk versus free tools. For example, the commercial PCB package I use can do impedance and cross-talk simulation of a PCB and has extensive DRC checks for high-speed designs. Free tools like KiCad are not that advanced, so they need more time for manual checking and/or pose a higher risk of an error in the board design, which needs an expensive re-spin. At some point, software which costs several $k is well worth it just from a risk-management point of view. Developing hardware is expensive. IIRC I have sunk about 3k euro into my own USB oscilloscope project (which wasn't a waste even though I stopped the project).

Anyway, I think your suggestion of the Kintex 70T (XC7K70T) is a very good one; I had overlooked that one. Price-wise it seems to be on par with the Cyclone 10CX085 and it can do the job as well.
« Last Edit: November 19, 2020, 08:38:51 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #98 on: November 19, 2020, 09:01:57 pm »
The Zynq 7014S has 40k LUTs + an ARM processor + a hard DDR controller, with one unit starting from US$89.

IMO that wins over the 65k LUTs + no hard CPU + no DDR3 controller in the Kintex 70T -- in my experience the MIG easily eats up 20% of the logic area for just a single-channel DDR3 controller, so I'm not convinced that would be a worthwhile trade-off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM), you're stepping up several grades of 'pure FPGA' to get there.
 
« Last Edit: November 19, 2020, 09:05:39 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6697
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #99 on: November 19, 2020, 09:10:04 pm »
The present oscilloscope fits in ~20% of the logic area of the 7014S, and that includes a System ILA which uses about 10% of the logic area.  I imagine the 'render engine' would use about 10-15% of the logic area if implemented in the FPGA fabric.  We're not constrained by fabric capacity right now, but more by the speed of the logic.  And a lot of that is down to experience in timing optimisation, which is an area where I have a lot to learn.

Moving up a speed grade may be more beneficial than moving to a bigger device.
« Last Edit: November 19, 2020, 09:12:12 pm by tom66 »
 

