Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas


Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Questionnaire for those interested in the project.  

I'd appreciate any responses to understand what features are a priority and what I should focus on.
https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/formResponse

All responses will be anonymised and a summary of the results will be posted here (when sufficient data exists.)

Introduction

You may prefer to watch the video I have made: 


Over the past year and a half I have been working on a little hobby project to develop a decent high-performance oscilloscope, with the intention for this to be an open source project.  By 'decent' I mean something that could compete with the likes of the lower-end digital phosphor/intensity-graded scopes, e.g. Rigol DS1000Z, Siglent SDS1104X-E, Keysight DSOX1000, and so on.  In other words: an 8-bit ADC, a sampling rate of 1 gigasample per second on at least one channel, 200Mpt of waveform memory, and rendering capable of at least 25,000 waveforms/second.

The project began for a number of reasons.  The first was that I wanted to learn and understand more about FPGAs; having only ever blinked an LED on an FPGA dev kit before, implementing an oscilloscope seemed like a worthwhile challenge.  Secondly, I wasn't aware of any high-performance open source oscilloscopes, ones that could be used every day by an engineer at their desk.  I've since become aware of ScopeFun, but this project is a little different: ScopeFun does the data processing on a PC, whereas I intended to create a self-contained instrument with data capture and display in one device.  For the display/user interface I use a Raspberry Pi Compute Module 3.  This is a decent little device, but crucially it has a camera interface port capable of receiving 1080p30 video, which works out to about 2Gbit/s of raw bandwidth.  While this isn't enough to stream raw samples from the ADC continuously, it's sufficient once you have a trigger criterion and an FPGA in the loop to capture the raw data.

At the heart of the oscilloscope is a Xilinx Zynq 7014S system-on-chip on a custom PCB, connected to 256MB of DDR3 memory clocked at 533MHz.  With the 16-bit memory interface this gives a usable memory bandwidth of ~1.8GB/s.  The Zynq is essentially an ARM Cortex-A9 with an Artix-7 class FPGA on the same die, with a number of high-performance memory interfaces between the two.  Crucially, it has a hard on-silicon memory controller, unlike the regular Artix-7, which means you don't use up 20% of the logic area implementing that controller.  The FPGA acquires data using an HMCAD1511 ADC, which is the same ADC used in the Rigol and Siglent budget offerings.  This ADC is inexpensive for its performance grade (~$70) and available from Digi-Key.  The variant HMCAD1520 offers 12-bit and 14-bit capability, with 12-bit operation at 500MSa/s.  The ADC needs a stable 1GHz clock, which is provided in this case by an ADF4351 PLL.
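For a sense of the memory budget, here is a back-of-the-envelope sketch in Python; the 85% bus efficiency figure is my assumption (typical for a 16-bit DDR3 port), not a number from the project:

```python
# Rough bandwidth budget for the board described above.
DDR3_CLOCK_HZ  = 533e6      # memory clock
DDR3_BUS_BYTES = 2          # 16-bit interface
EFFICIENCY     = 0.85       # assumed usable fraction after refresh/turnaround

peak_bw   = DDR3_CLOCK_HZ * 2 * DDR3_BUS_BYTES   # DDR: two transfers per clock
usable_bw = peak_bw * EFFICIENCY

ADC_RATE_SPS = 1e9          # 1 GSa/s
ADC_BITS     = 8
adc_write_bw = ADC_RATE_SPS * ADC_BITS / 8       # bytes/s streamed into RAM

print(f"peak DDR3 bandwidth  : {peak_bw / 1e9:.2f} GB/s")
print(f"usable (assumed 85%) : {usable_bw / 1e9:.2f} GB/s")
print(f"ADC write load       : {adc_write_bw / 1e9:.2f} GB/s "
      f"({adc_write_bw / usable_bw:.0%} of usable)")
```

This lands at roughly 1.8GB/s usable, with about 1GB/s consumed just writing ADC samples, which roughly matches the figures quoted later in the thread.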

Data is captured from the ADC front end and packed into RAM using a custom acquisition engine on the FPGA.  The acquisition engine also works with a trigger block, which uses the raw ADC stream to decide when to generate a trigger event and therefore when to start recording the post-trigger data.  The oscilloscope supports both pre- and post-trigger capture, each with a configurable size ranging from just a few pre-trigger samples up to the full memory buffer.  The data is streamed over an AXI-DMA peripheral into blocks defined by software running on the Zynq.  The blocks are then streamed out of memory into a custom CSI-2 peripheral, also using a DMA block (with a large scatter-gather list created by the ARM).  The CSI-2 data bus interface was reverse-engineered from documentation publicly available on the internet, and by analysing a slowed-down data bus (via a modified PLL) from an official Pi camera, captured on my trusty Rigol DS1000Z.  I have a working HDL and hardware implementation that reliably runs at >1.6Gbit/s, and application software on the Pi renders the data transmitted over this interface.  Most application software on the Pi is written in Python, with a small amount of C to interface with MMAL and to render the waveforms.  The Zynq software is raw embedded C, running on a baremetal/standalone platform.  All Zynq software and HDL was developed with the Vivado and Vitis tools from Xilinx.
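As a purely illustrative software analogue of what the trigger block does in hardware (the real implementation is HDL operating on the parallel ADC stream), an edge trigger reduces to a comparison of consecutive samples against a level:

```python
import numpy as np

def edge_trigger(samples, level, rising=True):
    """Return the index of the first edge crossing 'level', or None.

    A software sketch only: compare consecutive samples of the raw ADC
    stream against the trigger level and fire on the first rising
    (or falling) crossing.
    """
    prev, curr = samples[:-1], samples[1:]
    if rising:
        hits = np.flatnonzero((prev < level) & (curr >= level))
    else:
        hits = np.flatnonzero((prev >= level) & (curr < level))
    return int(hits[0]) + 1 if hits.size else None

# Example: trigger at mid-scale (128) on a synthetic 8-bit sine wave.
t = np.arange(2000)
wave = (127 * np.sin(2 * np.pi * t / 500) + 128).astype(np.uint8)
print(edge_trigger(wave, level=128, rising=True))
```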

Now, caveats:  Only edge triggers (rising/falling/either) are currently supported, and only a 1ch acquisition mode is implemented; 2ch and 4ch modes are mostly a data decimation problem, but this has not been implemented for the prototype.  All rendering is presently done in software on the Pi, as there were some difficulties keeping a prototype GPU renderer stable.  The rendering task uses 100% of one ARM core on the Pi (there is almost certainly a threading benefit available, but it is unimplemented at present due to Python GIL nonsense); the ideal goal would be to do the rendering on the Pi's GPU or on the FPGA.  A fair bit of the ARM on the Zynq is busy just managing system tasks like setting up AXI DMA transactions for every waveform, which could probably be sped up if this were all done on the FPGA.

The analog front end for now is just AC coupled.  I have a prototype AFE designed in LTSpice, but I haven't put any proper hardware together yet.

The first custom PCB (the "Minimum Viable Product") was funded by myself and a generous American friend who was interested in the concept.  It cost about £1,500 (~$2,000 USD or €1,700, approx.) to develop in total, including two prototypes (one with a 7014S and one with a 7020; the 7020 prototype has never been used).  This was helped in part by a manufacturer in Sweden, SVENSK Elektronikproduktion, who provided their services at a great price due to the interest in the project (particular thanks to Fredrik H. for arranging this).  It is a 6-layer board, which presented some difficulty in implementing the DDR3 memory interface (8-10 layers would be ideal), but the overall results were very positive and the interface runs at 533MHz just fine.

The first revision of the board worked with only minor alterations required.  I've nothing but good words to say about SVENSK Elektronikproduktion, who helped bring this prototype to fruition very quickly, even with a last-minute change and a minor error on my part that they were able to resolve.  The board was mostly assembled by pick and place, including the Zynq's BGA package and DDR3 memory, with some parts later hand placed.  I had the first prototypes delivered in late November 2019 and the prototype up and running by early March 2020; the pandemic meant I had a lot more time at home, so development continued at a rapid pace from then onwards.  The plan was to demonstrate the prototype in person at EMFCamp 2020, but for obvious reasons that event was cancelled.


(Prototype above is the unused 7020 variant.)

Results

I have a working 1GSa/s oscilloscope that can acquire and display >22,000 wfm/s.  There is more work to be done, but at this stage the prototype demonstrates that the hardware is capable of providing most of what the acquisition system of a modern digital oscilloscope needs.

The attached waveform images show:
1. 5MHz sine wave, AM modulated with a 10kHz sine wave
2. 5MHz sine wave, FM modulated with a 10kHz sine wave + 4MHz bias
3. 5MHz positive ramp wave
4. Pseudorandom noise
5. Chirp waveform (~1.83MHz)
6. Sinc pulse

The video also shows a live preview of the instrument in action.

Where next?

Now I'm at a turning point with this project.  I had to change jobs and relocate for personal reasons, so I took a two-month break from the project while starting with my new employer and moving house.  But I'm back to looking at this project, still in my spare time.  And, having reflected a bit ...

A couple of weeks ago the Raspberry Pi CM4 was released.  It's not pin-compatible with the CM3, which is of course expected as the Pi 4 has a PCI Express interface and an additional HDMI port.  It would make sense to migrate this project to the CM4; the faster processor and GPU are an advantage here.  (I have already tested the CSI-2 implementation with a Pi 4 and no major compatibility issues were noted.)

There are also a lot of other things I want to experiment with.  For instance, I want to move to a dual-channel DDR3 memory interface on the Zynq, with 1GB of total memory space available.  This would quadruple the sample memory and more than double the memory bandwidth (>3.8GB/s usable), which is beneficial when it comes to trying to do some level of rendering on the FPGA.  It's worth looking at the PCI-e interface on the CM4 for data transfer, but CSI-2 still offers some advantages, namely that it wouldn't be competing for bandwidth with the USB 3.0 or Ethernet peripherals if those are used in a scope product.  PCI-e would also require a higher grade of Zynq with a hard PCI-e core, or a slower HDL implementation of PCI-e, which might present other difficulties.

I'm also considering completely ripping up the Pi CM4 concept and going for a powerful SoC+FPGA like a Zynq UltraScale+, but that would be a considerably more expensive part to use, and would perhaps change the goal of this project from developing an inexpensive open-source oscilloscope to developing a higher-performance oscilloscope platform for enthusiasts.  The cheapest UltraScale+ part is around $250 USD, but it features an on-die dual ARM Cortex-A53 complex (a considerable upgrade over the ARM Cortex-A9 in the Zynq 7014S), a Mali-400 GPU and DDR4 memory controllers; this would allow for e.g. an oscilloscope capture engine with gigabytes of sample memory (up to 32GB in the largest parts!), and we'd no longer be restricted to running over a limited-bandwidth camera interface, which would improve performance considerably.

I think there's a great deal of capability here when it comes to supporting modularity.  What I'd like to offer is something along the lines of the original Tek mainframes, where you can swap an acquisition module in and out to change the function of the whole device.  A small EEPROM would identify the right software package and bitstream to load, so you could convert your oscilloscope on the fly into e.g. a small VNA, a spectrum analyser, or a CAN/OBD-II module with analog channels for automotive work.

The end goal is a handheld, mains- and/or battery-powered oscilloscope with a capacitive 1280x800 touchscreen (plus optional HDMI output), 4 channels at 100MHz bandwidth and 1GSa/s multiplexed, a minimum of 500MSa of acquisition memory, and at least a 30,000 waveforms/second display rate (with a goal of 100k waves/sec rendered and 200k waves/sec captured for segmented memory modes).  I also intend to offer a two-channel arbitrary signal generator output on the product, utilising the same FPGA as for acquisition.  The product is intended to be open source in its entirety, including the FPGA design and schematics, the firmware on the acquisition processor, and the application software on the main processor.  I'll publish details on these in short order, provided there's sufficient interest.

Full disclosure - I have some commercial interest in the project.  It started as just a hobby project, but I've done everything through my personal contracting company, and have been in discussions with a few individuals and companies regarding possible commercialisation.  No decisions have been made yet, and I intend for the project to be FOSHW regardless of the commercial aspects.

The questions for everyone here are:
- Does a project like this interest you?   If so, why?   If not, why not?

- What would you like to see from a Mk2 development, if anything: a more expensive oscilloscope that competes with e.g. the 2000-series from many manufacturers and aims more towards the professional engineer, or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about a $500 USD difference in pricing.  An UltraScale part makes this a >$800 USD product - which almost certainly changes the marketability.)

- Would you consider contributing to the development of an oscilloscope?  It is a big project for just one guy to complete.  There is DSP, trigger engines, an AFE, modules, casing design and many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed, and there is documentation to be written.  I'm envisioning the capability to add modules to the software, and the hardware interfaces will be documented so that third-party modules can be developed and used.

- I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer term name.  I'll welcome any suggestions.

Online YetAnotherTechie

  • Regular Contributor
  • *
  • Posts: 204
  • Country: pt
I vote for this to be the most interesting post of the year, Great work!!  :-+
 
The following users thanked this post: egonotto, Trader

Offline artag

  • Frequent Contributor
  • **
  • Posts: 672
  • Country: gb
I like the idea, mostly because I REALLY like the idea of an open-source scope that's got acceptable performance. Something I could add a feature to when I want it.

I think you've made fabulous progress, and I think you very much need to watch out for the upcoming problems :

It's very easy to get lost in a maze of processor directions - stretch too far and your completion date disappears over the horizon, set your targets too low and you end up with something that's obsolete before it's finished.

The same goes for expansion and software plans - there's a temptation to do everything, resulting in plans that never get finalised, or an infrastructure that's too big for the job.

I don't say this negatively, to put you off - I put these points forward as problems that need a solution.

I'm interested in helping if I can.
 
The following users thanked this post: tom66, james_s

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 1950
  • Country: ua
We need the help of Chinese manufacturers to produce and sell cheap hardware for a project like this :)

It would also be nice to see an Altera Cyclone version.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 16104
  • Country: us
It's an incredibly impressive project; with that kind of output from just one person in their personal time, it would not surprise me if you got a few job offers from T&M companies.  It's very interesting from the standpoint of seeing in detail how a modern DSO works, although I think you will be really hard pressed to compete with companies like Rigol and Siglent.

I personally would be very interested if the alternative was spending $10k on a Tektronix, Keysight or other A-list brand, but the better-known Chinese companies deliver an incredible amount of bang for the buck.  Building something this complex in small quantities is expensive, and it's probably too complex for all but the most hardcore DIY types to assemble themselves.  On top of that, the enclosure is a very difficult part, at least in my own experience.  Making a nice enclosure and front panel approaching the quality of even a low-end commercial product is very difficult.

Not trying to rain on your parade though; this looks very cool and I'll be watching with interest to see how it pans out.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Thanks for the comments.

The hardware isn't all that expensive - the current BOM works out at just under US$200 in 500-off quantity.  That means it would be feasible to sell this at US$500-600, which, although a little more expensive than the cheapest Rigol/Siglent offerings, may be more attractive given the open source aspect.  Adding the UltraScale and removing the Pi adjusts the BOM by +US$150, which starts pushing the device into the US$800-$1000 range.  Perhaps it would be worth discussing with Xilinx - I know they give discounted FPGAs to dev kit manufacturers - if they are interested in this project they may consider a discounted price.  The Zynq is the most expensive single part on the board.  But so far, all pricing is based on Digi-Key strike prices with no discounts assumed.

The idea would be to sell an instrument that has only a touchscreen and 4 probe inputs.  The mechanical design of a keypad, knobs, buttons etc and injection moulded case would be significant, and the tooling is not cheap, so an extruded aluminum case would be used.  Of course a touchscreen interface wouldn't be attractive to everyone, so a later development might include an external keypad/knob assembly,  or you could use a small keyboard.  Optionally, the unit could contain a Li-Ion battery and charger, which would allow it to be used away from power for up to 5-6 hours.  (The present power consumption is a little too high for practical battery use, but the Zynq and AFE components are running continuously with no power saving considerations right now.)

There isn't much chance someone could hand-assemble a prototype like this.  The BGA and DDR memory make it all but impossible for even the most enthusiastic members of this forum.  There was a reason that, despite having (in my own words) reasonably decent hand-soldering skills, I went with a manufacturer to build the board: I did not want gremlins from a BGA ball going open circuit at random, for instance.  I was very careful with the stencil specification and design to ensure the BGA was not over-pasted.  The 7014S board has been perfectly reliable, all things considered, even while the Zynq was running at 75°C+ before a heatsink was fitted.

While I've not had any offers from T&M companies (although I've not asked or offered), I did get my present job as an FPGA/senior engineer with this project as part of the interview process (as Dave says: bring prototypes, they love them!).  There are a couple of T&M companies in the Cambridge area, but I'm not really interested in selling out to anyone.  I wanted to develop this project because there is no great open source scope out there yet, and it was a great way to get used to high-speed FPGAs and memory interfaces.  I've never laid out a DDR memory interface before, so it felt incredibly validating that it worked first time.

Regarding Altera parts there would not be much point in using them - the cheapest comparable Altera SoC is double the price of the Zynq and has a slower, older ARM architecture.  The Zynq is a really nice processor!
« Last Edit: November 16, 2020, 08:46:41 am by tom66 »
 
The following users thanked this post: egonotto

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
The idea would be to sell an instrument that has only a touchscreen and 4 probe inputs.  The mechanical design of a keypad, knobs, buttons etc and injection moulded case would be significant, and the tooling is not cheap, so an extruded aluminum case would be used.  Of course a touchscreen interface wouldn't be attractive to everyone, so a later development might include an external keypad/knob assembly,  or you could use a small keyboard.

Have a look at Micsigs. Their UI is really good, much faster/easier than traditional "twisty knob" DSOs.

Note that they now make a model with knobs at the side; I'd bet that's because a lot of people were put off by the idea of a touchscreen-only device.

(Although having owned one for a couple of weeks I can say that any fears are unfounded. It works perfectly)

Optionally, the unit could contain a Li-Ion battery and charger, which would allow it to be used away from power for up to 5-6 hours.

Micsig again...  >:D
« Last Edit: November 16, 2020, 11:01:59 am by Fungus »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
I'm aware of the Micsig devices, and I do quite like them.  So this is comparable to a Micsig device but with open source hardware and firmware, plus modular capability - the ability to remove the AFE and replace it with a different module for a different task, for example.  Plus considerably better system and acquisition performance.

I'm a fairly avid user of touchscreen devices in general, and while I think there is a case for knobs and dials on a scope, they can be replicated with competent UI design and a capacitive multitouch screen.  The problem with adding knobs and dials to a portable device is that once you drop it, you risk damage to the encoders and plastics.  A fully touchscreen device, with the BNCs being the only exposed elements, would be more rugged.  Of course, you shouldn't drop any test equipment, but once it is in a portable form factor, it WILL get dropped by someone.
 

Offline artag

  • Frequent Contributor
  • **
  • Posts: 672
  • Country: gb
I've always tended to prefer real knobs and dials, especially when comparing PC-based instruments against traditional ones.  But we're all getting more used to touchscreens: what they usually lack is a really good, natural usage paradigm.  I haven't tried the Micsig devices but have noticed people commenting positively on them.

The WIMP interface is very deeply embedded in us now and tablets don't quite meet it. Some gestures (swipe, pinch) have become familiar but not enough to replace a whole panel.  I think we'll slowly get more used to it, and learn how to make that more natural.

I like the modularity idea, but it's hard to know where to place an interface. The obvious modules are display, acquisition memory and AFE. Linking the memory and display tightly gives fast response to control changes. Linking the memory and AFE gives faster acquisition. There's also some value in using an existing interface for one of those. Maybe USB3 is fast enough, though I think using the camera interface is really cunning. Another processor option - which also has a camera interface and a GPU - is the NVidia Jetson.

My feeling is that AFE should be tightly coupled to memory, so that as bandwidths rise they can both improve together. As long as the memory to display interface is fast enough for human use, it should be 'fast enough'. The limitation of that argument is when a vast amount of data is acquired and needs to be processed before display. Process in the instrument and you can't take advantage of the latest computing options for display processing. Process in the display/PC and you have to transfer through the bottleneck.


 
 

Offline tv84

  • Super Contributor
  • ***
  • Posts: 2377
  • Country: pt
with open source hardware and firmware, plus modular capability

Love your modular capability and the implementation.  You are one of the first to do a real one-man implementation like this.

Usually many people talk about this but stop short of beginning such a daunting task: they never settle on the processors or the modularity boundaries, some only do SW, others only do HW, etc., etc...

Many other choices could have been made, but you definitely deserve congratulations!  :clap: :clap: :clap:

Whatever you decide to do, just keep it open source and you will always be a winner!

RESPECT.
 
The following users thanked this post: tom66, cdev

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #10 on: November 16, 2020, 12:48:10 pm »
I like that post-processing is done inside the GPU. Having a PCI Express interface on the newer RPis would be a benefit. It is also an option to use a SoC chip directly on the board and a lower-cost FPGA (Spartan-6 LX45T, for example) that reads data from the ADC, does some rudimentary buffering and streams it onto the PCI Express bus.
« Last Edit: November 16, 2020, 12:53:46 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline fcb

  • Super Contributor
  • ***
  • Posts: 2049
  • Country: gb
  • Test instrument designer/G1YWC
    • Electron Plus
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #11 on: November 16, 2020, 01:12:59 pm »
Great work so far.  Although the cost/performance benefit you've outlined is not sufficient to make it a compelling commercial project, perhaps it could find a niche?

I'd probably have turned the project on its head -> what's the best 'scope I can build with a Pi Compute Module for £XXX?  Also, I wouldn't be afraid of a touchscreen/WIMP interface; if implemented well it can be pretty good - although I still haven't seen one YET that beats the usability of an old HP/Tek.
https://electron.plus Power Analysers, VI Signature Testers, Voltage References, Picoammeters, Curve Tracers.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #12 on: November 16, 2020, 01:18:52 pm »
My concept for modularity is to keep it very simple.  The AFE board will essentially be the HMCAD15xx ADC plus the necessary analog front end hardware and the 1GHz clock.
 
Then the ADC interfaces with n LVDS pairs going into the Zynq. If I put the 484 ball Zynq on the next board, then I have the capacity for a large number of LVDS pairs. 

The modules could be double-wide,  i.e. a 4 channel AFE,  or single-wide,  i.e. a 2 channel AFE  and you could then use some arbitrary other module in the second slot.  The bitstream and software would be written to be as flexible as possible, although it is possible that not all modular configurations will be allowable.  (For instance it might not be possible to have two output modules at once;  the limits would need to be defined.)

For instance, you could have a spectrum analyser front end that contains the RF mixers, filters and ADC, and the software on the Zynq just drives the LO/PLL over SPI to sweep, and performs an FFT on the resulting data.  The module is different - but gathering the data over a high speed digital link is a common factor.

The modules would also be able to share clocks or run on independent clock sources.  The main board could provide a 10MHz reference (which could also be externally provided or outputted) and the PLLs on the respective boards would then generate the necessary sampling clock.

The bandwidth of this interface is less critical than it sounds: for an 8Gbit/s ADC (1GSa/s, 8-bit) just 10 LVDS pairs are needed.  A modern FPGA has 20+ on a single bank, and on the Xilinx 7-series parts each pin has an independent ISERDESE2/OSERDESE2, which means you can deserialise and serialise as needed on the fly on each pin.  There are routing and timing considerations, but I've not had an issue with the current block running at 125MHz; I think I might run into issues trying to get it above 200MHz with a standard -3 grade part.
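A quick sanity check on that lane count, sketched in Python; the 8-data-pair split plus frame and bit clock pairs is my assumption based on the HMCAD1511-style LVDS interface mentioned earlier:

```python
# Sanity-check the "10 LVDS pairs for 8 Gbit/s" figure quoted above.
adc_rate_sps = 1e9                 # 1 GSa/s
sample_bits  = 8
payload_bps  = adc_rate_sps * sample_bits      # 8 Gbit/s of raw samples

data_pairs     = 8                 # assumed: 8 LVDS data pairs (HMCAD1511-style)
overhead_pairs = 2                 # assumed: frame clock + bit clock pairs

per_pair_bps = payload_bps / data_pairs
print(f"payload       : {payload_bps / 1e9:.1f} Gbit/s")
print(f"per data pair : {per_pair_bps / 1e6:.0f} Mbit/s")
print(f"total pairs   : {data_pairs + overhead_pairs}")
```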

My unfinished modularity proposal is here:
https://docs.google.com/spreadsheets/d/1hpS83vqnude4Z6Bsa2l4NRGaMY8nclvE8eZ_bKncBDo/edit?usp=sharing

So the idea is that most of the modules are dumb but we have a SPI interface if needed for smarter module interfacing,  which allows e.g. an MCU on the module to control attenuation settings.

The MCU could communicate, via a defined standard, what its capabilities are. If the instrument doesn't have the software it needs, then it can pick that up over the internet via Wi-Fi or ethernet or from a USB stick.
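As a purely hypothetical sketch of what such a capability descriptor could look like (none of these field names come from the actual proposal linked above), the idea is that the scope reads a small, fixed-layout blob from the module's EEPROM or MCU and uses it to pick a bitstream and software package:

```python
from dataclasses import dataclass

@dataclass
class ModuleDescriptor:
    vendor_id: int      # identifies the module maker (hypothetical field)
    module_id: int      # selects the bitstream + software package to load
    slot_width: int     # 1 = single-wide, 2 = double-wide
    n_channels: int
    has_mcu: bool       # True if e.g. attenuation is controlled over SPI

def parse_descriptor(raw: bytes) -> ModuleDescriptor:
    """Decode a fixed-layout descriptor read from the module."""
    if len(raw) < 6:
        raise ValueError("descriptor too short")
    return ModuleDescriptor(
        vendor_id=int.from_bytes(raw[0:2], "little"),
        module_id=int.from_bytes(raw[2:4], "little"),
        slot_width=raw[4] >> 4,
        n_channels=raw[4] & 0x0F,
        has_mcu=bool(raw[5] & 0x01),
    )

# Example blob: vendor 0x0066, module 1, double-wide, 4 channels, has an MCU.
print(parse_descriptor(bytes([0x66, 0x00, 0x01, 0x00, 0x24, 0x01])))
```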

One other route I have is to use a 4-lane CSI module as the Pi does support that on the CM3/CM4.  This doubles available transfer bandwidth.  I do need to give PCI-e a good thought though because it allows bidirectional transfer - the current solution is purely unidirectional.

IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.  In fact, currently the Pi controls the Zynq over SPI,  and that is slow enough to cause issues, so I will be moving away from that in a future version.
 
The following users thanked this post: Simon_RL

Offline jxjbsd

  • Regular Contributor
  • *
  • Posts: 119
  • Country: cn
  • 喜欢电子技术的网络工程师
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #13 on: November 16, 2020, 02:25:31 pm »
 :-+
Very good work.  I very much agree with keeping it simple, with only the main functions implemented for now.  It would be great if most of the functions of a Tek 465 were implemented; others, such as advanced triggers and FFT, can be added later.  Make only one core board, and implement the various control knobs or touch screens through external boards; this could increase the number of core boards in use.  Simple and flexible may be the advantages of open source hardware.  Programming may be the difficulty of this project.
« Last Edit: November 16, 2020, 02:32:31 pm by jxjbsd »
Analog instruments can tell us what they know, digital instruments can tell us what they guess.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #14 on: November 16, 2020, 02:46:27 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #15 on: November 16, 2020, 02:48:35 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.

True, but the FPGA would still need to have some kind of management firmware on it for some parts,  for instance setting up DMA transfer sizes and trigger settings.  You could write that all in Verilog, but it becomes a real pain to debug.  The balance of CPU for easy software tasks and HDL for easy hardware tasks makes the most sense, and some of this stuff is low-latency so you ideally want to keep it away from a non-realtime system like Linux.  (The UltraScale SOC has a separate 600MHz dual ARM Cortex-R5 complex for realtime work - which is an interesting architecture.)  But, having the ability for the Pi to write and read directly from memory space on the Zynq side would be really compelling.  I may need to get the PCI-e reference manual and see what the interface and requirements look like there.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #16 on: November 16, 2020, 02:59:34 pm »
Very impressive work! I really hope you will succeed in your "quest"!
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #17 on: November 16, 2020, 03:14:02 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.

True, but the FPGA would still need to have some kind of management firmware on it for some parts,  for instance setting up DMA transfer sizes and trigger settings.  You could write that all in Verilog, but it becomes a real pain to debug.  The balance of CPU for easy software tasks and HDL for easy hardware tasks makes the most sense, and some of this stuff is low-latency so you ideally want to keep it away from a non-realtime system like Linux.  (The UltraScale SOC has a separate 600MHz dual ARM Cortex-R5 complex for realtime work - which is an interesting architecture.)  But, having the ability for the Pi to write and read directly from memory space on the Zynq side would be really compelling.  I may need to get the PCI-e reference manual and see what the interface and requirements look like there.
The beauty of a PCI interface is that it basically does DMA transfers so Linux doesn't need to get in the way at all. The only thing the host CPU needs to do is setup the acquisition parameters and the FPGA can start pushing data into the GPU. Likely the GPU can signal the FPGA directly to steer the rate of the acquisitions. In the end a GPU has a massive amount of processing power compared to an ARM core for as long as you can do parallel tasks. I have made various realtime video processing projects with Linux and since all the data transfer is DMA based the host CPU is loaded by only a few percent. System memory bandwidth is something to be aware of though.
« Last Edit: November 16, 2020, 03:17:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #18 on: November 16, 2020, 06:26:49 pm »
The beauty of a PCI interface is that it basically does DMA transfers so Linux doesn't need to get in the way at all. The only thing the host CPU needs to do is setup the acquisition parameters and the FPGA can start pushing data into the GPU. Likely the GPU can signal the FPGA directly to steer the rate of the acquisitions. In the end a GPU has a massive amount of processing power compared to an ARM core for as long as you can do parallel tasks. I have made various realtime video processing projects with Linux and since all the data transfer is DMA based the host CPU is loaded by only a few percent. System memory bandwidth is something to be aware of though.

It's a fair point. There's still some acquisition control that the FPGA needs to be involved in, for instance sorting out the pre- and post-trigger stuff.

The current architecture roughly works as such:
- Pi configures acquisition mode (ex. 600 pts pre trigger, 600 pts post trigger, 1:1 input divide, 1 channel mode, 8 bits, 600 waves/slice, trigger is this type, delay by X clocks, etc.)
- Zynq acquires these waves into a rolling buffer - the buffer moves through memory space so there is history for any given acquisition (~25 seconds with current memory)
- Pi interrupts before next VSYNC to get packet of waves (which may be less than the 600 waves request)
- Transfer is made by the Zynq over CSI - Zynq corrects trigger positions and prepares DMA scatter-gather list then my CSI peripheral transfers ~2MB+ of data with no CPU intervention

There is close fusion between the Zynq ARM, FPGA fabric, and the Pi - and since the Pi is not hard real time (Zynq ARM is running baremetal) you'd need to be careful there with what latency you introduce into the system.

It would be nice if we could say to the Pi, e.g. "find waveforms at this address", and when the Pi snoops in on the PCIe bus, the FPGA fabric intercepts the request and translates each waveform dynamically so we don't have to do the pre-trigger rotation on the Zynq ARM.  Right now, the pre-trigger rotation is done by reading from the middle of the pre-trigger buffer, then the start, then the post-trigger buffer (though I believe this could be simplified to two reads with some thought).  Perhaps it's possible using the SCU on the Zynq - it's got a fairly sophisticated address translation engine.  I'd like to avoid doing a read-rotate-writeback operation, as that triples the memory bandwidth requirements on the Zynq, and already 1GB/s of the memory bandwidth (~60%) is used just writing data from the ADC.  The Zynq ARM has to execute code and read/write data from this same RAM, and although the 512KB L2 cache on the Zynq is generous, it's not perfect.
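For anyone following along, here is a NumPy sketch of that read-order trick (my own illustration, not the Zynq code): the pre-trigger samples live in a ring buffer, so reading from the wrap point to the end, then from the start to the wrap point, then the post-trigger buffer, yields time-ordered data in sequential reads without a read-rotate-writeback pass:

```python
import numpy as np

def reassemble(pre_buf, write_ptr, post_buf):
    """Linearise one acquisition from its ring-buffered pre-trigger part.

    The oldest pre-trigger sample sits at 'write_ptr'; three sequential
    reads (tail of the ring, head of the ring, post-trigger buffer) give
    the time-ordered record.
    """
    return np.concatenate((pre_buf[write_ptr:], pre_buf[:write_ptr], post_buf))

# Example: a 600-sample pre-trigger ring that wrapped at index 200.
pre  = np.roll(np.arange(600), 200)      # simulated wrapped ring buffer
post = np.arange(600, 1200)
out  = reassemble(pre, write_ptr=200, post_buf=post)
assert np.array_equal(out, np.arange(1200))
```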
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 16104
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #19 on: November 16, 2020, 06:38:53 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but while I've owned several tablets I've yet to find a really good use case for one other than looking at datasheets. Can't stand them on most stuff and it annoys me whenever someone points to something and makes a finger smudge on my monitor. I could potentially make an exception in the case of a portable scope to have in addition to my bench scope although I think in the case of this project my interest is mostly academic, it's a fascinating project and an incredible achievement but not something I'm likely to spend money on. Roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support, or a used TDS3000 that can be upgraded to 500MHz. That said, I've heard that Digikey pricing on FPGAs is hugely inflated so you may be able to drop the cost down substantially.
 
The following users thanked this post: nuno

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #20 on: November 16, 2020, 06:41:11 pm »
Another challenge I am working on is how to do the rendering all on the FPGA.

This would free up the CPUs of the Pi and the GPU could be used for e.g. FFTs and 2D acceleration tasks. 

The real challenge is that waveforms are stored linearly, but every X pixel on the display needs a different Y coordinate for a given wave value.  So it is not conducive to bulk write operations at all (e.g. AXI bursts).  The 'trivial' improvement is to rotate the buffer 90 degrees (which is what my SW renderer does) so that your accesses tend to hit the same row and are more likely to be sitting in the cache.  But this is still a non-ideal solution, so the problem has to be broken down into tiles or slices.  The Zynq should read, say, 128 waveform values (which fits nicely into a burst), repeat that for every waveform (with appropriate translations applied), accumulate all the pixel values for that slice in BRAM (~12 bits x 128 x 1024 for a 1024-pixel-high canvas with 12-bit intensity grading = ~1.5Mbit, or about half of all available BlockRAM), and then write that back into DDR, in order to get the most performance with burst operations used as much as possible.
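A NumPy sketch of that tile idea (illustration only; the real thing would be HDL with BRAM accumulators): each tile covers a band of display columns, hit counts are accumulated per pixel with 12-bit saturation, and the finished tile is handed back for a burst write:

```python
import numpy as np

def render_tile(waves, x0, tile_w, height, max_count=(1 << 12) - 1):
    """Accumulate intensity-graded hit counts for one tile of columns.

    waves: (n_waveforms, n_samples) array, one 8-bit sample per column.
    The tile covers columns [x0, x0 + tile_w) so the accumulator stays
    small enough to live in on-chip BRAM (~12 bits x tile_w x height).
    """
    tile = np.zeros((tile_w, height), dtype=np.uint16)
    chunk = waves[:, x0:x0 + tile_w]                        # burst-friendly slice
    ys = (chunk.astype(np.int64) * (height - 1)) // 255     # map 0..255 -> row
    for col in range(chunk.shape[1]):
        counts = np.bincount(ys[:, col], minlength=height)
        tile[col] = np.minimum(tile[col] + counts, max_count)
    return tile    # caller writes this tile back to DDR in one burst

# Toy run: 1000 noisy waveforms, 1024-pixel-tall canvas, 128-column tile.
rng = np.random.default_rng(0)
waves = rng.integers(0, 256, size=(1000, 512), dtype=np.uint8)
print(render_tile(waves, x0=0, tile_w=128, height=1024).sum())   # 1000 * 128 hits
```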

It implies a fairly complex core and that's without considering multiple channels (which introduce even more complexity, because do you handle each as a separate buffer, or accumulate each with a 'key value' or ...?)  The complexity here is that the ADC multiplexes samples,  so in 1ch mode the samples are  A0 .. A7,  but in 2ch mode they are  A0 B0 A1 B1 .. A3 B3 which means you need to think carefully about how you read and write data.  You can try to unpack the data with small FIFOs on the acquisition side, but then you need to reassemble the data when you stream it out. 
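The de-interleaving itself is conceptually simple (shown below as a NumPy sketch, not the project's HDL); the hard part is doing it at line rate while keeping the burst structure intact:

```python
import numpy as np

def deinterleave(raw, n_channels):
    """Split the multiplexed ADC stream back into per-channel arrays.

    In 1ch mode the stream is A0 A1 A2 ..., in 2ch mode A0 B0 A1 B1 ...,
    so a reshape with one column per channel recovers each channel.
    """
    usable = len(raw) - (len(raw) % n_channels)
    return raw[:usable].reshape(-1, n_channels).T

# 2ch example: A0 B0 A1 B1 ...
raw = np.array([10, 20, 11, 21, 12, 22, 13, 23], dtype=np.uint8)
chans = deinterleave(raw, 2)
print(chans[0])   # channel A: [10 11 12 13]
print(chans[1])   # channel B: [20 21 22 23]
```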

This is essentially solving the rotated polygon problem that GPU manufacturers solved 20 years ago, but solving it in a way that can fit in a relatively inexpensive FPGA and doing it at 100,000 waves/sec (60 Mpoints/sec plotted).  And then doing it with vectors or dots between points - ArmWave is just dots for now though there is a prototype slower vector plotter I have written somewhere.

If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

Current source code for ArmWave, the rendering engine presently used for anyone curious:
https://github.com/tom66/armwave/blob/master/armwave.c

This is about as fast as you will get an ARM rendering engine while using just one core and it has been profiled to death and back again.  4 cores would make it faster although some of the limitation does come from memory bus performance.  It's at about 20 cycles per pixel plotted right now.
« Last Edit: November 16, 2020, 06:45:47 pm by tom66 »
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #21 on: November 16, 2020, 06:56:14 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but ... roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support

Trust me: The knobs are OK for things like adjusting the timebase but a twisty, pushable, multifunction knob is not better for navigating menus, choosing options, etc.

eg. Look at the process of enabling a bunch of on-screen measurement on a Siglent. Does that seem like the best way?

https://youtu.be/gUz3KYp_5Tc?t=2925
« Last Edit: November 16, 2020, 07:11:38 pm by Fungus »
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 22232
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #22 on: November 16, 2020, 07:15:49 pm »

Look at the process of enabling a bunch of on-screen measurement on a Siglent. Does that seem like the best way?
Best is accurate:
https://www.eevblog.com/forum/testgear/testing-dso-auto-measurements-accuracy-across-timebases/
Avid Rabid Hobbyist
 

Offline sb42

  • Contributor
  • Posts: 42
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #23 on: November 16, 2020, 07:17:34 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but ... roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support

Trust me: The knobs are OK for things like adjusting the timebase but a twisty, pushable, multifunction knob is not better for navigating menus, choosing options, etc.

eg. Look at the process of enabling a bunch of on-screen measurement on a Siglent. Does that seem like the best way?

https://youtu.be/gUz3KYp_5Tc?t=2925

Also, with a USB port it might be possible to design something around a generic USB input interface like this one:
http://www.leobodnar.com/shop/index.php?main_page=product_info&cPath=94&products_id=300
 
The following users thanked this post: tom66

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #24 on: November 16, 2020, 07:35:36 pm »
Another challenge I am working on is how to do the rendering all on the FPGA.

This would free up the CPUs of the Pi and the GPU could be used for e.g. FFTs and 2D acceleration tasks. 
I'm not saying it can't be done but you also need to address (literally) shifting the dots so they match the trigger point.

IMHO you are at a cross road where you either choose for implementing a high update rate but poor analysis features and few people being able to work on it (coding HDL) versus a lower update rate and having lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #25 on: November 16, 2020, 08:13:24 pm »
If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the screen display (600 pixels).

 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #26 on: November 16, 2020, 08:13:55 pm »
IMHO you are at a cross road where you either choose for implementing a high update rate but poor analysis features and few people being able to work on it (coding HDL) versus a lower update rate and having lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is though the render engine might be complex HDL you'll still be able to read linear wave data in any instance - I'd like for instance this to interface well with Numpy arrays and Python slices as well as a fast C API for reading the data. 

But it would be good to ask.  Do people really, genuinely benefit from 100kwaves/sec?  I have regarded intensity grading as a "must have" so the product absolutely will have that, but is 30kwaves/sec "good enough" for almost all uses, such that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say the intensity grading function is that much more useful than my Rigol DS1074Z despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory so you can go back up to ~90 seconds in time to see what the scope was showing at that moment and I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
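A loose software analogue of that alignment step (assumed framing of 8 samples per 64-bit word; this is just an illustration, not the FIFO logic):

```python
def align_to_trigger(samples: bytes, trig_index: int, word_bytes: int = 8) -> bytes:
    """Return the sample stream starting exactly at the trigger sample.

    Whole words before the trigger word are skipped (the 'dummy word'
    reads), then the residual byte offset inside that word is handled
    separately - the FPGA does that part with a barrel rotate.
    """
    word_start = (trig_index // word_bytes) * word_bytes   # skip whole words
    byte_off = trig_index % word_bytes                     # intra-word offset
    aligned = samples[word_start:]          # after discarding dummy words
    return aligned[byte_off:]               # after the intra-word rotate

stream = bytes(range(64))
print(list(align_to_trigger(stream, trig_index=19))[:8])   # [19, 20, ... 26]
```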
« Last Edit: November 16, 2020, 08:20:10 pm by tom66 »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #27 on: November 16, 2020, 08:15:54 pm »
If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the screen display (600 pixels).

Yes, it seems likely to me that it is transmitted as an embedded line in whatever stream carries the video data.  The window is about 600 pixels across, so it makes sense that they would be using e.g. the top eight lines for this data, two per channel.  It is also clear that Rigol use a 32-bit data bus instead of my 64-bit data bus, as their holdoff/delay counter resolution is half of mine (my holdoff setting has 8ns resolution due to the 125MHz clock; theirs is 4ns at 250MHz).  They use a Spartan-6 with fewer LUTs than my 7014S, so perhaps there's a trade-off there.

I am almost certain (though have not physically confirmed it) that the Rigol is doing all the render work on the FPGA.  Perhaps they are using the i.MX CPU for the Anti-Alias mode which gets very slow on longer timebases as it appears to be rendering more (all?) of the samples.

The Rigol also does not decimate the data when doing the waveform rendering, so you can get aliasing in some cases although they are fairly infrequent corner cases.
« Last Edit: November 16, 2020, 08:32:58 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #28 on: November 16, 2020, 08:34:06 pm »
IMHO you are at a cross road where you either choose for implementing a high update rate but poor analysis features and few people being able to work on it (coding HDL) versus a lower update rate and having lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is though the render engine might be complex HDL you'll still be able to read linear wave data in any instance - I'd like for instance this to interface well with Numpy arrays and Python slices as well as a fast C API for reading the data. 

But it would be good to ask.  Do people really, genuinely benefit from 100kwaves/sec?  I have regarded intensity grading as a "must have" so the product absolutely will have that, but is 30kwaves/sec "good enough" for almost all uses, such that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say the intensity grading function is that much more useful than my Rigol DS1074Z despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory so you can go back up to ~90 seconds in time to see what the scope was showing at that moment and I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Personally I don't have a real need for high waveform update rates.  Deep memory is useful (either as a continuous record or as a segmented/history buffer; segmented and history are very much the same).  But with deep memory also comes the requirement to be able to process it fast.

Nearly two decades ago I embarked on a similar project where I tried to cram all the realtime & post-processing into the FPGAs.  In the end you only need to fill the width of a screen, which is practically 2000 pixels.  This greatly reduces the bandwidth towards the display section but needs a huge effort on the FPGA side.  The design I made could go through 1Gpt of 10-bit data within 1 second and (potentially) produce multiple views of the data at the same time.  The rise of cheap Asian oscilloscopes made me stop the project.  If I were to take on such a project today I'd go the GPU route and do as little as possible inside an FPGA.  I think creating trigger engines for protocols and special signal shapes will be challenging enough already.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #29 on: November 16, 2020, 09:17:40 pm »
The bandwidth of this interface is less critical than it sounds,  for 8Gbit/s ADC (1GSa/s 8-bit) then just 10 LVDS pairs are needed.  A modern FPGA has 20+ on a single bank and on the Xilinx 7 series parts, each has an independent ISEREDESE2/OSERDESE2 which means you can deserialise and serialise as needed on the fly on each pin.   There are routing and timing considerations but I've not had an issue with the current block running at 125MHz,  I think I might run into issues trying to get it above 200MHz with a standard -3 grade part.
As you go into the gigasample range, ADCs quickly become JESD204B-only, which is itself a separate big can of worms. And many of them will happily send 12 Gbps per lane and even more; for that you will need something more recent than the 7 series (or a Virtex-7 - I think those can go that high, though I have no personal experience).

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #30 on: November 16, 2020, 09:31:10 pm »
There's JESD204B support in the Zynq 7000 series, though only via the gigabit transceivers which are on the much more expensive parts.

I've little doubt that I'll cap the maximum performance around the 2.5GSa/s range - at that point memory bandwidth becomes a serious pain.

I have a plan for how to get up to 2.5GSa/s using regular ADC chips - it'll require an FPGA as 'interface glue' to achieve, but it could be a relatively small FPGA.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #31 on: November 16, 2020, 10:16:31 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news if you're going down an all digital trigger route (probably a good idea) then the vast majority of "trigger" types are simply combinations of 2 thresholds and a one shot timer, which are easy enough. That can then be passed off to slower state machines for protocol/serial triggers. But without going down dynamic reconfiguration or using multiple FPGA images supporting a variety of serial trigger types becomes an interesting problem all of its own.
 

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 2473
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #32 on: November 16, 2020, 11:28:06 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me too" scopes. I'm sure there is a downside to this though - price, sample rate limitations etc. Also, if there is to be a hi-res option, maybe have a user-adjustable setting for how many averaged samples per final sample, or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
« Last Edit: November 16, 2020, 11:30:41 pm by Circlotron »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #33 on: November 16, 2020, 11:47:20 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me too" scopes. I'm sure there is a downside to this though - price, sample rate limitations etc. Also, if there is to be a hi-res option, maybe have a user-adjustable setting for how many averaged samples per final sample, or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
Part of the fun of open source is you can ignore the entrenched ways of doing things and offer choices to the user (possibly ignoring IP protection along the way). A programmable FIR + CIC + IIR acquisition filter could implement a wide range of useful processing.
 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 58
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #34 on: November 17, 2020, 12:54:57 am »
A suggestion: replace the barrel connector (for power, I assume) and the USB type A receptacle with 2 USB-C female receptacles. Both USB-C connectors should support PD (power delivery), allowing up to 20 Volts @ 5 Amps to be sunk through either connector. This assumes that the power draw of your project is <= 100 Watts. If the power draw is <= 60 Watts then any compliant USB-C cable could be used to supply power. If the power draw is <= 45 Watts then a product like the Mophie USB-C 3XL battery could be used to make the 'scope portable. Dual role power (DRP) would also be desirable, so if a USB key is connected to either USB-C port then it could source 5 Volts at say around 1 Amp. A USB-C (M) to USB-A (F) adapter or short cable could be supplied with the 'scope for backward compatibility. I guess most folks interested in buying this 'scope will own one or more USB-C power adapters, so it frees the OP from needing to provide one (so the price should go down). Many significant semiconductor manufacturers have USB-C offerings (ICs) with evaluation boards available (but not many eval boards do DRP).
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #35 on: November 17, 2020, 03:18:37 am »
Personally I don't have a real need for high waveform update rates.

I don't recall any discussions here about waveforms/sec, waveform record/playback, etc.

I remember a lot of heated discussions about things like FFT and serial decoders.

 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #36 on: November 17, 2020, 03:27:56 am »
A suggestion: replace the barrel connector (for power, I assume) and the USB type A receptacle with 2 USB-C female receptacles. Both USB-C connectors should support PD (power delivery), allowing up to 20 Volts @ 5 Amps to be sunk through either connector. If the power draw is <= 45 Watts then a product like the Mophie USB-C 3XL battery could be used to make the 'scope portable.

(Seen from another perspective)

You mentioned adding a battery to this but that means:
a) Extra design work
b) A lot of charging circuitry on the PCB
c) Adding a battery compartment/connector
d) A lot of safety concerns
e) Higher price
f) Bigger size/extra weight

Making it work with suitably rated power banks makes a lot more sense.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 16104
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #37 on: November 17, 2020, 06:42:47 am »
The circuitry required to manage a battery pack would be absolutely trivial compared to what has already been achieved here. This is a well developed area, every laptop for the last 15 years at least has mastered the handling of a li-ion battery pack.

For what it's worth, I have not been impressed with USB-C, my work laptop has it and I have to use dongles for everything. The cables are more fragile and more expensive than USB-3, the standard is still a mess after all this time as IMO it tries to be everything to everybody and the result is just too complex. I have never been a fan of using USB for power delivery, a dedicated DC power jack is much nicer.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #38 on: November 17, 2020, 08:19:06 am »
IMHO you are at a crossroads where you either choose to implement a high update rate but poor analysis features with few people being able to work on it (coding HDL), or a lower update rate with lots of analysis features and many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is that even though the render engine might be complex HDL, you'll still be able to read linear waveform data in any case - I'd like this, for instance, to interface well with NumPy arrays and Python slices, as well as a fast C API for reading the data. 

But it would be good to ask: do people really, genuinely benefit from 100kwaves/sec?  I have regarded intensity grading as a "must have" so the product absolutely will have that, but is 30kwaves/sec "good enough" for almost all uses, such that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say its intensity grading function is that much more useful than my Rigol DS1074Z's, despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory, so you can go back up to ~90 seconds in time to see what the scope was showing at that moment.  I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.

To get the discussion back on track, let me chime in on some of the questions here.

For the purposes of a nice intensity/colour graded waveform display, a very high display rate is a game of diminishing returns. Basically, if you look at, let's say, 10 MHz AM modulated with 100 Hz, you will need a few thousand wfms/s to make it smooth so the display does not show a moire effect. And if you are watching something interactively, that is already faster than the human eye and appears to us as full real time.

I consider retrigger time important, but I could live with a 20-30 us retrigger time (30-50 kwfms/s) if sequence mode were much faster, on the level of 1-2 us. In that mode no data processing is performed, so that should be reachable. Picoscopes are like that: they also capture the full data in a buffer, but send fast screen updates of decimated data for display, with the full data delayed.

There are many scopes, even cheap ones, that do a great job as an interactive instrument. What would be groundbreaking is an open source analytical scope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #39 on: November 17, 2020, 08:22:40 am »
Why would I use USB-C?

- Power adapters are more expensive and less common
- The connector is more fragile and expensive
- I don't need the data connection back (who needs their widget to talk to their power supply?)
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design  (present supported range is 7V - 15V)

The plan for the power supply of the next generation product was to have everything sitting at VBAT (3.4V ~ 4.2V) and all DC-DC converters running off that.  It's within the range that a buck/LDO stage can work to give a 3.2V rail (good enough for 3.3V rated devices) and a boost stage can provide 5V.

Now, I was going to design it so that if you connected a 5V source it could charge the battery, so a simple USB type A to barrel jack cable can be supplied.  That would be inexpensive enough because we still have a buck input stage for single-cell Li-Ion charging (I'm keen to avoid multi-cell designs)  but at a maximum 'safe' limit of 5W from such a source, I doubt the scope could run without slowly discharging its battery.

When charging the battery, this device could pull up to 45W (36W charging + 9W application) - that's roughly a 1C charge rate for a 10000mAh cell.
« Last Edit: November 17, 2020, 08:24:37 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #40 on: November 17, 2020, 10:56:11 am »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems that these make life a lot easier.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #41 on: November 17, 2020, 11:21:36 am »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems that these make life a lot easier.

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.

At the time I was looking at a relay-based attenuator for the -40dB step and then a gain/attenuator block for +6dB to -38dB (think it was a TI part, I'll dig it out) which would get you from +6dB to -78dB attenuation.  Enough to cope with typical demands of a scope (1mV/div to 10V/div).

I was also looking into how to do 20MHz B/W limit and whether it would be practical to vary the varicap voltage with some PWM channels on an MCU to fine tune bandwidth limits.
« Last Edit: November 17, 2020, 11:23:37 am by tom66 »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #42 on: November 17, 2020, 11:33:25 am »
The existing AFE is purely ac-coupled.  Attached schematic.  The ADC needs about 1Vp-p input to get full scale code.

Presently the ADC diffpairs go over SATA cables, they are cheap and (usually) shielded.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 3486
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #43 on: November 17, 2020, 11:39:02 am »
Personally I don't have a real need for high waveform update rates. Deep memory is useful

Ditto. Normally on our benches we already have a high waveform rate scope.
I believe many of us have (or would buy) a USB/PC scope to cover applications where deep memory is needed.

For a project like this I would put all my poker chips on getting as much memory as possible. All in.
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 
The following users thanked this post: 2N3055

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 2473
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #44 on: November 17, 2020, 12:04:20 pm »
Over the past year and a half I have been working on a little hobby project to develop a decent high performance oscilloscope, with the intention for this to be an open source project.  By 'decent' I class this as something that could compete with the likes of the lower-end digital phosphor/intensity graded scopes e.g. Rigol DS1000Z,  Siglent SDS1104X-E,  Keysight DSOX1000, and so on. <snip>  I'll welcome any suggestions.
Sounds reminiscent of a newsgroup posting by a certain fellow from Finland some years ago... Let's hope it becomes as big.  :-+
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #45 on: November 17, 2020, 12:12:48 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news if you're going down an all digital trigger route (probably a good idea) then the vast majority of "trigger" types are simply combinations of 2 thresholds and a one shot timer, which are easy enough. That can then be passed off to slower state machines for protocol/serial triggers. But without going down dynamic reconfiguration or using multiple FPGA images supporting a variety of serial trigger types becomes an interesting problem all of its own.

As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.
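To make the idea concrete, here is a minimal NumPy sketch of sub-sample trigger placement using plain linear interpolation between the two samples bracketing the threshold - a simplified stand-in for the calibrated LUT described above, not the planned FPGA implementation:

import numpy as np

def fractional_trigger_offset(samples, threshold):
    """Find the first rising-edge crossing of `threshold` and estimate where
    between the two bracketing samples the real crossing sits, assuming the
    edge is locally a straight line.  Returns (index, fraction) with the
    fraction in [0, 1) sample periods after `index`, or None if no crossing."""
    samples = np.asarray(samples, dtype=float)
    below = samples[:-1] < threshold
    above = samples[1:] >= threshold
    crossings = np.nonzero(below & above)[0]
    if len(crossings) == 0:
        return None
    i = int(crossings[0])
    y0, y1 = samples[i], samples[i + 1]
    frac = (threshold - y0) / (y1 - y0)       # linear interpolation
    return i, frac

A calibration table built from the measured front-end step response could replace the straight-line assumption, which is where the per-device LUT idea comes in.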

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstruction around trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly much at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.
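A small NumPy sketch of the zero-stuffing point, assuming a plain windowed-sinc kernel (the real filter design is still open):

import numpy as np

def sinc_interpolate_8x(samples, taps_per_phase=8):
    """8x sin(x)/x interpolation by zero-stuffing: insert 7 zeros between
    input samples, then low-pass with a windowed-sinc kernel.  Because 7 of
    every 8 filter inputs are zero, a real implementation only needs
    `taps_per_phase` multiplies per output point (i.e. a polyphase filter)."""
    L = 8
    n = np.arange(-(taps_per_phase * L) // 2, (taps_per_phase * L) // 2)
    kernel = np.sinc(n / L) * np.hamming(len(n))
    stuffed = np.zeros(len(samples) * L)
    stuffed[::L] = samples
    return np.convolve(stuffed, kernel, mode='same')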

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.

I think you could do more if filters are run on post-processed, triggered data.   Total numeric 'capacity' is approx 300MHz * 210 DSPs = 63 GMAC/s.    But at that point it comes down to how fast you can get data through your DSP blocks and they are spread across the fabric, which requires very careful design when crossing columns as that's where the fabric routing resource is more constrained.  I'd also be curious what the power consumption of the Zynq looks like when 63 GMAC/s of number crunching is being done - but it can't be low.  I hate fans with a passion.  This scope will be completely fanless.  It will heatsink everything into the extruded aluminum case. 
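For reference, the tap budget those numbers imply, assuming one MAC per tap per sample (the rates below are just illustrative):

DSP_FMAX_HZ = 300e6          # assumed achievable DSP48 clock
NUM_DSP = 210                # DSP slices in a 7014S-class device
total_macs_per_s = DSP_FMAX_HZ * NUM_DSP      # ~63 GMAC/s

for rate_sps in (1e9, 125e6, 10e6):           # raw vs. post-trigger/decimated rates
    taps = total_macs_per_s / rate_sps        # one MAC per tap per sample
    print(f"{rate_sps/1e6:6.0f} MSa/s -> ~{taps:7.0f} taps sustainable")
# ~63 taps at the full 1 GSa/s, but thousands once the data has been decimated.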

Regarding digital (serial) triggers, my thought was around the area of a small configurable FSM that can use the digital comparator outputs from any channel.  The FSM would have a number of programmable states and generate a trigger pulse when it reaches the correct end state. This itself is a big project; it would need to be designed, simulated and tested, which is why I have stuck with a fairly simple edge trigger (the pulse width, slope, runt and timeout triggers are fairly trivial and the core technically supports them, although they are unimplemented in software for now.)  The FSM for complex triggers could have a fairly large 'program', and the program could be computed dynamically (e.g. for an I2C address trigger, it would start with a match for a start condition, then look for the relevant rising edges on each clock and compare SDA at that cycle - the Python application would be able to customise the sequence of states the FSM needs to pass through to generate a trigger, in a -very- basic assembly language.)
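To make the 'trigger program' idea concrete, here is a toy software model - purely illustrative; the comparator encoding and the I2C-flavoured example program are invented for the sketch, not the planned HDL:

# Each program entry is (condition, next_state). A condition is a predicate on
# the per-channel comparator outputs for the current sample; reaching the end
# state fires the trigger. A real FSM would add timeouts and reset rules.
def make_i2c_start_matcher():
    # START condition: SDA falls while SCL is high (channels: 0=SDA, 1=SCL)
    prev = {'sda': 1}
    def cond(comp):
        fired = prev['sda'] == 1 and comp[0] == 0 and comp[1] == 1
        prev['sda'] = comp[0]
        return fired
    return cond

def run_trigger_program(program, comparator_stream):
    """Step a list of (condition, next_state) entries over comparator samples;
    return the sample index at which the end state is reached, else None."""
    state = 0
    for i, comp in enumerate(comparator_stream):
        cond, nxt = program[state]
        if cond(comp):
            state = nxt
            if state >= len(program):
                return i          # trigger fires here
    return None

# Tiny example program: wait for an I2C START, then for SCL to go high again.
program = [
    (make_i2c_start_matcher(), 1),
    (lambda comp: comp[1] == 1, 2),   # SCL high -> first clock after START
]
stream = [(1, 1), (1, 1), (0, 1), (0, 0), (0, 1)]   # (SDA, SCL) comparator bits
print(run_trigger_program(program, stream))          # -> 4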

Serial decode itself would likely use Sigrok, though its pure-Python implementation may cause performance issues in which case a compiled RPython variant may be usable instead.    There is some advantage to doing this on the Zynq in spare cycles if using e.g. a 7020 with the FPGA accelerating the level comparison stage so the ARM just needs to shift bits out a register to decide what to do with each data bit.
« Last Edit: November 17, 2020, 12:17:32 pm by tom66 »
 

Offline capt bullshot

  • Super Contributor
  • ***
  • Posts: 2565
  • Country: de
    • Mostly useless stuff, but nice to have: wunderkis.de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #46 on: November 17, 2020, 02:06:42 pm »
Nothing to say yet, but joining this quite interesting thread by leaving a post.
BTW, to OP: great work.
Safety devices hinder evolution
 
The following users thanked this post: tom66

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #47 on: November 17, 2020, 02:52:47 pm »
Why would I use USB-C?
Because it's super convenient. You have a single power supply that can provide any voltage *you* (as designer) want, as opposed to whatever the power supply happens to provide. It's fairly easy to implement - Cypress has a fully standalone controller chip which handles everything: you use a few resistors to tell it which voltages you need, and it gives you two voltages out. One will be in the range you've set up with the resistors, the other is a "fallback" which will be 5V if the power supply can't provide what you want, so you can indicate to the user that they connected the wrong supply. Or you can use an STM32G0 MCU, which has an integrated USB-C PD PHY peripheral. USB-C PD is specifically designed to follow a "waterfall" model: if a supply supports a higher voltage, it must also support all standard values of lower voltages. Which is why you can request, say, 9 V at 3 Amps, and any PSU that provides more than 27 W of power will be guaranteed to work with your device and provide that 9 V, regardless of its support for higher voltages.

- Power adapters are more expensive and less common
Really? Everybody's got one by now with any smart phone purchased in the last 2-3 years. They are also used with many laptops - these are even better.
- The connector is more fragile and expensive
No, it's not more fragile. And not expensive either, if you know where to look. Besides - did I just see someone complaining about a $1 part in a $200+ BOM?
- I don't need the data connection back (who needs their widget to talk to their power supply?)
That's fine - you can use power-only connector.
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design  (present supported range is 7V - 15V)
No you don't - see my explanation above.
« Last Edit: November 17, 2020, 03:19:07 pm by asmi »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #48 on: November 17, 2020, 03:31:46 pm »
Still, isn't adding USB-C adding more complexity to an already complex project? I recall Dave2 having quite a bit of difficulty implementing USB-C power for Dave's new power supply.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: ogden

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #49 on: November 17, 2020, 03:34:47 pm »
asmi, could you link to that Cypress solution?  I will give it a look, but it does (as nctnico says) seem like added complexity for little or no benefit.

In my fairly modern home with a mix of Android and iOS devices I have one USB-C cable and zero USB-C power supplies.  My laptop (a few years old, not ultrabook format) still uses a barrel jack connector.  Girlfriend's laptop is the same and only 1 year old.  I've no doubt that people have power supplies with Type C,  but barrel-jack connectors are more common and assuming this device will ship with a power adapter, it won't be too expensive to source a 36W/48W 12V AC-adapter whereas a USB Type-C adapter will almost certainly cost more.

And there will be that not-insignificant group of people who wonder "why does it not work with -cheap 5W smartphone charger-?"  When you have to qualify it with things like "only use a 45W or higher rated adapter", the search-space of usable adapters drops considerably.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #50 on: November 17, 2020, 03:53:08 pm »
asmi, could you link to that Cypress solution?  I will give it a look, but it does (as nctnico says) seem like added complexity for little or no benefit.
https://www.cypress.com/products/ez-pd-barrel-connector-replacement-bcr
I would recommend buying this eval kit: https://www.cypress.com/documentation/development-kitsboards/cy4533-ez-pd-bcr-evaluation-kit It's very cheap ($25) and allows you to evaluate all features of the chip.
But I find it hilarious that you've already declared it to be complex and of no benefit without even seeing it :palm:

In my fairly modern home with a mix of Android and iOS devices I have one USB-C cable and zero USB-C power supplies.  My laptop (a few years old, not ultrabook format) still uses a barrel jack connector.  Girlfriend's laptop is the same and only 1 year old.  I've no doubt that people have power supplies with Type C,  but barrel-jack connectors are more common and assuming this device will ship with a power adapter, it won't be too expensive to source a 36W/48W 12V AC-adapter whereas a USB Type-C adapter will almost certainly cost more.
Take a look at Amazon - you can buy a 45 W USB-C power supply for like $15-20.
Barrel jacks are good exactly until you connect the wrong one and cause some fireworks.

And there will be that not-insignificant group of people who wonder "why does it not work with -cheap 5W smartphone charger-?"  When you have to qualify it with things like, only use >45W or more rated adapter, then the search-space of usable adapters drops considerably.
I kind of suspect that idiots are not exactly the target audience for a DIY oscilloscope project :-DD
But again, nothing stops you from having both options if you really want that stone-age barrel jack. It's trivial to implement.
« Last Edit: November 17, 2020, 04:05:04 pm by asmi »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #51 on: November 17, 2020, 04:24:09 pm »
I didn't regard the Cypress solution as too complex - I'd not seen it until you literally just linked it!  I just have an aversion to USB for power supplies because it's not ideal in many cases, and for *this product* it is a complex solution with little obvious benefit.  As nctnico says, what does it add?  It needs to add a good thing to be worth the complexity.

There's a big TVS on the input that clamps at ~17V on the present board.  If you put reverse polarity or too many volts in it either blows the fuse or crowbars the external supply.  A barrel jack is about the most rugged DC connector you can get for the size whereas USB-C port could get contaminated with dust/dirt or have connection pads damaged - after all it is a 20 pin connector.  I've had plenty of headaches with USB connectors before failing in odd, usually somewhat intermittent ways: both the Lightning connector on my old iPhone and the USB Micro connector on my old Samsung S5 failed in intermittent fashion and required replacement.  So personally I see the barrel jack as better from an engineering environment perspective where you have more dust and contaminants than typical. 

And you have a point about the target-market being technical but then you will also have non-technical people that might want to use such an instrument e.g. education or hobbyists.  That USB-C supply is twice the retail price of a comparable Stontronics PSU with a barrel jack output, and it's not clear what it offers over the barrel jack for most users.

Don't get me wrong, nothing is set in stone yet,  it might be the best solution for Mk2 of the product.  I will of course listen to feedback in that regard.

 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 58
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #52 on: November 17, 2020, 05:23:23 pm »
Still, isn't adding USB-C adding more complexity to an already complex project? I recall Dave2 having quite a bit of difficulty implementing USB-C power for Dave's new power supply.

There are lots of one-chip solutions. Have a look at tindie.com for examples, with schematics in most cases. If a USB-C power adapter (or battery - the RP-PB201 is cheaper and more powerful than the Mophie 3XL that I mentioned previously) doesn't have enough power for the 'scope (you always get at least 5 Volts (Vsafe) at 1.5 Amps) then flash a red LED.

Dave seemed to be intimidated by the 659 page USB PD spec (plus the 373 page USB Type C spec). Dave "spat the dummy" for theatrical effect; either that, or he should get out in the real world more often! For example: write a device driver for product A to talk to product B via transport C (e.g. USB, Ethernet, BT) using OS/environment "D". That may involve thousands of pages across multiple specs. You don't read them like a novel, you use them like a dictionary. And when product A fails to talk in some situation to product B, you contact support for product A (say) and point out that it doesn't comply with the spec they claim to implement with a reference to chapter and verse (of the relevant spec).

I proposed two USB-C ports to replace the barrel connector _and_ the USB Type A receptacle. So either one could be power in, while the other functionally replaced the USB Type A host. In that latter role USB-C is more flexible as it can play either the role of (data) host or device. So your PC connection could be via USB-C where the PC is "host" and the 'scope is the device. OTOH you could connect a USB memory key and the 'scope would play the host role (and source a bit of power).

Whoever suggested connecting a USB-C power adapter to a USB dongle might find that hard to do if the USB-C power adapter has a captive cable. [They would need a USB-C F-F dongle.] If the power adapter didn't have a captive cable (just a USB-C female receptacle) then the connection can be made but nothing bad would happen (i.e. no magic smoke); the dongle would be powered at 5 Volts but would find no USB host at the other end of the cable to talk to. Maybe the dongle would flash a LED suggesting something was wrong. When you use symmetrical cables many more stupid combinations are possible (so devices and their users need to be a bit smarter) but you need fewer (a lot fewer) cable variants. That is a big win for not much pain.
 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 58
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #53 on: November 17, 2020, 06:23:58 pm »
There's a big TVS on the input that clamps at ~17V on the present board.  If you put reverse polarity or too many volts in it either blows the fuse or crowbars the external supply.  A barrel jack is about the most rugged DC connector you can get for the size whereas USB-C port could get contaminated with dust/dirt or have connection pads damaged - after all it is a 20 pin connector.  I've had plenty of headaches with USB connectors before failing in odd, usually somewhat intermittent ways: both the Lightning connector on my old iPhone and the USB Micro connector on my old Samsung S5 failed in intermittent fashion and required replacement.  So personally I see the barrel jack as better from an engineering environment perspective where you have more dust and contaminants than typical. 

And you have a point about the target-market being technical but then you will also have non-technical people that might want to use such an instrument e.g. education or hobbyists.  That USB-C supply is twice the retail price of a comparable Stontronics PSU with a barrel jack output, and it's not clear what it offers over the barrel jack for most users.

I'm proposing an infinitely cheaper PSU supplied with your 'scope :-) That is, no PSU at all. Sounds like you need 15 Volts in, and while that is a common USB-C PSU voltage, the el cheapo ones only supply 5 Volts, and sometimes 9 Volts as well. If you need 15 Watts or less then you could boost an RPi 4 PSU (and they are around $US10). All USB-C PSU schematics that I have seen (that can supply > 5 Volts) have a pass MOSFET that gets switched off in a fault condition. For power-only USB-C the 24/22 pin connector can come down to as few as 6 active pins (see https://www.cuidevices.com/blog/an-introduction-to-power-only-usb-type-c-connectors ).
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #54 on: November 17, 2020, 07:11:33 pm »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems that these make life a lot easier.

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.
I have attached a design I created based on earlier circuits. IIRC it is intended to offer a 100MHz bandwidth and should survive being connected to mains (note the date!).
Left to right, top to bottom:
- Input section with attenuators. Not sure whether the capacitance towards the probe is constant.
- Frequency compensation using varicaps. This works but requires a (digitally) adjustable voltage of up to 50V, and I'm not sure how well a calibration holds over time. Using trim capacitors might be a better idea for a first version.
- over voltage protection
- high impedance buffer
- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.

Nowadays I'd skip the external gain control and use the internal gain control of the HMCAD1511/20 devices. It could be a nice Christmas project to see how it behaves.

There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, JohnG, 2N3055

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #55 on: November 17, 2020, 07:55:06 pm »

- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.


What's required  to preserve waveform fidelity is the flattest phase delay, a Bessel-Thompson filter.  I've spent a lot of time studying the subject.  It's not well treated in the literature, but well documented in scopes.  Of course, all high order analog filters are problematic to produce.  Though with an FPGA one need only get close and the FPGA can trim the last bit.

As the order goes up the Bessel-Thompson passband approaches the Gaussian passband.  So the impulse response of a 10th order Bessel-Thompson is approximately a time delayed Gaussian spike.
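For anyone who wants to see the flat-delay property numerically, here is a quick SciPy sketch of an analog 10th-order prototype normalised for unit group delay - just an illustration, not a proposed front-end design:

import numpy as np
from scipy import signal

# 10th-order analog Bessel(-Thomson) low-pass; the 'delay' normalisation keeps
# the passband group delay flat, which is the property that matters here.
b, a = signal.bessel(10, 1.0, btype='low', analog=True, norm='delay')
w, h = signal.freqs(b, a, worN=np.linspace(0.01, 3.0, 500))
group_delay = -np.gradient(np.unwrap(np.angle(h)), w)
print(group_delay[0], group_delay[100])   # ~1.0 s well into the passband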

Tom is the reason I dropped work on hacking the Instek GDS-2000E after I blew the scope. It no longer made sense to replace it.

Have Fun!
Reg
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #56 on: November 17, 2020, 09:55:24 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstruction around trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly much at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8bit lookup when the sample point before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase shifted somewhere or the trigger will be jittering 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift is dependent on the scopes architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.

Similarly, if you think a 60 tap filter isn't very useful, recall that Lecroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict the thinking to DSP blocks as 18x18 multipliers, or to the idea that they are the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way, and keeping all "raw" ADC samples in acquisition memory (Lecroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).
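As a rough illustration of what a short acquisition filter buys - a plain boxcar here, not Lecroy's actual ERES coefficients:

import numpy as np

def boxcar_eres(samples, taps=16):
    """Average `taps` neighbouring samples.  For noise-limited signals this
    gains roughly 0.5*log2(taps) effective bits at the cost of bandwidth,
    so 16 taps is about 2 extra bits (ERES uses weighted, non-boxcar taps)."""
    kernel = np.ones(taps) / taps
    return np.convolve(samples, kernel, mode='same')

rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0, 2.0, 100000)          # DC level + 2 LSB rms noise
print(noisy.std(), boxcar_eres(noisy).std())        # noise drops ~4x (~2 bits)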

If memory is your cheap resource then some of the conventional assumptions are thrown out.
 

Offline nfmax

  • Super Contributor
  • ***
  • Posts: 1338
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #57 on: November 17, 2020, 10:02:07 pm »

- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.


What's required  to preserve waveform fidelity is the flattest phase delay, a Bessel-Thompson filter.  I've spent a lot of time studying the subject.  It's not well treated in the literature, but well documented in scopes.  Of course, all high order analog filters are problematic to produce.  Though with an FPGA one need only get close and the FPGA can trim the last bit.

As the order goes up the Bessel-Thompson passband approaches the Gaussian passband.  So the impulse response of a 10th order Bessel-Thompson is approximately a time delayed Gaussian spike.

Tom is the reason I dropped work on hacking the Instek GDS-2000E after I blew the scope. It no longer made sense to replace it.

Have Fun!
Reg

There are a class of filters with a linear-phase (Bessel-like) passband and an equiripple stopband, originally described by Feistel & Unbehauen [1], for which Williams [2] gives some limited design tables: I have used these with success as anti-aliasing filters where waveform fidelity is important. There is a conference paper by Huard et al. [3] - which I don't have a copy of, unfortunately - that describes more recent progress in similar filter designs. Huard worked at Tektronix, which may be a clue about the applications being addressed!

[1] Feistel, Karl Heinz, and Rolf Unbehauen. Tiefpässe Mit Tschebyscheff-Charakter Der Betriebsdämpfung Im Sperrbereich Und Maximal Geebneter Laufzeit. Frequenz 19, no. 8 (January 1965). https://doi.org/10.1515/FREQ.1965.19.8.265.

[2] Williams, Arthur Bernard, and Fred J. Taylor. Electronic Filter Design Handbook. 3rd ed. New York: McGraw-Hill, 1995. ISBN 978-0-07-070441-1

[3] Huard, D.R., J. Andersen, and R.G. Hove. Linear Phase Analog Filter Design with Arbitrary Stopband Zeros. In [Proceedings] 1992 IEEE International Symposium on Circuits and Systems, 2:839–42. San Diego, CA, USA: IEEE, 1992. https://doi.org/10.1109/ISCAS.1992.230091.
« Last Edit: November 17, 2020, 10:12:28 pm by nfmax »
 
The following users thanked this post: egonotto

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #58 on: November 17, 2020, 11:25:09 pm »
First of all, as many others have said, very good job! Especially for a one man's band.

I totally agree with nctnico's view on having more stuff done in higher level processing, because the lower you go, the fewer people will contribute to it. Ideally I would like to see minimal hardware and everything done in software (I know it's not possible). It's also one way it can distinguish itself from the current entry-level scopes on the market.

The use of the compute module is totally understandable, but newer versions seem liable to break the interfaces... so why not use the main RPi boards? It would be much more future-proof, as well as more upgrade-friendly as new and more powerful RPis come out. Just food for thought.

I always prefer to have a standalone instrument, because my computer (even my phone) is always too busy - running my development environment, browsing the web, email, etc. - and as it is, it already has too little screen space available.

And although I'm not against touch screens and agree they may have an advantage for, say, handling (a lot of) menus, I also hate grease on my screens, as it interferes with readability. Touch is also bad at precision and there's no instant haptic feedback. As long as I can easily add some keys to the instrument (even if a custom USB keyboard), I'm OK with it.

I can see people designing their own cases and 3D printing them.
« Last Edit: November 18, 2020, 02:12:27 am by nuno »
 

Offline Spirit532

  • Frequent Contributor
  • **
  • Posts: 447
  • Country: by
    • My website
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #59 on: November 18, 2020, 01:52:56 am »
Have you considered implementing Andrew Zonenberg's glscopeclient for your UI?
 

Online JohnG

  • Frequent Contributor
  • **
  • Posts: 390
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #60 on: November 18, 2020, 12:50:56 pm »
This is a fantastic bit of work, so I hope it takes off.

I can only contribute a couple of requests, since this sort of work is way out of my realm. One request would be that if the front end is modular, it would be nice if it were flexible enough to accommodate some more unusual front ends, like a multi-GHz sampling scope or a VNA front end. The latter may not even make sense within the context of the project.

Again, really nice work!

John
"Those who learn the lessons of history are doomed to know when they are repeating the mistakes of the past." Putt's Law of History
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 3486
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #61 on: November 18, 2020, 01:42:44 pm »
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3497
  • Country: lv
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #62 on: November 18, 2020, 02:37:28 pm »
Still isn't adding USB-C adding on more complexity to an already complex project?
Right. Also, USB-C will take time and resources away from actual scope work! Those who suggest USB-C could start adapter development right now and share it with the author so he can copy-paste it if he finds it useful.

To the *scope* wishlist I would add a DDC (digital down converter) with a configurable digital IF filter, to implement (possibly but not necessarily) a realtime spectrum analyzer. For those who wonder why a plain FFT is not good enough, the answer is: using just an FFT you either get frequency resolution or performance, but not both. Calculating a gazillion-point FFT is slow - as every user of the FFT option on common scopes knows. With a DDC we downconvert the frequency band of interest down to DC (0 Hz) so we can use a low-order FFT, like 128-1024 points. Further reading: Tektronix MDO scope info.
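A minimal NumPy/SciPy sketch of that DDC idea - the sample rate, centre frequency, decimation factor and filter below are placeholders for illustration; a real implementation would likely use a CIC plus compensation FIR in the FPGA:

import numpy as np
from scipy import signal

fs = 1.0e9            # assumed capture rate (1 GSa/s)
f_center = 100.0e6    # assumed band of interest
decim = 64            # zoom factor: analysis span ~ fs/decim
nfft = 1024

def lowpass_decimate(x, q, taps=511):
    """Simple FIR low-pass at the new Nyquist, then keep every q-th sample."""
    h = signal.firwin(taps, 1.0 / q)
    return np.convolve(x, h, mode='same')[::q]

def ddc_spectrum(samples):
    """Mix the band of interest down to 0 Hz with a complex NCO, low-pass
    filter + decimate, then run a small FFT on the narrowband result instead
    of a huge FFT over the whole raw record."""
    t = np.arange(len(samples)) / fs
    baseband = samples * np.exp(-2j * np.pi * f_center * t)   # NCO mix
    narrow = lowpass_decimate(baseband, decim)
    win = np.hanning(nfft)
    spec = np.fft.fftshift(np.fft.fft(narrow[:nfft] * win))
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=decim / fs)) + f_center
    return freqs, 20 * np.log10(np.abs(spec) + 1e-12)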
« Last Edit: November 18, 2020, 02:39:20 pm by ogden »
 
The following users thanked this post: egonotto

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #63 on: November 18, 2020, 02:45:19 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstruction around trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly much at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8bit lookup when the sample point before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase shifted somewhere or the trigger will be jittering 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift is dependent on the scopes architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.

Similarly, if you think a 60 tap filter isn't very useful, recall that Lecroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict the thinking to DSP blocks as 18x18 multipliers, or to the idea that they are the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way, and keeping all "raw" ADC samples in acquisition memory (Lecroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).

If memory is your cheap resource then some of the conventional assumptions are thrown out.

Interpolation for sample alignment and display infill are fundamentally different even if the operation is identical.

High order analog filters have serious problems with tolerance spreads.  So sensibly, the interpolation to align the samples and the correction operator should be combined using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8-point operators.  The work required is 8 multiply-adds per output sample, which is tractable with an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version, as the footprints are the same but it has lots more resources.  I am still nervous about the 7014 not having sufficient DSP blocks.

Interpolation for display can be done in the GPU where  the screen update rate limitation provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.
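For the short display records, an FFT interpolator is essentially zero-padding in the frequency domain; a minimal sketch (illustrative only):

import numpy as np

def fft_interpolate(samples, factor=8):
    """Zero-pad the spectrum of a short real record and inverse-transform,
    which is equivalent to ideal band-limited interpolation for display
    infill.  Only practical because display records are short."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    return np.fft.irfft(padded, n=factor * n) * factor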

As for all the other features such as spectrum and vector analysis, we have discussed those at great length. 
The To Do list is daunting.

Reg


 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #64 on: November 18, 2020, 03:16:03 pm »
If anyone hasn't figured it yet, rhb (Reg) is the American contributor to this project - he's been a good advisor/counsel through this whole project
 
The following users thanked this post: egonotto, 2N3055

Offline dave j

  • Regular Contributor
  • *
  • Posts: 91
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #65 on: November 18, 2020, 04:46:25 pm »
I don't know how familiar you are with GPU programming, but if you're considering using the GPU for rendering it's worth noting that the various Pi GPUs are tile-based - which can have significant performance implications depending upon what you are doing.

There's a good chapter on performance tuning for tile-based GPUs in the OpenGL Insights book, which the authors have helpfully put online.

I'm not David L Jones. Apparently I actually do have to point this out.
 
The following users thanked this post: tom66, egonotto, Fungus, 2N3055

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #66 on: November 18, 2020, 06:19:43 pm »
We did have an early prototype using the Pi's GPU but the problem was the performance even at 2048 points was worse than a software renderer.  We (me and a friend who is GL-experienced) looked into using custom software on the VPU/QPU but the complexity of writing for an architecture with limited examples put us off.  There is some documentation from Broadcom but the tools are poorly documented (and most are reverse engineered.)

One approach we considered was re-arranging the data in buffers to more closely represent the tiling arrangement of the GPU - this would be 'cheap' to do on the FPGA using a BRAM lookup.

Thanks for the link though, that is an interesting (and perhaps useful) resource.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #67 on: November 18, 2020, 06:49:05 pm »
Perhaps an alternative could be a Jetson Nano module. Using the SO-DIMM format and costing $129 in single quantities, it comes with a GPU which is also useful for raw number crunching.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #68 on: November 18, 2020, 06:59:34 pm »
Interesting idea - but it would throw out the window any option for battery operation if that heatsink size is any indication of the power dissipation.

Whether that would be a deal-breaker for people I don't know. I think battery operation is a nice-to-have, especially if the device itself is otherwise portable.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #69 on: November 18, 2020, 07:03:38 pm »
I wouldn't put portability as a priority - most scopes we use aren't battery-ready from the factory.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #70 on: November 18, 2020, 07:24:01 pm »
Interesting idea - but it would throw out the window any option for battery operation if that heatsink size is any indication of the power dissipation.

Whether that would be a killer for people I don't know?  I think it's a nice to have especially if the device itself is otherwise portable.
Well, it would simply need a bigger battery. Laptops aren't low power either. To me portability is lowest on the list though. Personally I'd prefer a large screen.

With ultra-low power you'll also be throwing out the possibility of creating a scope with >500MHz bandwidth for example. The ADC on your prototype has a full power bandwidth of 700MHz. Use two in tandem and you can get to 2Gs/s in a single channel in order to meet Nyquist. However supporting 500MHz will probably require several chips which will get warm.

The Jetson Nano seems to need about 10W of power while running at full capacity. There is also a (I think) pin compatible Jetson Xavier NX module with vastly better specs but this also needs more power. The use of these modules would create a lot of flexibility to trade performance for money.
« Last Edit: November 18, 2020, 08:27:12 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #71 on: November 18, 2020, 10:50:09 pm »
It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for single-channel operation (625MSa/s on 4ch). Maximum bandwidth around 300MHz per channel.

If you want a 500MHz 4ch scope, the ADC requirements would be ~2.5GSa/s per channel pair (dropping to 1.25GSa/s per channel with both channels of a pair active), which implies about 5GB/s of data rate into RAM. A 64-bit AXI Slave HPn port in the Zynq clocked at 200MHz maxes out around 1.6GB/s, and all four HP ports get you up to 6.4GB/s, but that would completely saturate the AXI buses, leaving no free slots for readback from the RAM or for code/data accesses.

The fastest RAM configuration supported would be a dual-channel DDR3 configuration at 800MHz (requiring the fastest speed grade), which would bring the total memory bandwidth to just under 6GB/s.

Bottom line: the platform caps out at around 2.5GSa/s in total with the present Zynq 7000 architecture, and we would need to move to an UltraScale part or a dedicated FPGA capture engine for faster capture rates.
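For anyone who wants to sanity-check those figures, the back-of-envelope arithmetic looks roughly like this (ideal peak rates; real AXI/DDR efficiency is lower, and "dual channel at 800MHz" is read here as 2 x 16-bit at 1600MT/s):

adc_into_ram = 2 * 2.5e9 * 1              # two channel pairs at 2.5GSa/s, 1 byte/sample -> ~5 GB/s
axi_hp_port  = (64 / 8) * 200e6           # one 64-bit HP port at 200MHz -> 1.6 GB/s
axi_all_hp   = 4 * axi_hp_port            # all four HP ports saturated -> 6.4 GB/s, no headroom left
ddr3_peak    = (2 * 16 / 8) * (2 * 800e6) # dual-channel 16-bit DDR3-1600 -> 6.4 GB/s peak, <6 GB/s usable
print(adc_into_ram / 1e9, axi_hp_port / 1e9, axi_all_hp / 1e9, ddr3_peak / 1e9)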

So I suppose the question is -- if you were to buy an open-source oscilloscope -- which would you prefer?

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 22232
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #72 on: November 18, 2020, 10:54:47 pm »

So I suppose the question is -- if you were to buy an open-source oscilloscope -- what would you prefer

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
FYI: very close to the specs of a hacked SDS2104X Plus for $1400.
Avid Rabid Hobbyist
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #73 on: November 18, 2020, 10:56:32 pm »
I suspect an instrument like this can't ever compete with a mainstream OEM on bang-per-buck - it has to compete on the uniqueness of being a FOSS product.

That is, you can customise it, you have modularity, you have upgrade routes and flexibility.  But it will always be more expensive in low volumes to produce something like this, that's just an ultimate fact of life.
 
The following users thanked this post: egonotto, nuno

Online tautech

  • Super Contributor
  • ***
  • Posts: 22232
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #74 on: November 18, 2020, 10:59:17 pm »
I suspect an instrument like this can't ever compete with a mainstream OEM on bang-per-buck - it has to compete on the uniqueness of being a FOSS product.

That is, you can customise it, you have modularity, you have upgrade routes and flexibility.  But it will always be more expensive in low volumes to produce something like this, that's just an ultimate fact of life.
Or find a performance/features niche that's not currently being catered for.
Avid Rabid Hobbyist
 
The following users thanked this post: Someone

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #75 on: November 18, 2020, 11:21:28 pm »
It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for a single channel operation (625MSa/s on 4ch).  Maximum bandwidth around 300MHz per channel.

If you want 500MHz 4ch scope, the ADC requirements would be ~2.5GSa/s per channel pair (drop to 1.25GSa/s with a channel pair activated) which implies about 5GB/s data rate into RAM.  The 64-bit AXI Slave HPn bus in the Zynq clocked at 200MHz maxes out around 1.6GB/s and four channels gets you up to 6.4GB/s but that would completely saturate the AXI buses leaving no free slots for RAM accesses for readback from the RAM or for executing code/data.

The fastest RAM configuration supported would be a dual channel DDR3 configuration at 800MHz requiring the fastest speed grade, which would get the total memory bandwidth to just under 6GB/s.

Bottom line is the platform caps out around 2.5GSa/s in total with the present Zynq 7000 architecture,   and would need to move towards an UltraScale or a dedicated FPGA capture engine for faster capture rate.

So I suppose the question is -- if you were to buy an open-source oscilloscope -- what would you prefer

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
I think the same design philosophy can support both options. Maybe even on one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25 Gs/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.

@Tautech: an instrument like this is interesting for people who like to extend the functionality. Another fact is that no oscilloscope currently on the market is perfect. You always end up with a compromise even if you spend US $10k. I'm not talking about bandwidth but just basic features.
« Last Edit: November 18, 2020, 11:41:16 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #76 on: November 19, 2020, 12:17:01 am »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples. So, while you might have to do sinx/x interpolation on that data and more complex reconstruction of trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls. I've still yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not by a particularly large amount at the faster timebases, so it may not be an issue. The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample points before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase-shifted somewhere or the trigger will jitter 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.

Similarly, if you think a 60-tap filter isn't very useful, recall that Lecroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict your thinking to DSP blocks as 18x18 multipliers, or assume they are the only way to implement FIR filters. Likewise, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (Lecroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).

If memory is your cheap resource then some of the conventional assumptions are thrown out.

Interpolation for sample alignment and display infill are fundamentally different even if the operation is identical.

High order analog filters have serious problems with tolerance spreads.  So sensibly, the interpolation to align the samples and the correction operator should be combined using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8 point operators.  Work required is 8 multiply-adds per output sample which is tractable with an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version as the footprints are the same but lots more resources.  I  am still nervous about the 7014 not having sufficient DSP blocks.

Interpolation for display can be done in the GPU where  the screen update rate limitation provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.

As for all the other features such as spectrum and vector analysis we have discussed that at great length. 
The To Do list is daunting.

Reg
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't easily be bolted on later.

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, sinc interpolation is a significant processing cost (a time/area tradeoff), and just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable alone, the challenge is balancing all those demands within the limited resources, hence my suggestion that software-driven rendering is probably a good balance for the project as described.
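To illustrate the fractional-trigger point: the crudest version is a linear interpolation between the two samples straddling the threshold. A real scope folds the front-end response into this rather than assuming a straight line between samples, but a Python sketch of the basic idea looks like:

def fractional_trigger(samples, threshold):
    # Returns (index, fraction) for the first rising-edge crossing of the threshold,
    # where 'fraction' is the sub-sample position between samples[index] and samples[index+1].
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < threshold <= b:
            return i - 1, (threshold - a) / (b - a)
    return None

That fraction is what then has to be carried through to the sample alignment or the plot, or the trace will jitter by a sample on screen.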
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 16104
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #77 on: November 19, 2020, 04:04:39 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.
 
The following users thanked this post: splin

Offline shaunakde

  • Contributor
  • Posts: 11
  • Country: us
    • Shaunak's Lab
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #78 on: November 19, 2020, 04:19:53 am »
Quote
- Does a project like this interest you?   If so, why?   If not, why not?
Yes. This is interesting because I don't think there is a product out there that has been able to take advantage of the recent advances in embedded computing. I would love to see a "USB-C" scope that is implemented correctly, with excellent open-source software and firmware (maybe PicoScopes come close). Bench scopes are amazing, but PC scopes that are Python-scriptable are just more interesting. Maybe not so much for pure electronics, but for science experiments it could be a game-changer.

Quote
- What would you like to see from a Mk2 development - if anything:  a more expensive oscilloscope to compete with e.g. the 2000-series of many manufacturers that aims more towards the professional engineer,  or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about $500USD difference in pricing.  An UltraScale part makes this a >$800USD product - which almost certainly changes the marketability.)
This is a tough one. I personally would like it to remain as it is: a low-cost device and perhaps a better version of the "Analog Discovery" - just so that it can enable applications in the applied sciences etc. But logically, it feels like this should be a higher-end scope that has all the bells and whistles; however, that does limit the audience (and hence community interest), which is not exactly great for an open-source project.

Quote
Would you consider contributing in the development of an oscilloscope?  It is a big project for just one guy to complete.  There is DSP, trigger engines, an AFE, modules, casing design and so many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed,  and there is documentation to be written.  I'm envisioning the capability to add modules to the software and the hardware interfaces will be documented so 3rd party modules could be developed and used.
Yes. My github handle is shaunakde - I would LOVE to help in any way I can (but probably I will be most useful in the documentation for now)


Quote
I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer term name.  I'll welcome any suggestions.
apertumScope - coz latin :P
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #79 on: November 19, 2020, 08:20:25 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant. I'd also be stuck with whatever the manufacturer decided in their hardware design, which might limit future options. And you're stuck if that scope gets discontinued, or has a hardware revision which breaks things. They might patch the software route that you use to load your unsigned binary (if they implement that at all) or change the hardware in a subtle and hard-to-determine way (swap a couple of pairs on the FPGA layout, say).

rhb also discovered the hard way that poking around a Zynq FPGA with 12V DC on the same connector as WE# is not conducive to the long-term function of the instrument.

It's a different story with something like a wireless router because those devices are usually running Linux with standard peripherals - an FPGA is definitely not a standard peripheral.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #80 on: November 19, 2020, 08:22:49 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

 :horse:

This has been discussed to death. It cannot be done in the time frame needed. Reverse engineering an existing scope to the point that you understand everything about its architecture is much harder than simply designing one from scratch. There have been dozens of attempts that ended up with a custom Linux loaded onto the scope and no way to talk to the acquisition engine - mainly running Doom on a scope.
A scope architecture is an interleaved design: hardware acquisition, hardware acceleration, system software, scope software and app-side acceleration (GPU etc.).
The manufacturers have already made the design decisions on how to implement it. You end up with something that, in the end, won't have many more capabilities than what it had at the start. You've just spent 20 engineer-years to get the same capabilities and different fonts...
The only way it might make sense to try would be if an existing manufacturer opened up their design (published all internal details) as a starting point. Which will happen, well, never.

A FOSS scope for the sake of FOSS is a waste of time. A GOOD FOSS scope is not... If a good FOSS scope existed, it would start to spread through academia, hobby and industry, and if it were really used, that would fuel its progress. Remember Linux and KiCad... When did they take off? When they became useful and people started using them, not when they became free. They existed and were free for many years and were happily ignored by most.

And then you would have many manufacturers making them for a nominal price. It would be easy for them, like Arduino and its clones: someone did the hard work of hardware design, and someone else will take care of the software... That is a dream for manufacturing - very low cost... I bet you that what we estimate now to be 600 USD could be had for 300 USD (or less) if mass manufacturing happens...
 
The following users thanked this post: Someone, Kean

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #81 on: November 19, 2020, 08:26:21 am »
Again, its all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes are dynamically aligning the channels based on the [digital] trigger, interpolating the trigger position. Which requires first determining the fractional trigger position (not trivial calculation), and then using that fractional position (at some point in the architecture, could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't be easily bolted on later.

There is no need to use the fractional position in the situation where one wave point plots across one pixel or less. At that point any fractionality is lost to the user unless you do antialiasing. I prototyped antialiasing on the GPU renderer, and the results were poor because the waveform never looked "sharp". I suspect no scope manufacturer makes a true antialiased renderer - it just doesn't add anything.

With that in mind, your sinx/x problem only becomes an issue at faster timebases (<50ns/div), and that is where you start plotting more pixels than you have points, so your input read rate is lower than your output write rate. Your sinx/x filter is essentially a FIR filter with sinc coefficients loaded in and every nth coefficient equal to zero (if I understand it correctly!)  That makes it prime territory for the DSP blocks on the FPGA.
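On the "every nth coefficient is zero" point - for an integer upsampling factor the sinc kernel really does land on zeros at every factor-th tap (except the centre), which is where the multiply-add saving comes from. A quick numpy check, purely illustrative:

import numpy as np

factor = 8                                 # 8x interpolation for the display path
taps = np.arange(-32, 33)                  # kernel taps at the output-sample spacing
kernel = np.sinc(taps / factor)            # ideal (unwindowed) sinc interpolation kernel

on_grid = kernel[::factor]                 # the taps that line up with original input samples
print(np.round(on_grid, 6))                # -> all ~0 except the centre tap, which is 1
# Equivalently: after zero-stuffing, only one in every 8 input samples under the kernel is
# non-zero, so each output sample needs ~65/8 multiply-adds instead of 65 (or use a polyphase form).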

There isn't any reason why, with an architecture like the Zynq, you can't do a hybrid of the two - the software renderer configures a hardware rendering engine, for instance by loading a list of pointers and pixel offsets for each waveform. While it would probably end up being the ultimate bottleneck, the present system is doing over 80,000 DMA transfers per second to achieve the waveform update rate, and that is ALL controlled by the ARM. So if that part is eliminated and driven entirely by the FPGA logic, with the ARM just processing the sinx/x offset and trigger position, then the performance probably won't be so bad.
« Last Edit: November 19, 2020, 08:28:03 am by tom66 »
 

Offline sb42

  • Contributor
  • Posts: 42
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #82 on: November 19, 2020, 08:33:22 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #83 on: November 19, 2020, 09:12:04 am »
I think the same design philosophy can support both options. Maybe even on 1 PCB design with assembly options for as long as the software is portable between platforms.

BTW for 500MHz 1.25 Gs/s per channel is enough to meet Nyquist for as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4 channel AFE board for standard High-z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to a maximum of about 2.5GSa/s across all active channels. That would use the bandwidth of two 64-bit AXI Master ports and require a complex memory gearbox to ensure the data gets assembled correctly in the RAM. It would require at minimum a dual-channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing them.

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1/ch3 mode) without a faster memory bus, AXI bus and fabric, so this platform can't make a 4ch 500MHz scope. That pushes the platform towards the UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi), if you want to be able to do that.

If the platform is modular though then there is an option.  The product could "launch" with a smaller FPGA/SoC solution and the motherboard could be replaced at a later date with the faster SoC solution.  There would be software compatibility headaches, as the platforms would differ considerably, but it would be possible to share most of the GUI/Control/DSP stuff, I think.

The really big advantage with keeping it all within something like an UltraScale is that things become memory addressable.  If that can also be done with PCI-e then that could be a winner.  It seems that the Nvidia card doesn't expose PCI-e ports though which is a shame.  You'd need to do it over USB3.0.

With UltraScale though, it would be possible to fit an 8GB SODIMM memory module and have ~8Gpts of waveform memory available for long record mode. 

That would be a pretty killer feature.  You could also upgrade it at any time (ships with a 2GB DDR4 module, install any laptop DDR4 RAM that's fast enough.)
« Last Edit: November 19, 2020, 09:13:59 am by tom66 »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #84 on: November 19, 2020, 09:35:06 am »
Again, its all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes are dynamically aligning the channels based on the [digital] trigger, interpolating the trigger position. Which requires first determining the fractional trigger position (not trivial calculation), and then using that fractional position (at some point in the architecture, could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't be easily bolted on later.
There is no need to use the fractional position when in the situation where 1 wave point is plotting across 1 pixel or less.
I disagree - if you zoom in, it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #85 on: November 19, 2020, 09:38:54 am »
I disagree, if you zoom in then it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.

Well yes, but at that point you are no longer mapping one or more wave points to each pixel. So you can recompute the trigger position when zooming in, and the fractional trigger point can then be used if needed. In the present implementation, the trigger point is supplied as a fixed-point integer with a 24-bit integer component and an 8-bit fractional component.
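For clarity, unpacking that 24.8 fixed-point value is just a shift and a mask - a trivial sketch (assuming the fraction sits in the low 8 bits):

def decode_trigger_point(fixed_24_8):
    # 24-bit sample index in the upper bits, 8-bit sub-sample fraction in the lower bits.
    index = fixed_24_8 >> 8
    frac = (fixed_24_8 & 0xFF) / 256.0     # resolution of 1/256 of a sample
    return index, frac

print(decode_trigger_point(0x00012380))    # -> (291, 0.5): sample 0x123, halfway to the next sample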

Reinterpreting data when zooming in seems to be pretty common amongst scopes. When I test-drove the SDS5104X there was a sinx/x interpolation 'bug' which only became visible at certain timebases once stopped. So the scope would be reinterpreting the data in RAM as needed.

« Last Edit: November 19, 2020, 09:41:46 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #86 on: November 19, 2020, 10:18:02 am »
I think the same design philosophy can support both options. Maybe even on 1 PCB design with assembly options for as long as the software is portable between platforms.

BTW for 500MHz 1.25 Gs/s per channel is enough to meet Nyquist for as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4 channel AFE board for standard High-z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to about a max. of 2.5GSa/s across all active channels.   That would use the bandwidth of two 64-bit AXI Master ports and require a complex memory gearbox to ensure the data got assembled correctly in the RAM.  It would require a minimum dual channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing them.

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1,ch3 mode) without having a faster memory bus, AXI bus, and fabric, so this platform can't make a 4ch 500MHz scope.    Which pushes the platform towards the UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi) if you want to be able to do that.

I've also done some number crunching on bandwidth. A cheap solution to get to higher bandwidths is likely to use multiple (low-cost) FPGAs with memory attached, and PCI Express to transfer the data (at a lower speed) to the processor module. With PCI Express you can likely get rid of the processor inside the Zynq as well. Multiple small FPGAs are always cheaper than one big FPGA.

The Intel (formerly Altera) Cyclone 10 FPGAs look like a better fit than Xilinx's offerings when it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64-bit wide, but 72-bit is possible), there is a hard PCI Express gen2 IP block (x2 on the smallest device and x4 on the larger ones) and 12.5Gb/s transceivers which also support the JESD204B ADC interface. On top of that it seems Intel supports OpenCL for FPGA development, which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing, though). The price looks doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
« Last Edit: November 19, 2020, 12:17:55 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, ogden

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #87 on: November 19, 2020, 12:34:38 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.

There are enough FSMs in the present design to deal with acquisition, stream out, DMA, trigger, etc. and getting those to behave in a stable fashion was quite the task.  Keeping a small real-time processor on the FPGA makes a lot of sense. Of course there are options with soft-cores here but a hard core is preferred due to performance and it doesn't eat into your logic area.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #88 on: November 19, 2020, 01:11:18 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.
For that stuff a simple softcore will do just fine (for example the LM32 from Lattice), but it is also doable from Linux. Remember that a lot of hardware devices on a Linux system have realtime requirements too (a UART for example), so using an interrupt is perfectly fine. A system is much easier to debug if there are no CPUs scattered all over the place. One of my customer's projects involves a system which has a softcore inside an FPGA and a processor running Linux. In order to improve and simplify the system, a lot of functionality is being moved from the softcore to the main processor.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #89 on: November 19, 2020, 02:31:54 pm »
The Intel (formerly Altera) Cyclone 10 FPGA's look like a better fit compared to Xilinx' offerings where it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64bit wide but 72 bit is possible), there is a hard PCIexpress gen2 IP block (x2 on the smallest device and x4 on the larger ones) and 12.5Gb/s transceivers which also support the JESD204B ADC interface. On top of that it seems Intel supports OpenCL for FPGA development which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing though). The price look doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
You forgot to mention that you will have to pay 4k$ per year in order to be able to use them :palm:

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #90 on: November 19, 2020, 02:32:56 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #91 on: November 19, 2020, 02:33:32 pm »
Sigh.  At least I can build for Zynq using Vivado WebPACK.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #92 on: November 19, 2020, 02:38:22 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.
I'll take your word for it, but I still wonder if you can elaborate a bit more on why these devices are a bad choice.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #93 on: November 19, 2020, 02:54:39 pm »
I'll take you word for it but still I wonder if you can elaborate a bit more about why these devices are a bad choice.
The LP subfamily, which can be used with the free version of their tools, doesn't even have memory controllers  :palm:
The GX subfamily is behind a heavy paywall (4k$ a year).
You are better off using Kintex-7 devices from Xilinx. The lower-end ones (70T and 160T) can be used with free tools, and a license for the 325T can be purchased with a devboard and subsequently used for your own designs (a Xilinx device-locked license allows using that part in any package and speed grade, not necessarily the one that's on the devboard, and it's a permanent license, not a subscription). The cheapest Kintex-7 devboard I know of that ships with a license is Digilent's Genesys 2 board for $1k, and you can find 325T devices in China for 200-300$ a pop, as opposed to Digikey prices of 1-1.5k$ a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts - it won't be as cheap as you can get them in China, but they will be fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad-hoc - they appear on the market, they sell their stock, and they disappear forever. These devices provide up to a 64/72-bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5 Gbps (depending on package and speed grade), and all of that in convenient 1 mm pitch BGA packages with 400 or 500 user IO balls, so you can connect a lot of stuff to them.
But most importantly, the Kintex-7 fabric is significantly faster than the Artix-7/Spartan-7 one, which in turn is faster than anything Intel offers in the Cyclone family.
« Last Edit: November 19, 2020, 03:00:37 pm by asmi »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #94 on: November 19, 2020, 03:01:06 pm »
I'll take you word for it but still I wonder if you can elaborate a bit more about why these devices are a bad choice.
LP subfamily, which can be used with free version of their tools, doesn't even have memory controllers  :palm:
GX subfamily is behind a heavy paywall (4k$ a year).
You are better off using Kintex-7 devices from Xilinx, lower end ones (70T and 160T) can be used with free tools, license for 325T can be purchased with a devboard and subsequently used for your own designs (Xilinx device-locked license allows using that part in any package and speed grade, non necessarily the one that's on a devboad, and it's a permanent license, not subscription), the cheapest Kintex-7 devboard that I know of that ships with a license is Digilent's Genesys 2 board for $1k, and you can find 325T devices in China for 200-300$ a pop, as opposed to Digikey prices of 1-1.5k$ a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts - it won't be as cheap as you can get them in China, but it will be a fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad-hoc - they appear on a market, they sell their stock, and they disappear forever. These devices provide up to 64/72bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5 Gbps (depending on package and speed grade), and all of that in convenient 1 mm pitch BGA packages with 400 or 500 user IO balls so you can connect a lot of stuff to it.

For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices). Even with the $4k subscription it doesn't take selling a lot of boards to break even.
« Last Edit: November 19, 2020, 03:04:11 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #95 on: November 19, 2020, 03:18:44 pm »
For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices).
I'm not interested in discussing spherical horses in a vacuum. I prefer a practical approach. And it tells me that for 4k$ a year I can buy quite a few Kintex devices, which can be used with free tools. Besides, even the top-of-the-line GX devices fall short of what the 325T offers, while being priced similarly to a 325T from China with its relatively obtainable license. If we stick to free versions of the tools, the 160T offers quite a bit of resources - up to 8 12.5 Gbps MGTs, the same 64/72-bit 933 DDR3 interface, an up-to-x8 PCI Express 2.0 link (PCIe Gen 3 is possible, but you have to roll your own or buy commercial IP) and all the other goodies of the 7 series family.

Even with the $4k subscription it doesn't take selling a lot of boards to break even.
Did you forget the part that it's an open source project? That means tools must be free.

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #96 on: November 19, 2020, 03:46:27 pm »
I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.

And the manufacturer could decide to stop production of that model at any moment and you'll be back to square one.
« Last Edit: November 19, 2020, 03:55:10 pm by Fungus »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #97 on: November 19, 2020, 08:33:07 pm »
Did you forget the part that it's an open source project? That means tools must be free.
Some tradeoff needs to be made here. If requiring free tools increases the price of each unit by several hundred US dollars then that will be an issue for adoption of the platform in general. For example: one of my customers has invested nearly 20k euro in tooling to be able to work on an open source project. Something else to consider is risk versus free tools. For example: the commercial PCB package I use can do impedance and cross-talk simulation of a PCB and has extensive DRC checks for high speed designs. Free tools like KiCad are not that advanced, so they need more time for manual checking and/or pose a higher risk of an error in the board design which needs an expensive re-spin. At some point software which costs several $k is well worth it just from a risk management point of view. Developing hardware is expensive. IIRC I have sunk about 3k euro into my own USB oscilloscope project (which wasn't a waste even though I stopped the project).

Anyway, I think your suggestion of the Kintex 70T (XC7K70T) is a very good one. I had overlooked that one. Price-wise it seems to be on par with the Cyclone 10CX085, and it can do the job as well.
« Last Edit: November 19, 2020, 08:38:51 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #98 on: November 19, 2020, 09:01:57 pm »
The Zynq 7014S has 40k LUTs + an ARM processor + a hard DDR controller. 1 unit starting from $89 USD.

IMO that wins over the 65k LUTs + no hard CPU core + no DDR3 controller in the Kintex 70T -- and in my experience the MIG easily eats up 20% of the logic area for just a single-channel DDR3 controller, so I'm not convinced that would be a worthwhile trade-off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM) you're stepping up several grades of 'pure FPGA' to get there.
 
« Last Edit: November 19, 2020, 09:05:39 pm by tom66 »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #99 on: November 19, 2020, 09:10:04 pm »
The present oscilloscope fits in ~20% of the logic area of the 7014S.  That's including a System ILA which uses about 10% of logic area. I imagine the 'render engine' would use about 10-15% of logic area if implemented on the FPGA fabric.  We're not constrained by the fabric capacity right now - but more by the speed of the logic.  And a lot of that is down to experience in timing optimisation which is an area I have a lot to learn about.

Moving up a speed grade may be more beneficial than moving to a bigger device.
« Last Edit: November 19, 2020, 09:12:12 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #100 on: November 19, 2020, 09:28:18 pm »
The Zynq 7014S has 40k LUTs + ARM processor + hard DDR controller fabric.  1 unit starting from $89 USD.

IMO that wins over the 65k LUTs + no hardcore CPU + no DDR3 controller in the Kintex 70T -- and in my experience the MIG eats up easily 20% of the logic area for just a single channel DDR3 controller, I'm not convinced that would be a worthwhile trade off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM) you're stepping up several grades of 'pure FPGA' to get there.
Well, that was the old MIG, and that made me roll my own (way more resource-efficient) DDR2 controller a long time ago. The hard IP DDR3 controllers in modern Xilinx FPGA devices, however, don't eat any logic. Realistically you can't create a DDR3 controller running at hundreds of MHz from generic IOB cells anyway; the timing needs to be trained etc. And since the Kintex-7 series is related to the Zynq series, it has exactly the same (hard IP) memory controller as the Zynq has.

The distributed oscilloscope design I made for one of my customers, based on a Spartan-6 LX45T (which has 44k logic cells), contains an LM32 soft-core, timing logic, a network interface + hardware firewall (don't ask me why that is in there), various peripherals and of course the oscilloscope part. That design uses about half of the logic without much effort spent on optimisation. The smallest Kintex part has 65k logic cells. I really don't think running out of logic resources is going to be an issue.
« Last Edit: November 19, 2020, 09:39:30 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #101 on: November 19, 2020, 09:33:17 pm »
The Kintex-7 doesn't have a hard DDR3 controller!

Spartan-6 and such do, but that was migrated over to the fabric in the later series of devices.

I haven't prototyped it on a Kintex-7, but on an Artix-7 the MIG used 20% of the device. Admittedly that was a smaller 7A35T, but that is still ~10% of the logic on the bigger Kintex device, assuming it maps similarly - I don't know what the effect of moving to a 32-bit controller would be, but I imagine it would further increase device utilisation. There is hardware acceleration for it on the FPGA fabric - the SERDES drivers for instance are optimised for DDR controllers - but it's still a 'soft IP' at heart.

The biggest limitation of the Zynq (when it comes to RAM) is that in the standard speed grade the max DDR3 frequency is 533MHz. You have to go up a speed grade to get to 667MHz (or write the PLL registers and overclock, but that is asking for trouble).
« Last Edit: November 19, 2020, 09:37:02 pm by tom66 »
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #102 on: November 19, 2020, 09:38:18 pm »
IMO that wins over the 65k LUTs + no hardcore CPU + no DDR3 controller in the Kintex 70T -- and in my experience the MIG eats up easily 20% of the logic area for just a single channel DDR3 controller, I'm not convinced that would be a worthwhile trade off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM) you're stepping up several grades of 'pure FPGA' to get there.
You missed the part that the Kintex fabric is significantly faster than the Artix fabric (close to 50% in my experience: a design that barely closes at 180 MHz on Artix fabric easily closes at 280 MHz on Kintex). And the MIG in the 160T part in the FF package can implement a 64-bit 933MHz DDR3 interface, while the Zynq only does 32-bit at 533 MHz. So not only are you getting a 2x wider data bus, you also get 400 MHz more frequency. All in all, it's about 3.5x the memory bandwidth. You also get 10G transceivers as opposed to 6G.
That said, you can get both in the Zynq-030 - it's got the same 2 cores as your device, you get 125K LUTs of fast Kintex fabric, you can implement 64-bit DDR3 *in addition* to what the Zynq provides, and you get 4 10G transceivers. And it's also included in the free version of Vivado.
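The ~3.5x figure checks out if you multiply it through (peak numbers only, and DDR means two transfers per clock):

kintex_bw = (64 / 8) * (2 * 933e6)   # 64-bit DDR3-1866 -> ~14.9 GB/s peak
zynq_bw   = (32 / 8) * (2 * 533e6)   # 32-bit DDR3-1066 -> ~4.3 GB/s peak
print(kintex_bw / zynq_bw)           # ~3.5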
« Last Edit: November 19, 2020, 09:41:46 pm by asmi »
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #103 on: November 19, 2020, 09:50:44 pm »
I haven't prototyped it on a Kintex-7 but on an Artix-7, the MIG used 20% of the device.
The MIG typically consumes 6 to 8k LUTs for DDR3 (quite a bit less for DDR2), and it obviously doesn't scale with the device. Just for the hell of it I created an absolute monster of a controller - a dual-channel SODIMM controller for an S100 device - and it took 24K LUTs. That is a freaking 128-bit-wide data bus!
I personally think the K160T is the best midrange device - it can be used with the free tools, it lets you implement a 64/72-bit DDR3 interface with either discrete components or a SODIMM module, and it can run it pretty fast - up to 933 MHz for a single-rank SODIMM module. You also get up to 8 10G transceivers, either for talking to really high-end ADCs via JESD204B, or to connect to your main processing block, or both.
« Last Edit: November 19, 2020, 09:57:48 pm by asmi »
 
The following users thanked this post: tom66

Offline james_s

  • Super Contributor
  • ***
  • Posts: 16104
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #104 on: November 19, 2020, 09:53:45 pm »

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.

Obviously there are large differences, but the benefit is the same. Open source firmware can be improved over time as deficiencies are corrected and new features are added. The market for oscilloscopes is vastly smaller than that for consumer routers, but the market for high-end, DIY, designed-from-scratch open source oscilloscopes is a tiny fraction of the already small market for oscilloscopes overall.

The main thing that would appeal to me about a commercial scope is the packaging: it's a nice tidy form factor with nice buttons and knobs and everything in a professional molded housing, and all of the front end hardware is there; the place they are usually most lacking is the software side of things. Ultimately I suppose this whole project is really only interesting to me from an academic standpoint - it's interesting to see the inner workings of a modern DSO and it's a truly impressive achievement - but if I'm going to spend $500+ I'd buy a TDS3000 and hack it to 500 MHz, or a Siglent or Rigol, and put up with potentially buggy firmware.

Seems like this debate was also beaten to death not too long ago: very few people are going to pay as much or more for an incomplete open source device than it costs to buy a ready-made off-the-shelf instrument that works right out of the box, just for the sake of it being open source. The main advantage of open source is cost - anyone can duplicate an open source project, so competition drives cost down while innovation continues to deliver incremental improvements to the design. If the cost is not significantly lower than a commercial product of similar performance then the market is very, very limited: that tiny segment of the population with the knowledge and desire to tinker and improve upon it. Most people don't care; most users of open source projects do not tinker with the source themselves, even if they like the idea that they technically could if they wanted to.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #105 on: November 19, 2020, 10:00:58 pm »
The Kintex-7 doesn't have a hard DDR3 controller!

Spartan-6 and such do, but that was migrated over to the fabric in the later series of devices.

I haven't prototyped it on a Kintex-7 but on an Artix-7, the MIG used 20% of the device. Admittedly that would have been a smaller 7A35T, but that is still 10% of logic used on the big Kintex device assuming it maps similarly - I don't know what the effect of moving to a 32-bit controller would be but I imagine it would further increase device utilisation.  There is hardware acceleration for it on the FPGA fabric - the SERDES drivers for instance are optimised for DDR controllers - but it's still a 'soft IP' at heart.
After reading Xilinx UG586 I think you are right-ish (the implementation seems to be a hybrid), but I still think a lot of the logic the MIG creates could be removed, especially if data is read/written (mostly) sequentially rather than randomly. I'm not sure how the complexity of the Wishbone bus (which is simple and I know very well) compares with AXI (which I know nothing about).
« Last Edit: November 19, 2020, 10:02:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #106 on: November 19, 2020, 10:02:40 pm »
Well, that was the old MIG and that made me roll my own (way more resource efficient) DDR2 controller a long time ago. The hard IP DDR3 controllers in modern Xilinx FPGA devices however don't eat any logic. Realistically you can't create a DDR3 controller running at hundreds of MHz from generic IOB cells anyway. The timing needs to be trained etc. And since the Kintex 7 series is related to the Zynq series it has exactly the same (hard IP) memory controller as the Zynq has.
1. As was said, there are no hard memory controllers in the 7 series (except in Zynqs).
2. The fabric in the 7 series *is* fast enough to implement a "soft" memory controller, with a little help from HW blocks like the phasers that implement write/read levelling.
3. Different 7 series devices have different fabric, and the difference is quite drastic. Spartan-7 and Artix-7 share the same fabric, Kintex-7 has a faster one, and I don't know about Virtex-7 but suspect it's faster still.
For Zynqs, devices -020 and below have Artix fabric, while -030 and above have Kintex fabric. So they are not all the same either.
 
The following users thanked this post: tom66, nctnico

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #107 on: November 19, 2020, 10:09:47 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
« Last Edit: November 19, 2020, 10:11:42 pm by tom66 »
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #108 on: November 19, 2020, 10:13:56 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)
As far as I know it's the same as in the Kintex UltraScale+, so it should be super fast. Overall the Zynq MPSoCs are great devices; the only two problems with them are price and packages (they are by and large very big, requiring 10-layer PCBs for a full breakout). I would love to use them in my projects, but the price...
Which is why I'm seriously looking at the Zynq-030 - two cores at up to 1GHz, Kintex fabric and 10G transceivers is a great combination. And you can find them in China for a reasonable amount of money.

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #109 on: November 19, 2020, 10:20:48 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
You also need to think about how long it will take to process the data. In my own USB design, data came in at 200Ms/s but the design could process acquired data at over 1000Ms/s. Say you have 4 channels with 500Mpts of memory and a maximum sample rate of 250Ms/s per channel: a memory bandwidth of 1Gs/s would be enough for acquisition purposes. However, you don't want the memory bandwidth to the processing part (whether inside the FPGA or external) to become a bottleneck, especially if that bandwidth has to be shared between sampling and processing (think about double buffering here). Otherwise things like decoding and full-record math will become painfully slow.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #110 on: November 19, 2020, 10:21:23 pm »
Indeed, but the shame about the Zynq-030 is that it's the only such part offered in the FBG484 package.

So, if you go for the cheapest 484 ball part, you are stuck with the -030, no route to upgrade.

You can go for the 676-ball part, which also comes in -035 and -045 variants. But now the BOM cost is higher, and you might not need the extra IO.  (The 400-ball package with the extra bank of the 7020 is enough for 2 x ADC interfaces plus plenty of IO to spare, with an 8-layer board to route it all out.)

I'm still not convinced I need a Kintex part.  The current 7014S solution has been tested up to 1.25GSa/s, which is the maximum I could get from the HMCAD1511 before it lost PLL lock.  At least some of that is down to the PLL clock not having sufficient amplitude to meet the HMCAD ADC's specification, even at 1GHz.   That is also without any line training on the SERDES inputs and with the internal logic running at 180MHz, the ADC front end running at 1/8th of the sample clock.  I do encounter timing issues above 200MHz causing periodic AXI lockup conditions, though; I believe that is down to a lack of timing optimisation (currently a -4ns worst negative slack).

Really it depends on the goals for this project; I never considered much more than 2.5GSa/s, which could be achieved with a second ADC port and a 128-bit internal bus on a Zynq 7020.    Moving to a Kintex might allow that to run at 250MHz with a 64-bit port, but it's not as if there is a lack of fabric capacity now for a large bus.
« Last Edit: November 19, 2020, 10:23:50 pm by tom66 »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #111 on: November 19, 2020, 10:27:52 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
You also need to think about how long it will take to process the data. In my own USB design, data came in at 200Ms/s but the design could process acquired data at over 1000Ms/s. Say you have 4 channels with 500Mpts of memory and a maximum sample rate of 250Ms/s per channel: a memory bandwidth of 1Gs/s would be enough for acquisition purposes. However, you don't want the memory bandwidth to the processing part (whether inside the FPGA or external) to become a bottleneck, especially if that bandwidth has to be shared between sampling and processing (think about double buffering here). Otherwise things like decoding and full-record math will become painfully slow.

Yes, so that is the goal: a 32-bit DDR3 interface @ 667MHz, assuming a standard Zynq 7020 in the enhanced speed grade, gets us up to 5.3GB/s.

100,000 wfm/s at ~600 points per waveform is only a read-back bandwidth of 60MB/s; it's mostly the write bandwidth you need.  (You need the write bandwidth for the pre-trigger, assuming you want a pre-trigger × n-waveforms product bigger than the BlockRAM can hold.)

Read bandwidth starts trending higher, strangely enough, as you go to a longer timebase and the blind time falls as a fraction of the active acquisition time.  At that point the current limitation is the CSI-2 bus to the Pi, and the Pi itself has memory bandwidth issues.

I have the capability to implement a 4-lane CSI-2 peripheral, which doubles the bandwidth to around 3.2Gbit/s (400MB/s).  At that point we are nearing the capacity of PCIe or USB3, although only in one direction.

One reason to go to a 32-bit interface is that we then have the performance available to do a write-read-DSP-write-read cycle -- we can start using the DSP blocks to work on the waveform data we just acquired,  and then render it for the next frame.  That would allow the DSP fabric to be used in a (pseudo-)pipelined manner.    I'm working on a concept for the render engine on the FPGA to see how practical it would be.
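
To put rough numbers on that budget, a minimal Python sketch of the same arithmetic (the figures are just the ones above - one channel at 1GSa/s of 8-bit samples - and the peak ignores refresh and bank-turnaround overheads):

bus_bits   = 32
mts        = 1333e6                  # transfers per second (667 MHz DDR)
peak_bw    = bus_bits / 8 * mts      # bytes/s: ~5.3 GB/s theoretical peak

write_bw   = 1e9                     # 1 GSa/s of 8-bit samples -> 1 GB/s of acquisition writes
wfm_rate   = 100_000                 # waveforms per second read back for rendering
wfm_points = 600                     # points per rendered waveform
read_bw    = wfm_rate * wfm_points   # ~60 MB/s of read-back

headroom = peak_bw - write_bw - read_bw
print(f"peak {peak_bw/1e9:.1f} GB/s, writes {write_bw/1e9:.1f} GB/s, "
      f"reads {read_bw/1e6:.0f} MB/s, headroom {headroom/1e9:.1f} GB/s")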
« Last Edit: November 19, 2020, 10:31:04 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #112 on: November 19, 2020, 10:55:38 pm »
Really it depends on the goals for this project; I never considered much more than 2.5GSa/s, which could be achieved with a second ADC port and a 128-bit internal bus on a Zynq 7020.    Moving to a Kintex might allow that to run at 250MHz with a 64-bit port, but it's not as if there is a lack of fabric capacity now for a large bus.
Maybe it is just a cost versus benefit (future growth) question. A Kintex would open the option to go for a design which has 4x 1Gs/s (maybe 1.25Gs/s to break the magic 500MHz barrier) without doing a major re-design of the PCB and internal FPGA logic.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7672
  • Country: us
    • SiliconValleyGarage
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #113 on: November 19, 2020, 11:39:38 pm »
Cool! A couple of ideas:
- memory on a DIMM so it is user-expandable
- CM4, which you already mentioned
- each acquisition channel as a PCI card (not necessarily that form factor). The beauty of that is you can make a machine with 1, 2, 3... however many channels you need - just plug in more cards. There are PCI hub chips available.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #114 on: November 19, 2020, 11:49:20 pm »
The thought of using PCI Express to aggregate more channels across separate boards has crossed my mind too, but you'll need separate FPGAs on each board and a way to make cross-channel triggers happen. You quickly end up needing to time-stamp triggers and correlate them during post-processing because the acquisition isn't fully synchronous.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #115 on: November 20, 2020, 12:11:59 am »
Indeed, but the shame about the Zynq-030 is that it's the only such part offered in the FBG484 package.
Why limit yourself to 484? You can fully break out the FFG676 package on a 6-layer PCB, since it's a 1 mm pitch package and you can fit two traces between pads and breakout vias. Take a look at the device diagram: 3 rows go out on the top layer, 3 more on the first internal signal layer, 2 more on the second internal signal layer, and the 2 final rows on the bottom layer (since the last two rows are only partially populated, you will have enough space for decoupling caps). You only need 0.09 mm traces and spacings on the top layer - because the pads are oversized to 0.53 mm as per the Xilinx recommendation - but you can get away with 0.5 mm pads, which allows 0.1 mm traces/spacings. The other layers are even easier because you can use 0.2/0.4 mm vias, which leaves 0.6 mm of space for two traces. The fab I use for 6-layer boards - WellPCB - can do 0.08 mm traces, so you aren't even at their limit, and I've done enough boards with them to be confident that they will deliver 0.08 mm traces with no issues.

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #116 on: November 20, 2020, 12:33:04 am »
One reason to go to a 32-bit interface is that we then have the performance available to do a write-read-DSP-write-read cycle -- we can start using the DSP blocks to work on the waveform data we just acquired,  and then render it for the next frame.  That would allow the DSP fabric to be used in a (pseudo-)pipelined manner.    I'm working on a concept for the render engine on the FPGA to see how practical it would be.
Maybe you should consider adding another DDR3 memory interface on the PL side. That way you have the acquisition memory separated from the processing memory: data comes from the ADC straight into acquisition RAM, and you then build a processing pipeline from acquisition RAM into your main RAM. This is how most commercial oscilloscopes work - if you look closely at teardowns, you will see those separate memory devices.
It will also let you create more sophisticated triggers, because they can work on potentially a lot of samples right in acquisition RAM, and in doing so they won't consume bandwidth on your PS-side memory - which, as you said, is a fixed quantity that can't easily be scaled, while PL-side memory bandwidth can (the CLG484 package has banks 33, 34 and 35 fully bonded out, which allows a 64-bit memory interface). Think about things like triggering on receipt of a certain byte or bytes on a peripheral bus like SPI or UART.
« Last Edit: November 20, 2020, 12:40:18 am by asmi »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #117 on: November 20, 2020, 04:05:39 am »
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes are dynamically aligning the channels based on the [digital] trigger, interpolating the trigger position. Which requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't be easily bolted on later.

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, doing sinc interpolation is a significant processing cost (a time/area tradeoff), and just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable alone, it's how to balance all those demands within the limited resources, hence suggesting that software-driven rendering is probably a good balance for the project as described.

I consider time shifting data by fractional samples very trivial. And sinc(t) is not that expensive if you know how.

Data *must* be acquired at the fastest sample rate and downsampled in the FPGA.  If this is not done the data will be aliased, a depressingly common issue at all price tiers.

Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.

I have been over the DSP pipeline so many times I've lost count.  The only thing I am sure of is I will think of another improvement the next pass.

Reg
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #118 on: November 20, 2020, 06:31:41 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate, very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them and you're both constantly talking away from that point.
 
The following users thanked this post: rf-loop, tautech

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 3612
  • Country: cn
  • Born in Finland with DLL21 in hand
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #119 on: November 20, 2020, 07:32:07 am »
Very interesting discussion and project.

Without going into details and complex explanations:

Before starting anything, IMHO, it must be clearly decided and designed that the whole digital trigger engine sits right after the ADC, always working at the ADC's full native sample rate, and able to do so repeatedly at the full designed capture speed - and this is a really busy place. It needs hardware, and it needs not-so-complex but fast brute force to do it well.
For example, Rohde & Schwarz have done small miracles here in their prime RTO models.

This "trigger engine" also needs to include as fine an interpolation between the raw ADC samples as possible, in real time. Fine-adjusting the position for display is a simple, secondary thing. This architecture selection and decision needs to be made, and held to, before so much else; later it is extremely difficult or even impossible in practice.
This kind of trigger engine needs to include many functions, of course, starting from the simplest possible edge trigger and going up to very complex, advanced, intelligent "shape recognition" triggers... it can even be self-learning for anomalies in the signal. A clever, intelligent, full-speed trigger is the only road to The advanced oscilloscope.

The trigger is the key to intelligent and powerful glitch hunting - not so much the over-advertised wfm/s figure, which was perhaps originally pushed by Keysight because they found a way to advertise it, with everything framed around rare "glitch hunting".  If a clever person has a clever scope and the job is hunting some rare glitch, they do not need a scope capturing more than once per second IF the scope is simply waiting for the glitch to occur and knows what it is hunting (waiting) for. Intelligence is, IMHO, better than brute force like an enormous repeated capture rate. Put the scope to work and let the humans do more important things than desperately watching the scope screen and waiting.

If the trigger engine finds a match it needs to trigger, and it still needs to do it fast - not by finding it in acquisition memory afterwards, but in real time on the continuous ADC stream, because that is the way to eliminate the blind-time trap in glitch hunting. Of course, there needs to be some knowledge first of what kind of anomaly we are hunting... that is just one example of why it is difficult, and why the whole thing needs to sit on the direct ADC output stream and be capable of handling it. There are no free lunches if you want to do it well.
Of course it is natural that some compromises are needed. But make more and more compromises and soon it has dropped to el-cheapo scope level.

If the full trigger engine is not in this position and extremely well made, all the rest is useless playing and a road from one problem to the next. The main heart of a good oscilloscope is this trigger engine, and it is the first priority to design. The things that come after it - acquisition memory, display and so on - are important, but if all of those are nice while the trigger engine is "easily made", the whole thing ends up in the bin... a nice project that taught you a lot.

One way or another... IMHO, the trigger engine right after the ADC is the heart of the whole scope. It is what makes a good scope or a poor one. This first came up decades ago between Tek and HP.  It does not work to first build a scope that produces a nice working image and only afterwards start thinking "oh, it needs this trigger... oh, it also needs that trigger".  No - it is the FIRST thing that needs to be designed, and designed deeply.  First there needs to be a well-made, high-performance trigger engine (partly software and partly hardware) that can do everything that will be needed later.

But from what I have seen of this project so far... amazing work for one person, really amazing.
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Huawei HarmonyOS 2.0  |  ArcFox Alpha S
 
The following users thanked this post: tom66, Someone, nuno, 2N3055, jxjbsd, YetAnotherTechie

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 12716
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #120 on: November 20, 2020, 08:06:13 am »
This "trigger engine" need include also as perfect fine interpolation as possible between ADC raw samples as possible in real time.

Not true. All you need to know is that one sample is below the trigger level and the next sample is above it (for simple rising edge trigger).

You can do all the fine interpolation much later when you go to display the trace on screen.

Which approach is better? That's harder to say.

 
« Last Edit: November 20, 2020, 08:11:36 am by Fungus »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #121 on: November 20, 2020, 08:10:21 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate, very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them and you're both constantly talking away from that point.

The trigger is running at the full sample rate (1GSa/s) on this prototype.   Every sample is capable of generating a trigger.

Realignment is not done for every trigger because as stated that can be done once the waveform is captured - based on the difference between the sample and the ideal trigger point.  For instance, if you want to trigger at 8'h7f but you actually got a trigger at 8'h84 then you know it is 5 counts off, so look up in your table for that given timebase for the pixel offset.

This operation only needs to be done at the waveform rate of the scope - e.g. 20k/100k times a second - and is part of the render engine, not the capture engine.
 

Offline gf

  • Frequent Contributor
  • **
  • Posts: 518
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #122 on: November 20, 2020, 08:41:38 am »
All you need to know is that one sample is below the trigger level and the next sample is above it (for simple rising edge trigger).

Not generally guaranteed, e.g. for signals close to Nyquist. If two adjacent samples are, say, 0.0 and 0.1, the analog ADC input signal can still rise to 1.0 and come back down between the samples without violating the sampling theorem. In that case your algorithm would completely miss an edge trigger at level 0.5, even though the original signal does cross that level.
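
A minimal Python sketch of this, using a tone just below Nyquist (the 0.499·fs frequency is only an example):

import numpy as np

fs = 1.0                      # normalised sample rate
f  = 0.499 * fs               # tone just below Nyquist (fs/2)

n = np.arange(2)
samples = np.sin(2 * np.pi * f * n / fs)      # what the ADC sees: ~0.000 and ~0.006
t = np.linspace(0, 1 / fs, 1001)
between = np.sin(2 * np.pi * f * t)           # the analogue signal between those two samples

print("adjacent samples:", np.round(samples, 4))
print("true peak between them:", round(float(between.max()), 4))   # ~1.0
# A naive "sample[k] < level <= sample[k+1]" test at level 0.5 never fires here,
# even though the analogue input crosses 0.5 twice between the two samples.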
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #123 on: November 20, 2020, 08:52:15 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate, very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them and you're both constantly talking away from that point.

The trigger is running at the full sample rate (1GSa/s) on this prototype.   Every sample is capable of generating a trigger.

Realignment is not done for every trigger because as stated that can be done once the waveform is captured - based on the difference between the sample and the ideal trigger point.  For instance, if you want to trigger at 8'h7f but you actually got a trigger at 8'h84 then you know it is 5 counts off, so look up in your table for that given timebase for the pixel offset.

This operation only needs to be done at the waveform rate of the scope - e.g. 20k/100k times a second - and is part of the render engine, not the capture engine.
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

And yes, the shift can be applied at render time, which complicates that processing (and how the acquisition filtering further changes alignment) while needing the offset value forwarded with the matching acquisition. Closely related to channel skew adjustment/trimming (which can destroy many of the naive assumptions of memory access patterns in a multichannel scope).
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #124 on: November 20, 2020, 09:08:31 am »
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

Yes, and every raw sample is piped into RAM and stored; nothing about the trigger data is thrown away.  1GSa/s sampling rate and 1GSa/s memory write rate.  The samples that generate the trigger are stored as well as pre- and post- trigger data.  No filtering, no downsampling at this point.

The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).  It will never be more than that fraction because then the next comparator would have generated the trigger instead.    When you are at the described mode where 1ns < 1pixel,  you can ignore this data because the waveform points aren't going to be plotted fractionally on the screen.  This is only needed for sinx/x modes.   You'd need LUTs for different timebases or channel configurations (1-4ch), but this is something that could be generated at 'production time' as part of the calibration process.
   
A more complex trigger engine could use several samples to inform a slope calculation.  Since the Zynq ARM has full visibility of the samples, it's possible to do a calculation like this before each waveform is sent to the interpolator or the rendering engine.  I don't think that would be terribly difficult to do either; although it would be more complex than what I've suggested, it might be less sensitive to noise on the trigger point.
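
As a minimal Python sketch of that kind of software-side correction - a simple two-point slope around the threshold crossing; purely illustrative, and a real implementation might use more points or the per-timebase LUT described above:

import numpy as np

def fractional_trigger(samples, level):
    """Return the first rising-edge crossing of `level` as a fractional sample index,
    interpolating linearly between the two samples that straddle the threshold."""
    s = np.asarray(samples, dtype=float)
    idx = np.where((s[:-1] < level) & (s[1:] >= level))[0]
    if idx.size == 0:
        return None
    k = int(idx[0])
    frac = (level - s[k]) / (s[k + 1] - s[k])     # fraction of one sample period, 0..1
    return k + frac

# Example: a ramp, trigger level 8'h7f; the first sample at/above the level is 8'h84 (5 counts over).
wave = [0x60, 0x6e, 0x7a, 0x84, 0x90]
print(fractional_trigger(wave, 0x7f))             # 2.5 -> the crossing sits half a sample before index 3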

I would say that once you are operating at an input frequency >> than the rating of the scope, you can't rely on the trigger being reliable, just as you can't rely on the amplitude being reliable.  My DS1000Z falls over on Fin > 130MHz, even though the amplitude is stable.
« Last Edit: November 20, 2020, 09:10:17 am by tom66 »
 
The following users thanked this post: Zucca

Offline Zucca

  • Supporter
  • ****
  • Posts: 3486
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #125 on: November 20, 2020, 09:34:03 am »
You guys are too smart for me, it's hard to follow.

Anyway, some input from my side:

1) Avoid anything which has Broadcom or Qualcomm chips = sporadic pain in the ass guaranteed.
2) As much RAM as possible
3) SATA port for proper SSD?

Great work!
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 
The following users thanked this post: egonotto

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #126 on: November 20, 2020, 10:03:30 am »
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

Yes, and every raw sample is piped into RAM and stored; nothing about the trigger data is thrown away.  1GSa/s sampling rate and 1GSa/s memory write rate.  The samples that generate the trigger are stored as well as pre- and post- trigger data.  No filtering, no downsampling at this point.

The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).  It will never be more than that fraction because then the next comparator would have generated the trigger instead.    When you are at the described mode where 1ns < 1pixel,  you can ignore this data because the waveform points aren't going to be plotted fractionally on the screen.  This is only needed for sinx/x modes.   You'd need LUTs for different timebases or channel configurations (1-4ch), but this is something that could be generated at 'production time' as part of the calibration process.
I'm afraid this approach is too simplistic. The trigger comparator needs threshold levels to filter out noise, which also means you need to use multiple points (at least 4, but more is better) to determine the actual trigger point (i.e. where the signal crossed the trigger level). A problem on many digital-trigger oscilloscopes (Siglent is a good example) is that the trigger point becomes a focal point in the centre while the edges of the signal smear out.

I think in the end it may turn out that doing the comparator part digitally and the positioning in software (based on separately stored samples around the trigger point) is necessary.
« Last Edit: November 20, 2020, 10:14:05 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #127 on: November 20, 2020, 10:09:27 am »
The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).
This is getting silly; you still can't provide a mathematical example of your proposed method. How can the slope be known a priori? With an ideal AFE filter, any frequency (and hence slope) below the cutoff could be present around the trigger point.

Even with the trivial example of a perfect sine wave of constant frequency being sampled perfectly (and below Nyquist), shifting it with a DC offset while keeping the trigger threshold static presents different slopes at the trigger point. Just the phasing of the sample points, when the signal frequency isn't rationally related to the sampling frequency, causes significant shifts and jitter as the waveform approaches the Nyquist rate.
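
A small numeric illustration (a Python sketch: one 10MHz tone sampled at 1GSa/s, three DC offsets, a fixed trigger level - the values are arbitrary):

import numpy as np

fs, f, level = 1e9, 10e6, 0.5          # 1 GSa/s, 10 MHz tone, fixed trigger level
t = np.arange(200) / fs
for dc in (0.0, -0.4, -0.49):
    x = np.sin(2 * np.pi * f * t) + dc
    k = int(np.where((x[:-1] < level) & (x[1:] >= level))[0][0])
    slope = x[k + 1] - x[k]            # local two-point slope at the crossing, per sample
    print(f"DC {dc:+.2f}: crossing at sample {k}, slope {slope:.3f}/sample")
# The slope at the crossing changes by roughly 5x across the three offsets,
# so it cannot be tabulated ahead of time from the trigger level alone.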
 
The following users thanked this post: rf-loop, 2N3055

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #128 on: November 20, 2020, 12:11:15 pm »
I think there's a misunderstanding here as to how this would work: there's no need to do any 'a priori' calculation, as *everything is done* after we have the *whole waveform captured and stored in RAM*.  We trigger 'roughly' and then correct the trigger point using the data we have around the trigger point.  That latter stage is software, performed well after the samples are gathered, just before any interpolation or rendering/plotting is performed.   I believe it could be done using one or at most two data points to calculate the slope of the signal at that point.  Let me go away and model it and see how wrong I am ... I've only done the calculations on paper so far, so I'm prepared to admit I could be wrong here.

The present application does, in fact, support noise filters on the trigger, but the high and low thresholds are calculated beforehand as centred around the ideal trigger point.  Hysteresis can be set to any value within reason (1..max_sample).   So on a rising edge we trigger on the high trigger level,  and only when the signal goes below the low trigger level do we generate a falling edge.  Therefore, we can work out the level based on the type of edge that we intended to trigger on;  again, we don't need to store anything other than the waveform data (and the trigger edge that we used, in case we have a trigger engine that alternates edge types.)   
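
As a rough model of that hysteresis behaviour, a minimal Python sketch (a Schmitt-style rising-edge detector; the thresholds and sample values here are arbitrary, not the actual trigger block):

def rising_edge_triggers(samples, level, hysteresis):
    """Schmitt-style rising-edge detector: fire when the signal crosses the high
    threshold, and re-arm only once it has fallen below the low threshold."""
    hi = level + hysteresis / 2
    lo = level - hysteresis / 2
    armed = True
    hits = []
    for i, s in enumerate(samples):
        if armed and s >= hi:
            hits.append(i)          # rising-edge trigger event
            armed = False
        elif not armed and s <= lo:
            armed = True            # dropped below the low threshold: re-arm
    return hits

# A noisy edge hovering around the threshold produces one trigger, not several.
print(rising_edge_triggers([10, 40, 60, 55, 62, 58, 90, 20, 70], level=50, hysteresis=20))   # [2, 8]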

There isn't currently any bandwidth filtering on the trigger samples (i.e. LF/HF reject or AC coupling); I'm not certain of the best way to implement that yet.  Those filters may have an effect on the trigger jitter, but their bandwidth tends to be quite low (~50kHz or so), which means an uncorrected trigger jitter of 1ns should be insignificant in those cases.  I would like to see a scope with adjustable filtering for the trigger signal (set the -3dB point for the trigger); I've been discussing this with Reg for some time and we have a few ideas on how to go about it, even on realtime data.

Don't let perfect be the enemy of good.   If this gets realistic trigger jitter ~100ps or less for a 1ns ADC clock, then I'm prepared to accept it as a perfectly sufficient way to correct when interpolation is performed; the jitter would then be less than 1 pixel visible to the user.

The waveforms I have captured so far show "visibly" jitter-free captures down to 50ns/div without any trigger jitter correction, on a variety of complex waveforms as well as simple sine waves.    ~100ps or less of jitter would require the jitter correction to work down to roughly 10x interpolation (5ns/div).  And, the good news is, this scales with the sampling rate: if the ADC is faster, you can still do up to 10x interpolation without too many headaches.  I'm not convinced there is a great deal of benefit in going beyond 10x interpolation (at which point your scope is *really* lying to you about the signal, rather than just misleading you), but if we did, it would need more thought, as ADC noise could start to influence the trigger slope detection, which may require an adjusted algorithm.
« Last Edit: November 20, 2020, 12:13:50 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #129 on: November 20, 2020, 01:42:24 pm »
A few things to look out for:
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point
- it should be possible to trigger on very slow edges as well. Think tens of micro-Volts per second

There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: egonotto

Online tautech

  • Super Contributor
  • ***
  • Posts: 22232
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #130 on: November 20, 2020, 01:49:35 pm »
I would say that once you are operating at an input frequency >> than the rating of the scope, you can't rely on the trigger being reliable, just as you can't rely on the amplitude being reliable.  My DS1000Z falls over on Fin > 130MHz, even though the amplitude is stable.
:o
2-3* rated BW for stable triggering is not an unrealistic expectation IME.

I'd certainly be setting my sights higher with a new design.
Avid Rabid Hobbyist
 

Offline tmbinc

  • Regular Contributor
  • *
  • Posts: 239
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #131 on: November 20, 2020, 02:47:22 pm »
tom66, thank you, this is super impressive work!

A few years ago I've worked a bit on the Siglent SDS1x0xX-E (https://github.com/360nosc0pe/fpga) reverse-engineer/hack. We've got it to a level where we can control the frontends (coupling, attenuation, BW), capture the ADC data, and push that to memory on the PL. On the PS, we had Linux and some test code to pull data out of RAM and display it; eventually the goal was to render into accumulation buffers in blockram (which is what Siglent does, hence the crappy resolution to make it fit), but didn't get that far - we never got further than basically driving the hardware correctly, but that part worked well.

Without going too much into the topic of creating new hardware vs. hacking existing hardware, I think the design shares a lot of the same choices so for an open source oscilloscope, I would be very interested in cooperating and/or potentially porting your code to this platform.

Also, nice work on the CSI-2 interface! How does your CSI-2 Phy look on the FPGA side? Do you need to implement LP support or only high-speed? This is a very elegant, cheap and fast solution for capturing a lot of data into an RPi. (I've so far always used an FT2232H in FIFO mode, but it adds significant cost, and especially on an RPi3 the USB alone eats a full CPU core due to the bad USB controller design.) I assume receiving data on CSI-2 doesn't take up a lot of CPU resources on the RPi if you can DMA large blocks.
 
The following users thanked this post: tom66

Offline tv84

  • Super Contributor
  • ***
  • Posts: 2377
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #132 on: November 20, 2020, 03:08:39 pm »
Getting interesting...  :popcorn:
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #133 on: November 20, 2020, 04:15:50 pm »
With 100MHz bandwidth and 1GHz sampling it is customary to have 5ns/div with no visible trigger jitter. The Picoscope with those specs has a specified trigger jitter of 3 ps RMS. That should be the target.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #134 on: November 20, 2020, 05:39:46 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.     I suppose it would be plausible to record 16 words of data either side the trigger if it so happens that the trigger is outside of the acquisition window,  but I'm not sure if this additional complexity would be worth it for a fairly unusual use case.  I will consider it, though.
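
To sketch that mechanism, a minimal Python model of pre/post-trigger capture using a ring buffer (the depths, names and the toy trigger are illustrative only, not the real acquisition engine):

from collections import deque

def capture(stream, trigger_fn, pre_depth, post_depth):
    """Keep the last `pre_depth` samples in a ring buffer; once `trigger_fn` fires,
    take `post_depth` further samples (including the trigger sample itself), so the
    trigger always ends up inside the returned record.  Assumes the stream keeps
    supplying samples after the trigger."""
    pre = deque(maxlen=pre_depth)              # circular pre-trigger store
    it = iter(stream)
    for s in it:
        if trigger_fn(s):
            post = [s] + [next(it) for _ in range(post_depth - 1)]
            return list(pre) + post
        pre.append(s)
    return None                                # stream ended without a trigger

record = capture(range(100), lambda s: s == 42, pre_depth=8, post_depth=4)
print(record)    # [34..41] of pre-trigger history, then [42, 43, 44, 45]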

- it should be possible to trigger on very slow edges as well. Think tens of micro-Volts per second

That shouldn't be an issue.  This trigger correction only starts to have an effect when the rise time is <50ns or so.  Outside of that window the naive assumption that first triggered word = trigger point is more than adequate.  The current hardware supports DC to >100MHz triggering, although the AC coupled front end obviously limits the lower end.

2-3* rated BW for stable triggering is not an unrealistic expectation IME.

I'd certainly be setting my sights higher with a new design.

Remember, this prototype runs at 1GSa/s with a rated 100MHz bandwidth.  In multiplexed mode, it has a Nyquist bandwidth of just 125MHz (4 channels enabled).  That's essentially the same as a Rigol DS1104Z or Siglent SDS1104X-E.  If the ADC is faster and you have more data, then the trigger could reliably go beyond the rated B/W of the scope,  but the B/W of a scope is an upper bound.  Most signals will have a fundamental far below the rated bandwidth.  You wouldn't look at a 50MHz square wave on a 100MHz oscilloscope and expect perfect reconstruction.  Looking at a 300MHz sine wave on a 100MHz scope and complaining that the trigger is a bit jittery would be silly, in my opinion.

I'd be curious how the competition performs here.  I may get my Zynq board to output a 200MHz clock to see how well my Rigol can trigger on it with a rated B/W limit of 100MHz.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #135 on: November 20, 2020, 05:40:51 pm »
With 100MHz bandwidth and 1GHz sampling it is customary to have 5ns/div with no visible trigger jitter. The Picoscope with those specs has a specified trigger jitter of 3 ps RMS. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #136 on: November 20, 2020, 05:54:55 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.
Well, I can not think of a DSO which limits the pre-trigger range to the length of the acquisition record  ;) For example: My GW Instek allows me to set the pre-trigger point far outside the acquisition record. It is pretty much a requirement for being able to do jitter measurements so I'm rather surprised there are DSOs out there which have limited pre-trigger abilities.
« Last Edit: November 20, 2020, 06:00:26 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #137 on: November 20, 2020, 06:06:54 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.
Well, I can not think of a DSO which limits the pre-trigger range to the length of the acquisition record. My GW Instek allows me to set the pre-trigger point far outside the acquisition record. It is pretty much a requirement for being able to do jitter measurements.

Both the Rigol DS1074Z and the Agilent DSOX2012A I have do this.

The Rigol limits it to the current memory setting (on Auto, it would be 600 pts at 50ns/div).
The Agilent limits it to the total memory of the scope (~500kpts/channel).

See video from my 1000Z:


This is a necessary property of a scope with pre-trigger: since you don't know when the trigger will occur, a pre-trigger that reaches further back in time requires more memory.

Nothing I am suggesting here is unusual ... it seems pretty much every DSO manufacturer has come across similar limitations.

Now, where there is a difference is the post-trigger delay.  That can be done without extra memory, so I expect the manufacturers save some segment of the samples around the trigger to do the trigger de-jitter.  In which case, I retract a bit of what I said before about this being an edge case - that was wrong; it is a normal use case and it will need to be supported.
 
The following users thanked this post: egonotto

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #138 on: November 20, 2020, 06:07:28 pm »
With 100MHz bandwidth and 1GHz sampling it is customary to have 5ns/div with no visible trigger jitter. The Picoscope with those specs has a specified trigger jitter of 3 ps RMS. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)

Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #139 on: November 20, 2020, 06:27:22 pm »
tom66, thank you, this is super impressive work!

A few years ago I've worked a bit on the Siglent SDS1x0xX-E (https://github.com/360nosc0pe/fpga) reverse-engineer/hack. We've got it to a level where we can control the frontends (coupling, attenuation, BW), capture the ADC data, and push that to memory on the PL. On the PS, we had Linux and some test code to pull data out of RAM and display it; eventually the goal was to render into accumulation buffers in blockram (which is what Siglent does, hence the crappy resolution to make it fit), but didn't get that far - we never got further than basically driving the hardware correctly, but that part worked well.

Without going too much into the topic of creating new hardware vs. hacking existing hardware, I think the design shares a lot of the same choices so for an open source oscilloscope, I would be very interested in cooperating and/or potentially porting your code to this platform.

Also, nice work on the CSI-2 interface! How does your CSI-2 Phy look on the FPGA side? Do you need to implement LP support or only high-speed? This is a very elegant, cheap and fast solution for capturing a lot of data into an RPi. (I've so far always used an FT2232H in FIFO mode, but it adds significant cost, and especially on an RPi3 the USB alone eats a full CPU core due to the bad USB controller design.) I assume receiving data on CSI-2 doesn't take up a lot of CPU resources on the RPi if you can DMA large blocks.

Interesting project! I am impressed someone managed to do that. 

There may be some 'scope' for collaboration, so let's keep talking and see if we can help each other out.  Not sure how much would be reusable, but maybe some would be.

Regarding CSI-2: I was able to write a PLL register on an authentic Pi camera to get the clock down to 12MHz ... the image goes bad (too dark, because the shutter timings etc. are then wrong), but you can then switch the camera into a test pattern mode.  Using this, you can reverse-engineer the protocol with as little as a Rigol DS1074Z.  I built a board to allow me to do this - it sits between a Pi camera and a Pi and allows me to 'snoop' on the bus between the two (see attached).

The CSI-2 PHY on the FPGA side is an implementation of Xilinx XAPP894, using the passive circuit they suggest, with custom Verilog driving a pair of OSERDESE2 blocks and a bloody complex FSM to manage the whole process of generating packets and data streams.  I prototyped this on a smaller PCB in the first run and spent a few months reverse engineering the protocol using what documentation I could find.    It is something I really need to re-engineer at some point.  It was initially designed with a BlockRAM interface, i.e. data would be copied into BRAM and output from there.  That was sufficient for testing, but eventually I ended up bolting on an AXI-Stream interface: you set up a transfer of X lines of video data, each of 2048 bytes, and the AXI DMA manages the rest.  To simplify things, the two lanes terminate at the same moment (i.e. odd data lengths are fundamentally unsupported), but I want to add the capability (as CSI-2 supports) for odd line lengths and jumbo packets at some point.

Annoyingly with a Pi it is 'all or nothing'... if you don't get it all right it doesn't work at all.

One consequence of this design choice is that all packets have to be a multiple of 2048 bytes - if they are not, they are padded with null bytes.  So it's not useful for small packets; those are sent over the SPI bus right now.  But the protocol is fairly robust: I can reliably transfer 180MB/s from Zynq RAM to the Pi for hours on end with zero bit errors.
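
For illustration, a trivial Python sketch of that padding rule (the helper is hypothetical, not code from the project):

def pad_to_line_multiple(payload: bytes, line_bytes: int = 2048) -> bytes:
    """Pad `payload` with null bytes until its length is a whole number of
    2048-byte lines, mirroring the fixed line length used on the link."""
    remainder = len(payload) % line_bytes
    if remainder:
        payload += bytes(line_bytes - remainder)
    return payload

print(len(pad_to_line_multiple(b"\x55" * 3000)))   # 4096: 3000 bytes rounded up to two lines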

I don't implement the true LP protocol as the Pi camera doesn't use it so I don't support e.g. lane turnaround or low speed communication over that.  I do of course implement the start-of-transmission and end-of-transmission signals, and the small packet header format for SoF/EoF and the larger packet format.  Presently the CRC is set to all zeroes ... the Pi doesn't seem to use this and it makes the logic easier.

I also implement start and end on the clock lane, putting the clock lane into LP when not transmitting.  There is no need in the specification to do this, but it improves reliability if the Pi failed to sync onto the first SoT packet it would never see any data for the duration of operation.  It also saves power (about 0.1W).

One interesting way you can determine whether a device is actually checking the checksum is to deliberately degrade the link.  In my case I added some 10pF to the D1+/D1- pair; I got plenty of bit corruption, but all lines still appeared in the data and the frame was otherwise intact.  That told me the Pi ignores the checksum (or sets an ignorable error flag / increments some counter), which meant I could avoid implementing that part of the specification.

You are correct that on the Pi side this is all DMA driven so the data essentially arrives in memory at a given point and you can read it from there.  You need to be careful of a few things:
- The Pi and the transmitter both need to know how big the transfer is (so if you send 2045 lines, set the receiver to 2045 lines), otherwise you get an odd effect where the first few lines are offset and filled with garbage
- The Pi needs to be 'ready' to receive before the FPGA starts otherwise the CSI core gets into an error state

At present only the process that uses MMAL can access the data at a given pointer, which creates a few headaches.  That would be good to solve.  If you want to share the data between processes, it requires a memcpy :( because the MMAL data is private to a given process.  There is a Linux-kernel solution to this that a friend was looking into for me, but I need to pick that up again.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #140 on: November 20, 2020, 06:38:43 pm »
Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?

Right - OK, I didn't consider ETS.  I'm not planning on implementing it, I don't see a major benefit from it.  It may be possible to do it at lower wfm/s but at higher rates it would require the PLL to hop frequency too often. 

But, even if you have ETS, at the end of the day, when you have a sample to plot, say, at 50ps in time... it is going to land on exactly one pixel.  Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes. 

If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent anything finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps of jitter adds no information.
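Quick arithmetic to back that up (nothing more than the division above):

Code: [Select]
def ps_per_pixel(ns_per_div, divs=12, width_px=1920):
    return ns_per_div * divs / width_px * 1000.0   # ns -> ps

print(ps_per_pixel(5))     # ~31.25 ps per pixel at 5ns/div
print(ps_per_pixel(50))    # ~312.5 ps per pixel at 50ns/div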

This applies for sinx/x too, as a sinx/x interpolator works like a regular FIR filter with most of its inputs set to zero. (10x interpolator would have 9 samples at zero and 1 sample at your input value) so you can only shift by interpolated-sample intervals. 
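To illustrate what I mean, here's a throwaway sketch (not scope code) of a 10x interpolator as zero-stuffing followed by a windowed-sinc FIR - the output points still land on a fixed 10x grid:

Code: [Select]
import numpy as np

def sinc_interpolate(x, factor=10, half_width=32):
    # Zero-stuff by `factor`, then low-pass with a windowed-sinc FIR: the FIR sees
    # mostly zero-valued inputs, exactly as described above.
    up = np.zeros(len(x) * factor)
    up[::factor] = x                                  # 1 real sample, factor-1 zeros
    n = np.arange(-half_width * factor, half_width * factor + 1)
    taps = np.sinc(n / factor) * np.hamming(len(n))   # cutoff at the original Nyquist
    return np.convolve(up, taps, mode="same")

x = np.sin(2 * np.pi * 0.05 * np.arange(64))          # band-limited test tone
y = sinc_interpolate(x)                               # 10x more points, same signal
print(len(x), len(y))                                 # 64 640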

« Last Edit: November 20, 2020, 06:42:06 pm by tom66 »
 
The following users thanked this post: 2N3055

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #141 on: November 20, 2020, 07:22:02 pm »
With 100MHz and 1GHz sampling, it is customary to have 5ns/div and no visible triggering jitter. A Picoscope with those specs has 3ps RMS trigger jitter specified. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)

Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?
At 100ps/div you'll definitely see a 31ps difference in delay. But then again having a 3ps RMS trigger jitter is pretty impressive. That is >US$20k oscilloscope territory. I'm not sure whether extremely low trigger jitter specs are something to aim for right now. At some point noise of the system is going to contribute a lot to the trigger jitter and it may need external circuitry to produce a clean, low jitter trigger.

Regarding pre/post trigger. It may be that I got those the wrong way around (semantics), but the point is that it should be possible to move the trigger point way to the left and capture data AFTER the trigger, even after a very long delay.
« Last Edit: November 20, 2020, 07:23:37 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #142 on: November 20, 2020, 07:37:42 pm »
Yes, for post-trigger it will need to be implemented.   I will get the present implementation working with just post-trigger in memory but will consider how to enable this to work for long post-trigger delays.  The same principle should be usable for both cases, just need to keep a local record of samples around the trigger point if they are outside of the memory depth.

I have some DIY to catch up on over the weekend, so I maybe won't get that much time to look at this specifically, but will still give it some "brain time".
 

Offline dave j

  • Regular Contributor
  • *
  • Posts: 91
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #143 on: November 20, 2020, 08:14:24 pm »
If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent anything finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps of jitter adds no information.
Just because you can only plot using pixels doesn't mean you don't need to store waveform points to a higher resolution. Consider the attached image. The white lines are at five times higher pitch than the orange ones. If you were only storing points at the lower pitch the orange lines would appear identical. Not a problem for horizontal traces but for nearly but not quite vertical ones, such as fast edges, you could clearly see a difference.
I'm not David L Jones. Apparently I actually do have to point this out.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #144 on: November 20, 2020, 08:41:22 pm »
If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent anything finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps of jitter adds no information.
Just because you can only plot using pixels doesn't mean you don't need to store waveform points to a higher resolution. Consider the attached image. The white lines are at five times higher pitch than the orange ones. If you were only storing points at the lower pitch the orange lines would appear identical. Not a problem for horizontal traces but for nearly but not quite vertical ones, such as fast edges, you could clearly see a difference.

Right - but here's the thing - the data is there.  Nothing is being lost -- it's just not being reconstructed, if that makes sense.

This relates to how the data is reconstructed into a real signal.    At any given zoom level there is little benefit in going beyond the resolution of your display device (you cannot get more pixels than there actually are on the panel).  So there is no point in showing <31ps of jitter, for instance, if the minimum display resolution is 31ps, because nothing will ever make it usefully visible to the user.

If you zoom in one step then, yes, you do want to make that visible at that stage.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2738
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #145 on: November 20, 2020, 09:32:41 pm »
Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes.
The reconstruction (plotting) filter in the MegaZoom IV is matched to the expected bandwidth of the front end, so it may be difficult to see with the slower models. But on faster models the plotting is most certainly not hard-aligned to the trigger, and can be seen to move with at least 1px of precision at 2ns/div (64px, 31ps). Note that scope uses an analog trigger, so there is additional jitter from that hardware which isn't eliminated as it would be with a digital trigger.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #146 on: November 21, 2020, 01:39:30 am »
The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate the slope from the delta between the presumed trigger point (which is at t=0 - the centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample period (0-1ns).
This is getting silly, you still can't provide a mathematical example of your proposed method. How can the slope be known a-priori? With an ideal AFE filter, any frequency (slope) less than the cutoff could be occurring around the trigger point.

Even with the trivial example of a perfect sine wave of constant frequency being sampled perfectly (and below Nyquist), shifting it with a DC offset while keeping the trigger threshold static would present different slopes at the trigger point. Just the phasing of the points, when the frequency isn't rationally related to the sampling frequency, causes significant shifts and jitter as the waveform approaches the Nyquist rate.

Old military electronics tech trick:    Measure the harmonics of a square wave on the spectrum analyzer to determine the slew rate.

All of this is a basic application of the Fourier transform and causality.

We have given much thought to triggering and I think we can do better than anyone else.  Not implemented yet, but nothing difficult to do.  I spent quite a bit of time on the subject. The limitation is free time to devote to the project.

If anyone wants to commit their time to working on the task I shall be pleased to advise.  It's actually quite easy if you know how.

Have Fun!
Reg
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #147 on: November 21, 2020, 01:53:46 am »
It would be better to just explain the math in detail so people know what they are getting into instead of pulling up smoke screens. Open source means full disclosure  ;D
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, egonotto

Offline snoopy

  • Frequent Contributor
  • **
  • Posts: 743
  • Country: au
    • Analog Precision
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #148 on: November 21, 2020, 06:51:05 am »
Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?

Right - OK, I didn't consider ETS.  I'm not planning on implementing it, I don't see a major benefit from it.  It may be possible to do it at lower wfm/s but at higher rates it would require the PLL to hop frequency too often. 

But, even if you have ETS, at the end of the day, when you have a sample to plot, say, at 50ps in time... it is going to land on exactly one pixel.  Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes. 

If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent anything finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps of jitter adds no information.

This applies for sinx/x too, as a sinx/x interpolator works like a regular FIR filter with most of its inputs set to zero. (10x interpolator would have 9 samples at zero and 1 sample at your input value) so you can only shift by interpolated-sample intervals.

Tek TDS7XX, TDS7XXX and TDS5XXX all offer ETS. The TDS7XXX and TDS5XXX also offer real-time sinx/x interpolation, probably because they have much more computational power than the earlier TDS7XX scopes. The ETS works extremely well on these scopes. I don't think any other vendor does it as well as Tek does. The downside to ETS is that it requires a repetitive waveform :(

cheers
« Last Edit: November 21, 2020, 06:53:39 am by snoopy »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #149 on: November 21, 2020, 10:24:51 am »
The trade off with ETS is that your waveform rate has to fall because you need to hop the PLL frequency often.

The ADF4351 I'm using takes about 80us to lock in "Fast Lock Mode" which is intended for fast channel changes, not including the time required to write the registers on the device over SPI.  In the most optimistic case, that sets your acquisition rate at 12,500 wfm/s.   Faster devices do exist but they would still end up being the ultimate limit in the system. 

ETS is making up for poor sinx/x interpolation,  you can do everything ETS does, and arguably more accurately, with a good interpolator.  (Assuming your input is correctly bandlimited for the normal ADC sampling rate.)

Working on the Python sampling model now.
« Last Edit: November 21, 2020, 10:26:38 am by tom66 »
 

Offline snoopy

  • Frequent Contributor
  • **
  • Posts: 743
  • Country: au
    • Analog Precision
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #150 on: November 21, 2020, 11:04:07 pm »
The trade off with ETS is that your waveform rate has to fall because you need to hop the PLL frequency often.

The ADF4351 I'm using takes about 80us to lock in "Fast Lock Mode" which is intended for fast channel changes, not including the time required to write the registers on the device over SPI.  In the most optimistic case, that sets your acquisition rate at 12,500 wfm/s.   Faster devices do exist but they would still end up being the ultimate limit in the system. 

ETS is making up for poor sinx/x interpolation,  you can do everything ETS does, and arguably more accurately, with a good interpolator.  (Assuming your input is correctly bandlimited for the normal ADC sampling rate.)

Working on the Python sampling model now.

Yes, ETS requires many bites of the cherry to reconstruct the waveform, so the waveform update rate suffers and the incoming waveform needs to be stable during this time. However, am I right in saying that with ETS you don't get any phase shift from an interpolation filter, which is never perfect?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #151 on: November 21, 2020, 11:30:03 pm »
ETS is making up for poor sinx/x interpolation,  you can do everything ETS does, and arguably more accurately, with a good interpolator.  (Assuming your input is correctly bandlimited for the normal ADC sampling rate.)
Actually the whole point of ETS is to go (far) beyond the Nyquist limit of the ADC. One could even envision adding a sampling head which together with the 14bit version of the ADC could result in a very unique device. But implementing ETS and sampling in itself isn't that interesting. The real challenge is to be able to trigger accurately on a signal which has a very high frequency (several GHz). Several of the older high frequency DSOs (Tektronix TDS820 and Agilent 54845a for example) can't trigger on high frequency signals.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #152 on: November 22, 2020, 08:52:55 am »
That's true, at that point you'd essentially be implementing something similar to a sampling scope, to achieve ~500MHz repetitive bandwidth.

I think it's a different project, but there's no practical reason an ETS-capable variant (perhaps a software change, with hardware lacking B/W filters) couldn't be developed.

Not the focus for the first version but one of the goals for this project is upgradeability and customisability. 
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #153 on: November 23, 2020, 09:07:32 pm »
I modelled the triggering prototype using a Python script and Numpy/matplotlib.  See attached for the script, for anyone interested. 

Overall I'm quite impressed - trigger jitter was relatively low, but I need to tweak the search range and coefficients somewhat to get performance similar across all trigger levels.  I think I should move towards a sinc interpolator for the trigger predictor, but this simple linear predictor (using the error at the trigger point and the local slope based on 4 samples) gets to ~360ps of jitter for a 1ns sample period with 1.5LSB of ADC noise simulated.

The biggest problem seems to be that my predictor is much less accurate at certain trigger levels - it seems to be some kind of quantisation effect.  I will have to continue tweaking to see if I can improve this.

This implementation could be performed with less than 16 bytes of memory per trigger point (5 samples + 1 timestamp) and so should be practical for long post-trigger delays where the samples around the trigger are normally outside of acquisition memory.  The slope approximation should be possible with an 8-bit LUT.
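For anyone who doesn't want to open the attachment, the sketch below is a much-simplified stand-in for the idea (it is not the attached script): quantise a noisy sine sampled at 1GSa/s, find the coarse rising-edge crossing, refine it with the local slope, and look at the spread of the refined trigger times.

Code: [Select]
import numpy as np

FS = 1e9                     # 1 GSa/s -> 1 ns sample period
F_SIG = 10.123e6             # test tone, deliberately unrelated to FS
AMPL = 100                   # amplitude in ADC codes
LEVEL = 0                    # trigger level (mid-scale)

def trigger_error(rng):
    phase = rng.uniform(0, 2 * np.pi)
    t = np.arange(512) / FS
    adc = np.round(AMPL * np.sin(2 * np.pi * F_SIG * t + phase)
                   + rng.normal(0, 1.5, t.size))              # ~1.5 LSB of ADC noise
    rising = np.flatnonzero((adc[:-1] < LEVEL) & (adc[1:] >= LEVEL))
    idx = rising[(rising >= 2) & (rising <= 500)][0]          # coarse crossing
    slope = (adc[idx + 2] - adc[idx - 1]) / 3.0               # local slope estimate
    frac = (LEVEL - adc[idx]) / slope                         # sub-sample correction
    t_trig = (idx + frac) / FS
    true_cross = (np.arcsin(LEVEL / AMPL) - phase) / (2 * np.pi * F_SIG)
    period = 1 / F_SIG
    return ((t_trig - true_cross + period / 2) % period) - period / 2

rng = np.random.default_rng(0)
err = np.array([trigger_error(rng) for _ in range(2000)])
print(f"trigger jitter ~ {err.std() * 1e12:.0f} ps RMS")

The attached script does more (search range, coefficient tweaks, plots), but this captures the basic predictor.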
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 3612
  • Country: cn
  • Born in Finland with DLL21 in hand
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #154 on: November 24, 2020, 06:03:51 am »
Think as if you were a scope manufacturer who has to write the datasheet.

Imho, the jitter you show does not look good at all. Even without deeper analysis, 300ps or 600ps numbers give a poor first impression: horrible.

Here are some pick-ups from various scope datasheets; note the sampling interval too, and adjust these for 1ns-interval sampling.

These are just random examples. Some are good, some are "acceptable".
All of these are 8-bit ADC scopes. At least these numbers, even without further analysis, look rather different and better.

Some  1Gsa/s 0.5k$ scope
Trigger Jitter: < 100 ps 


Some 1k$ 2Ch and 1.5k$ 4Ch,  2Gsa/s scope
Trigger Jitter: CH1 - CH4: <10 ps rms, 6 divisions pk-pk, 2 ns edge
(EXT trig: <200 ps rms ) My note: This EXT trig is simplest old traditional analog pathway / comparator trigger.


Some 5Gsa/s scope 
Trigger Jitter: <9ps RMS (typical) for ≥300MHz sine and ≥6 divisions peak to peak amplitude for vertical gain settings from 2.5mV/div to 10V/div.
Trigger Jitter: <5ps RMS (typical) for ≥500MHz sine and ≥6 divisions peak to peak amplitude for vertical gain settings from 2.5mV/div to 10V/div.

Some expensive 5 Gsa/s scope
Trigger jitter: full-scale sine wave of frequency set to –3 dB bandwidth < 1 ps (RMS) (meas.)

Some expensive 10Gsa/s scope
Trigger and Interpolator Jitter: ≤ 3.5 ps RMS (typical)


Low trigger jitter is extremely important in a scope. How can you measure, for example, signal timing jitter if the scope's own trigger jitter is horrible?
Good scopes may also have measurement functions that can measure and display the jitter distribution over a long period. That is a waste of time if the scope's own jitter is bad.

As I said in my previous message, the whole trigger engine is one of the most important parts of a good scope. Anyone can draw nice-looking images on a screen, but not everyone can build a High Performance trigger engine.

The topic name here includes "High performance". Why?

Now you say "Overall I'm quite impressed - trigger jitter was relatively low..." and show a simulation of trigger jitter with horrible-looking numbers.

When you go forward with this trigger engine and the things related to it... you can do some training with this: imagine you are a manufacturer and you need to reach a trigger engine performance where you can write "Trigger jitter: <20ps RMS" (at least) in the datasheet.

Whatever nice features and nice-looking waveform drawing a scope has, and however high-performance this and that, if it does not have a High Performance trigger engine it is just a more or less nice-looking thing to decorate the lab. The same goes for analog front end quality and sampling quality. Everyone knows... garbage in - garbage out...

It still looks like a nice project... but it says "High performance", so try to keep it that way. At least for the trigger, which is perhaps one of the most difficult parts of an oscilloscope, when aiming at the "High performance" < "High End" < "State of the art" class where you need to be much better than normal.
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Huawei HarmonyOS 2.0  |  ArcFox Alpha S
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #155 on: November 24, 2020, 08:23:47 am »
The trigger prototype here is intended to demonstrate the concept of a trigger based on the data around the digital trigger point - it is not necessarily the final implementation.  In the best cases the trigger jitter is <20ps RMS, but there is presently an unresolved dependency on the trigger level.

You are not wrong that 300ps would not be ideal for a 'real scope'. The initial goal of the prototype is to replicate the performance of a 1GSa/s oscilloscope,  at a ~$500 price point, so 100ps or less is a fair goal, and "High Performance" in this regard refers to the *state-of-the-art* for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10 MSA/s, not 1GSa/s. 

If I am to aim for something around the 2.5GSa/s oscilloscope benchmark, then I need to make the jitter better, of course.

31ps is the level at which the jitter becomes indistinguishable on the display surface, assuming a 1080p display, so there is little point (at a minimum timebase of 5ns/div) in achieving anything better for a 1GSa/s oscilloscope.  If a 2.5GSa/s oscilloscope has a 2ns/div or 1ns/div setting, then the requirement drops to around 5-10ps.
« Last Edit: November 24, 2020, 08:27:30 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #156 on: November 24, 2020, 08:51:09 am »
The trigger prototype here is intended to demonstrate the concept of a trigger based on the data around the digital trigger point - it is not necessarily the final implementation.  In the best cases the trigger jitter is <20ps RMS, but there is presently an unresolved dependency on the trigger level.

You are not wrong that 300ps would not be ideal for a 'real scope'. The initial goal of the prototype is to replicate the performance of a 1GSa/s oscilloscope,  at a ~$500 price point, so 100ps or less is a fair goal, and "High Performance" in this regard refers to the *state-of-the-art* for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10 MSA/s, not 1GSa/s. 
I agree. AFAIK the R&S RTM3000 doesn't even specify trigger jitter, and judging from how fat the trace gets around the trigger point it isn't very good. But then again this oscilloscope isn't made for jitter analysis.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Online asmi

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #157 on: November 24, 2020, 07:07:03 pm »
"High Performance" in this regard refers to the *state-of-the-art* for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10 MSA/s, not 1GSa/s. 
Oh, so that explains some things. I also had a question in my mind of what exactly is "high performance" about 1 GSa scope. Initially I thought that it was ETS with very high analog bandwidth, but now it appears that it simply means "less of a crap compared to what's already out there in the open source sphere".

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #158 on: November 24, 2020, 08:36:04 pm »
Perhaps you should read the opening post?    :-//
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #159 on: November 25, 2020, 12:26:11 am »
"High Performance" in this regard refers to the *state-of-the-art* for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10 MSA/s, not 1GSa/s. 
Oh, so that explains some things. I also had a question in my mind of what exactly is "high performance" about 1 GSa scope. Initially I thought that it was ETS with very high analog bandwidth, but now it appears that it simply means "less of a crap compared to what's already out there in the open source sphere".

This started as an open source version of a product that Micsig now has on the market at a price point a bit higher than Tom's initial goal.  We certainly won't try to undercut the Chinese.   My aborted "Scope Wars" thread was an attempt at documenting my market research into what we viewed as competing products: Rigol, Instek, Siglent.  At the time Micsig was not on the radar.

Once I got involved it morphed into a "beat the crap out of HPAK, Tek & R&S" project for me. And I *think* I have almost sold Tom on that.

The current goal is under active discussion.  A lot has changed in the last 18 months.

The canonical response to "I want high performance" is, "How much money would you spend?"

Have Fun!
Reg
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7672
  • Country: us
    • SiliconValleyGarage
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #160 on: November 25, 2020, 12:55:22 am »
The thought of using PCIexpress to aggregate more channels over separate boards has crossed my mind too, but you'll need separate FPGAs on each board and ways to make triggers across channels happen. You quickly end up needing to time-stamp triggers and correlate during post processing because the acquisition isn't fully synchronous.
PCI for data dump only. there would be a dedicated ribbon cable carrying the 'qualifier signals' for trigger. kinda like what they do with graphics cards.  the realtime stuff does not go over pci. it is fpga to fpga comms. you would not need too many signals. could use a wired-or / wired-and principle. i'm ok with one fpga per board. would be smaller. you could do a 2-channel acquisition board. then you can do 2, 4, 6 or 8 input machines.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #161 on: November 25, 2020, 09:01:53 am »
In the present implementation the data is stored in a 68-bit wide FIFO.  The 64 bits are ADC samples, 1 bit is a trigger indicator and 3 bits are the trigger index.

I only store the 64 bits of data in actual RAM presently, though.  There is only one trigger pointer for any given sample so that goes into a state register which is latched into the acquisition linked list on an interrupt.  I want to replace the acquisition control on the CPU with an FPGA control engine, as the interrupts and slow AXI-slave interface limit the total waveform rate.  I could get a very minimal blind time if the FPGA is doing everything (if I'm very clever - and I like to delude myself into thinking I am - it might be zero lost samples between individual waveforms with frequent triggers.)
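To make the word layout concrete, here's an illustrative packing of the 68-bit word in Python (the exact bit ordering here is just for the sketch, not necessarily what the HDL uses):

Code: [Select]
def pack_fifo_word(samples, trig, trig_index):
    # 8 x 8-bit ADC samples in bits 0..63, trigger flag in bit 64,
    # 3-bit trigger index in bits 65..67.
    assert len(samples) == 8 and 0 <= trig_index < 8
    word = 0
    for i, s in enumerate(samples):
        word |= (s & 0xFF) << (8 * i)
    word |= int(bool(trig)) << 64
    word |= (trig_index & 0x7) << 65
    return word

def unpack_fifo_word(word):
    samples = [(word >> (8 * i)) & 0xFF for i in range(8)]
    return samples, bool((word >> 64) & 1), (word >> 65) & 0x7

w = pack_fifo_word(list(range(8)), trig=True, trig_index=5)
print(unpack_fifo_word(w))    # ([0, 1, 2, 3, 4, 5, 6, 7], True, 5)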
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #162 on: November 28, 2020, 10:20:34 pm »
Hi all,

I made the decision to release all the FPGA designs, software (including Python application) and hardware designs under the MIT licence now. 

scopy-fpga contains the application code, FPGA design, IP repositories and STM32F0 firmware for the system. 
https://github.com/tom66/scopy-fpga

scopeapp is the Python application that runs on the Raspberry Pi that provides the UI.  It contains the rendering engine and rawcam capture libraries.
https://github.com/tom66/scopeapp

scopy-hardware contains hardware designs (schematics, gerbers, STEP file) for the design in CircuitMaker (interested parties can get an invite to the project on CM too, just PM me.)
https://github.com/tom66/scopy_hardware

What I want to do now is build a community of interested individuals and see where we can take this project as I think from the interest here it clearly 'has legs'.   The existing hardware platform is quite capable but I would like to do more and want to flesh out the modular capability and investigate higher performance tiers.  There is obviously debate over where this project can go and I think there are many interested parties who would use it and would be interested in contributing.  There is also a commercial aspect to be considered.

I will release a survey/Google Forms tomorrow, to gather some thoughts and then see where to go from there.

And if it goes nowhere, that's fine.  It's been a fun project to work on,  and maybe what I've developed so far can help others.
 
The following users thanked this post: egonotto, DEV001

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #163 on: November 28, 2020, 10:26:16 pm »
As I wrote before: I want to spend some time on creating a 1M Ohm analog frontend that is compatible with the HMCAD1520 / HMCAD1511 ADCs. This will need some number crunching first.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #164 on: November 29, 2020, 06:49:24 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi. 
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #165 on: November 29, 2020, 06:50:17 pm »
Questionnaire for those interested in the project.  

I'd appreciate any responses to understand what features are a priority and what I should focus on.

https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/formResponse

All responses will be anonymised and a summary of the results will be posted here (when sufficient data exists.)
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #166 on: November 29, 2020, 07:09:02 pm »
HMCAD1520

Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #167 on: November 29, 2020, 09:52:54 pm »
HMCAD1520

Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.

Other customers?  Perhaps?
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #168 on: November 30, 2020, 08:09:37 am »
Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.

Yes, it is an odd part, but I've heard from a few people familiar with the market for ADCs and if you know who you are talking to, you can get inexpensive Chinese parts with surprisingly decent performance that easily beat Western equivalent parts in terms of performance per buck.  A great deal of that has been driven by the budget oscilloscope and test equipment market, as well as cheaper RF SDR and amateur radio kit.  Digi-Key and the likes only tend to capture mainstream parts that are worth stocking.

Fundamentally there's not much that's too specialised about ADC design now - these designs are decades old and we have audio cards with 24-bit ADCs running at 192kHz ... this is sort of like the opposite end of the performance spectrum - it's a process problem, not a design problem.

The HMCAD1520 is available on Digi-Key; they have decent stock (~299 parts) and a 14-week lead time for more, which seems OK to me.     I had no issue buying the HMCAD1511 when building the first prototypes, though I only bought two.

I'd imagine ADI only keep these parts and don't develop additional variants because they have existing customers that are happy with them from when they bought Hittite (the part actually comes from Arctic Silicon's "Blizzard" family of ADCs.  They are/were a Norwegian firm that Hittite acquired before ADI acquired them.)  But, it would be nice to see more lower cost parts.

My plan is to figure out a multiplexing arrangement where two ADC chips could be used to sample at 2.5GSa/s.  I have already managed to get a HMCAD1511 stable at >1.2GSa/s.  That would enable a realistic 2.5GSa/s oscilloscope (400ps sample period) with, say, 350MHz per-channel B/W in 1-channel mode.  I also suspect that the '1520 features might only be lasered out (or they may not be disabled at all!) on the '1511, as the two ADCs seem to use very similar cores/structures - though I have yet to confirm this.
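A crude model of the interleaving idea (purely illustrative - the real work is in clock phasing, calibration and the front end): two ADCs sample the same input half a sample period apart and their streams are merged.

Code: [Select]
import numpy as np

FS_SINGLE = 1.25e9                     # each ADC run at 1.25GSa/s (assumed)
N = 1000
f_in = 70e6

t_a = np.arange(N) / FS_SINGLE                     # ADC A sample instants
t_b = t_a + 0.5 / FS_SINGLE                        # ADC B, clocked half a period later
adc_a = np.round(127 * np.sin(2 * np.pi * f_in * t_a))
adc_b = np.round(127 * np.sin(2 * np.pi * f_in * t_b))

combined = np.empty(2 * N)
combined[0::2] = adc_a                             # interleave: A, B, A, B, ...
combined[1::2] = adc_b                             # effective rate = 2.5GSa/s
print(combined[:8])

Calibrating the gain, offset and phase match between the two ADCs is the hard part and is ignored here.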
« Last Edit: November 30, 2020, 08:11:33 am by tom66 »
 

Offline gf

  • Frequent Contributor
  • **
  • Posts: 518
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #169 on: November 30, 2020, 07:06:38 pm »
From HMCAD1520 datasheet:
Quote
High speed Modes (12-bit / 8-bit)
Quad Channel Mode: Fsmax = 160 / 250 MsPs
Dual Channel Mode:  Fsmax  = 320 / 500 MsPs
Single Channel Mode: Fsmax = 640 / 1000 MsPs

I'm wondering what's up with the 640 MSPS?
The AC specifications of the HMCAD1520 are only given up to 640 MSPS, but not for 1000.
And the Max. Conversion Rate is specified as 640 as well (1 ch).
When does the 1000 apply? Does it only apply in HMCAD1511 compatibility mode (8-bit)?
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #170 on: November 30, 2020, 07:12:11 pm »
I'm wondering what's up with the 640 MSPS?
The AC specifications of the HMCAD1520 are only given up to 640 MSPS, but not for 1000.
And the Max. Conversion Rate is specified as 640 as well (1 ch).
When does the 1000 apply? Does it only apply in HMCAD1511 compatibility mode (8-bit)?

Yes, it's 1GSa/s in 8-bit mode, 640MSa/s in 12-bit and 160MSa/s in 14-bit mode.
IMO 14-bit mode is a bit useless but probably a consequence of the internal 14-bit core (which HMCAD1511 shares)
 

Offline gf

  • Frequent Contributor
  • **
  • Posts: 518
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #171 on: November 30, 2020, 09:26:06 pm »
I guess you mean 105 MSa/s in precision mode, do you? (160 is obviously for 12-bit high speed @4ch)

I don't think that precision mode is really useless.
The main point of precision mode is IMO not the 14 bits, but the following:

Quote
the high speed modes all utilize interleaving to achieve high sampling speed. Quad channel mode interleaves 2 ADC branches, dual channel mode interleaves 4 ADC branches, while  single  channel  mode  interleave all 8 ADC branches. In precision mode interleaving is not required and each ADC channel uses one ADC branch only.

This eliminates interleaving spurs, leading to a significantly better SFDR and SINAD.

The cost is a maximum sampling rate of 105 MSa/s - but with all 4 channels enabled.
So with 4 channels enabled, the precision mode sampling rate is only by a factor ~1.5 lower than the 160 MSa/s for 12-bit 2-fold interleaved high speed mode. I find this trade-off not too bad.
« Last Edit: November 30, 2020, 09:29:23 pm by gf »
 
The following users thanked this post: tom66

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #172 on: November 30, 2020, 09:36:02 pm »
I guess you mean 105 MSa/s in precision mode, do you? (160 is obviously for 12-bit high speed @4ch)

That's what I get for not double checking the datasheet and quoting from memory.

Quote
the high speed modes all utilize interleaving to achieve high sampling speed. Quad channel mode interleaves 2 ADC branches, dual channel mode interleaves 4 ADC branches, while  single  channel  mode  interleave all 8 ADC branches. In precision mode interleaving is not required and each ADC channel uses one ADC branch only.

This eliminates interleaving spurs, leading to a significantly better SFDR and SINAD.

The cost is a maximum sampling rate of 105 MSa/s - but with all 4 channels enabled.
So with 4 channels enabled, the precision mode sampling rate is only by a factor ~1.5 lower than the 160 MSa/s for 12-bit 2-fold interleaved high speed mode. I find this trade-off not too bad.

OK, that's actually a really good point and one I didn't consider. Thanks! 
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #173 on: December 05, 2020, 11:43:42 am »
Thanks for all the comments so far and for those who have filled out the survey.  For anyone who has missed it, please submit your response here:

https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/viewform

All responses are appreciated - I am looking to make an announcement in the new year regarding the direction of this project.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #174 on: December 05, 2020, 12:18:50 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised, with the FPGA as the very, very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will fix a lot of functionality in place and lock many people out of being able to help improve this design.
« Last Edit: December 05, 2020, 12:22:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #175 on: December 05, 2020, 04:33:43 pm »
All the FPGA should be doing is digital phosphor accumulation.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #176 on: December 05, 2020, 04:38:52 pm »
All the FPGA should be doing is digital phosphor accumulation.
No, not at this point in the project. This can be done in software just fine.

If you look at Siglent's history you'll notice they have rewritten their oscilloscope firmware at least 3 times from scratch before getting where they are now. Creating oscilloscope firmware is hard and it is super easy to paint yourself into a corner. The right approach is to get the basic framework set up first (going through several iterations for sure) and then optimise. IMHO the value of this project is going to be in the flexibility to make changes / add new features. If people want crazy high update rates they can buy an existing scope and be done with it.

For example: if the open source platform allows you to add a Python or C/C++ based protocol decoder in a couple of hours then that is a killer feature. Especially if the development environment already runs on the oscilloscope, so no software installation for cross compiling or whatever is needed. If OTOH you'd need to get a Vivado license first and spend a couple of days understanding the FPGA code, then nobody will want to do this.

A good example is how the Tektronix logic analyser software can be extended by decoders: https://xdevs.com/guide/tla_spi/
« Last Edit: December 05, 2020, 04:43:28 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno, JPortici

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #177 on: December 05, 2020, 04:39:45 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised, with the FPGA as the very, very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will fix a lot of functionality in place and lock many people out of being able to help improve this design.

The problem with software rendering is you can't do as much with software as you can with dedicated hardware blocks. The present rendering engine achieves ~23k wfm/s and is about as optimised as you can get on a Raspberry Pi ARM processor, taking maximum advantage of the cache design and hardware hacks.  And that is without vector rendering, which currently roughly halves performance.

An FPGA rendering engine should easily be able to achieve over 200k wfm/s. While raw waveforms rendered per second is a case of diminishing returns (there is probably not much benefit to the 1 million waves/sec scopes from Keysight - feel free to disagree with me here), there is still some advantage to achieving e.g. 100k wfm/s, which is where many US$900-1500 oscilloscopes seem to benchmark.

This also frees the ARM on the Pi for more useful things - while 100k wfm/s might theoretically be possible in software, if all four ARM cores were busy would that be a good thing? The UI would become sluggish, and features like serial decode would in all likelihood depend on the ARM processor too, and therefore suffer in performance.

As for maintainability, that shouldn't be as much of a concern. Sure, it is true that the raw waveform engine may not be maintained as much (it is a 'get it right and ship' thing in my mind), but the rest of the UI and application will be in userspace, including cursors, graticule, that sort of thing.  In fact, it is likely that all the FPGA renderer will do is pass out a rendered image of the waveform for a given channel which the Pi or other applications processor can plot at any desired location.  Essentially, as Marco states, the FPGA is doing the digital phosphor part which is the thing that needs to be fast.  The applications software will always have access to the waveform data too.
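For anyone unfamiliar with the term, the 'digital phosphor' step is essentially a 2D histogram: each rendered waveform increments the pixels it passes through, and the accumulated counts become intensity. A software sketch of that accumulation (illustration only - this is neither ArmWave nor the FPGA design) looks like this:

Code: [Select]
import numpy as np

WIDTH, HEIGHT = 1024, 256            # display columns x vertical bins (arbitrary here)
accum = np.zeros((HEIGHT, WIDTH), dtype=np.uint32)

def accumulate(waveform):
    # waveform: WIDTH samples of 8-bit ADC codes, one per display column.
    rows = np.clip(waveform.astype(int), 0, HEIGHT - 1)
    accum[rows, np.arange(WIDTH)] += 1          # one hit per column per waveform

rng = np.random.default_rng(1)
x = np.arange(WIDTH)
for _ in range(5000):                           # 5000 acquisitions
    wfm = 127 + 100 * np.sin(2 * np.pi * x / 256) + rng.normal(0, 3, WIDTH)
    accumulate(np.clip(wfm, 0, 255))

print(accum.max(), int(accum.sum()))            # hottest pixel count, total hits

The FPGA engine would effectively be doing this increment step in hardware, with the colour mapping left to the application.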
« Last Edit: December 05, 2020, 04:41:38 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #178 on: December 05, 2020, 04:44:57 pm »
Trust me, nobody cares about waveforms per second! It is not a good idea to pursue a crazy high number just for the sake of achieving it. There are enough ready-made products out there for sale for the waveforms/s aficionado. IIRC the Lecroy Wavepro 7k series tops out at a couple of thousand without any analysis functions enabled.

You have to define the target audience. What if someone has a great idea on how to do color grading differently, but that part is 'fixed' inside the FPGA and there is no way to change it? Also, with rendering fixed inside the FPGA you basically end up with math traces for anything else, and you can't make a waveform processing pipeline (like GStreamer does, for example) that easily.

I'm 100% sure that the software and GPU approach offers the best flexibility and is the way of the future (also for oscilloscope manufacturers in general). A high-end version can have a PCIexpress slot which can accept a high-end video card to do the display processing. The waveforms/s rate goes up immediately and doesn't take any extra development effort. Again, look at the people upgrading their Lecroy Wavepro 7k series with high-end video cards.
« Last Edit: December 05, 2020, 04:55:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Zucca

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #179 on: December 05, 2020, 04:52:58 pm »
No, not at this point in the project.
For a minimum functional prototype to get some hype going, that makes sense; high-capture-rate digital phosphor and event detection are high-end features. Budgeting some room/memory for them in the FPGA costs very little time, though.
« Last Edit: December 05, 2020, 04:56:02 pm by Marco »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #180 on: December 05, 2020, 04:57:27 pm »
No, not at this point in the project.
For a minimum functional prototype to get some hype going that makes sense, high capture rate digital phosphor is a high end feature. Budgeting some room/memory for it in the FPGA costs very little time though.
But it is just one trick you don't really need and it seriously hampers the rest of the functionality. Look at how limited the Keysight oscilloscopes are; basically one-trick ponies. If you use a trigger then the chance of capturing a specific event is 100% and you don't need to stare at the screen without blinking your eyes. At this moment time is better spent on getting the trigger system extended so it can trigger on specific features and sequences of a signal.
« Last Edit: December 05, 2020, 05:14:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #181 on: December 05, 2020, 05:44:59 pm »
Digital phosphor is more for getting a general idea of how the circuit is behaving; using it to detect by eye whether a signal goes beyond bounds seems kinda silly. High capture rates are also valuable for fault detection and also benefit from being implemented in the FPGA. The two features are orthogonal... but for a minimum prototype the FPGA implementation of both could be delayed, even if the latter has higher priority.
« Last Edit: December 05, 2020, 05:48:34 pm by Marco »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #182 on: December 05, 2020, 06:23:23 pm »
The point is that you can do 'digital phosphor' just fine in software; doing it in the FPGA right now just hampers progress of the project and it doesn't add much in terms of usefulness. Look at high-end signal analysis oscilloscopes; none of them have high waveform update rates. It is just that Keysight has been hyping this as a useful feature on their lower-end gear while it isn't. Also realise that the highest waveform update rates happen at a very specific time/div setting only. A high update rate has never helped me solve a problem. Deep memory and versatile trigger methods are much more useful.
« Last Edit: December 05, 2020, 06:26:59 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #183 on: December 05, 2020, 06:38:57 pm »
Well, there's always the option of doing both.  The present Python application architecture has support for different rendering engines - ArmWave is the only one presently implemented, but FPGAWave would also be an option.  In that case, the user would have an option to select their preference, and the Zynq SoC would select the required data stream and mode for the CSI transfer engine.

Personally, one of the benefits I find from high waveform render rates is that jitter and ringing are more clearly visible - I can see how frequent an event is.

Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer for the digital phosphor to avoid too much stair-stepping, which implies a higher capture/render rate still.  More so, potentially, at higher zoom levels where there is much more than one waveform point per displayed X column.
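As a sketch of what I mean by the accumulation buffer and gamma step (illustration only, assuming a 12-bit accumulator mapped onto an 8-bit display):

Code: [Select]
import numpy as np

ACC_BITS = 12                        # depth of the per-pixel accumulation buffer
counts = np.arange(2 ** ACC_BITS)    # every possible hit count per pixel

def to_display(counts, gamma=2.2, min_level=16):
    # Map hit counts to 8-bit display intensity, with a gamma curve and a minimum
    # visible brightness (the 'intensity' control offset); zero hits stays black.
    norm = counts / counts.max()
    levels = min_level + (255 - min_level) * norm ** (1 / gamma)
    levels[counts == 0] = 0
    return levels.astype(np.uint8)

lut = to_display(counts)
print(len(np.unique(lut)))           # distinct display levels actually used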
« Last Edit: December 05, 2020, 06:41:39 pm by tom66 »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #184 on: December 05, 2020, 06:49:17 pm »
When you're initiating the capture of a waveform based on stuff happening hundreds/thousands of samples after a simple/protocol trigger, I'm not sure calling it flexible triggering does that justice.

It's high capture rate pass/fail testing. A feature which can really still wait for a minimum viable prototype, just stick to simple triggers for the moment.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #185 on: December 05, 2020, 07:30:53 pm »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most; however, a portion of those bits is lost to gamma and intensity correction. Secondly, you can't see very dark colors, so the intensity has to start somewhere halfway. So at the hardware level you are limited to 100 levels. And then there is the limit of what the human eye can distinguish. If you have 32 or maybe 64 different levels you have more than enough to draw a meaningful picture. However, intensity grading is just mimicking analog oscilloscope behaviour; it doesn't add much in terms of usefulness. Color grading or reverse intensity (see my RTM3000 review) are far more useful for looking at a signal than 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative in terms of providing meaningful information; with just 8 levels there will be a clear binning effect of how often a trace hits a spot.
« Last Edit: December 05, 2020, 08:17:24 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, JamesLynton, JPortici

Offline JamesLynton

  • Contributor
  • Posts: 35
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #186 on: December 05, 2020, 08:53:41 pm »
Very sensible on the intensity binning idea, I like that idea *a lot* :)
It may also help to have an adjustable +/- exponential tracking curve to assign the binning transition spread on the fly, when you are trying to tease out 'data' that frequently isn't quite statistically linear in its repetition rate.

Also, awesome project. After being rather disappointed by the UI, features and performance of all the commercial PC-based dongle scopes I've seen so far, this is immediately looking really nice.
« Last Edit: December 05, 2020, 08:56:24 pm by JamesLynton »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #187 on: December 06, 2020, 09:20:42 am »
Also - the peak wfm/s rate is one measure of performance, but another is how many intensity-graded levels the display achieves.  To achieve at least 256 levels at a 60 Hz refresh you need a minimum of 256 * 60 = 15,360, i.e. ~15.4 kwfm/s, but you might also want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most, and a portion of those bits is lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC, which is supplied with gamma reference levels. The DACs for each pixel column interpolate (linearly, but it's a close approximation) between these reference levels.  The resulting effect is that all 256 codes have a useful and distinct output, and the light output is linear with respect to the intended luminance.   This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.

Secondly, you can't see very dark colors, so the intensity has to start somewhere around halfway. So at the hardware level you are limited to roughly 100 levels. And then there is the limit of what the human eye can distinguish.
If you have 32 or maybe 64 different levels you have more than enough to draw a meaningful picture. However, intensity grading is just mimicking analog oscilloscope behaviour; it doesn't add much in terms of usefulness. Color grading or reverse intensity (see my RTM3000 review) are far more useful for looking at a signal than 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative; with just 8 levels there will be a clear binning effect showing how often a trace hits a spot.

Many people would say the human eye can distinguish at least 10 bits of intensity resolution, possibly more.   Obviously that's not all that useful on an 8-bit panel, but it is a bit of a fallacy to say the human eye is the limit here.   It is true that totally dark colours are not as useful, but this is what the intensity control on most oscilloscopes does - it adjusts the minimum displayed brightness.  It is still probably fair to say at least 200 of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values now saturate.  But the intensity bins need enough depth to store this data for that function to be of any use.

I would agree that colour grading is really useful, perhaps more useful than regular intensity grading, though it depends on the application.  Really what we're looking at here is having enough resolution in the internal buffers to make use of this data, either with simple intensity grading or with arbitrary colour grading. The present ArmWave renderer supports regular intensity grading, inverted, and rainbow/palette modes.

Edit: fixed typo
« Last Edit: December 06, 2020, 10:46:21 am by tom66 »
 
The following users thanked this post: rf-loop

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #188 on: December 06, 2020, 10:58:42 am »
Also - the peak wfm/s rate is one measure of performance, but another is how many intensity-graded levels the display achieves.  To achieve at least 256 levels at a 60 Hz refresh you need a minimum of 256 * 60 = 15,360, i.e. ~15.4 kwfm/s, but you might also want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most, and a portion of those bits is lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC, which is supplied with gamma reference levels. The DACs for each pixel column interpolate (linearly, but it's a close approximation) between these reference levels.  The resulting effect is that all 256 codes have a useful and distinct output, and the light output is linear with respect to the intended luminance.   This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.
Well, I'm doing a lot with TFT panels in all shapes and sizes but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller which does gamma correction. But what goes into the panel is still 8 bit.

And there is also a difference between being able to see different shades and how many different shades you can actually interpret. Sometimes less is more. If you look at the Agilent 54835A for example you'll see that the color grading uses binning. Every color is assigned a specific bin which says how many waveforms have been captured inside that bin. IMHO you have to be very careful not to hunt for eye candy (or worse: analog scope emulation, which hides part of the signal by definition) but think about ways to show a signal on screen in a way which provides meaningful information about the signal.
« Last Edit: December 06, 2020, 11:48:44 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #189 on: December 06, 2020, 02:28:50 pm »
Many inexpensive panels generate the gamma voltages internally in the source drivers to avoid the cost of external references, but this absolutely is a thing:

https://www.ti.com/lit/ds/symlink/buf12800.pdf  as an example.  When I was a student I made fair bank replacing AS15-F gamma reference ICs on LCD T-con boards for LCD televisions. They would commonly fail, causing a badly distorted or inverted image.

The voltages steer the output codes of the DAC.  The panel data is indeed 8-bit and the DAC has only 256 valid output codes, but the output is nonlinear.  An additional signal from the T-con flips the output from the 7.5V - 15V range to 7.5V - 0V for pixel inversion (maintaining zero net bias). This is common amongst most LCD panels, although there are some older/cheaper panels that use 6-bit DACs with looser gamma correction and dithering.

You could do an experiment:  put a 256-level gradient on a display of your choice; provided it is wide enough, you should be able to see distinct stair-stepped bands.  If the gradient has nonlinear steps, then the gamma correction is done before the DACs.  If it has linear bands, then no gamma correction is applied in the digital domain.
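
If anyone wants to try it, a quick way to generate such a test pattern (a throwaway script; the filename and image size are arbitrary) is to write a plain 8-bit PGM with a horizontal ramp:

import numpy as np

width, height = 1024, 256
ramp = (np.arange(width) * 256 // width).astype(np.uint8)   # 0..255, 4 pixels per code
image = np.tile(ramp, (height, 1))

with open("gradient_256.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (width, height))   # binary greyscale PGM header
    image.tofile(f)
# View it full screen: whether the visible bands step evenly or not hints at
# where the gamma correction is applied (digitally, or in the column DACs).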
« Last Edit: December 06, 2020, 02:33:16 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #190 on: December 06, 2020, 03:11:53 pm »
It is still probably fair to say at least 200 of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values now saturate.
The problem with this approach is that you are basically displaying something which is not quantifiable. When testing oscilloscopes people often use AM-modulated signals to create a pretty picture. But that picture doesn't say anything about the signal. OTOH if you use fixed-level binning then the number of visible levels actually says something about the AM modulation depth.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Frequent Contributor
  • **
  • Posts: 518
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #191 on: December 06, 2020, 04:05:47 pm »
Quote
Well, I'm doing a lot with TFT panels in all shapes and sizes but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller which does gamma correction. But what goes into the panel is still 8 bit.

If a panel takes an input signal with a color depth of only 8 bits per channel, then it is necessary that the signal is gamma-encoded (i.e. not linear), otherwise a single quantization step would be clearly visible in dark regions and one could not display smooth gradients. Human vision is not linear. Uniform luminance spacing is not perceptually uniform either; human vision can distinguish smaller luminance steps in dark regions than in bright regions.

Regarding discernible shades of gray: human vision can adapt to several decades of luminance (e.g. outdoor bright sunlight vs. indoor candlelight), but at a particular adaptation state it cannot distinguish more than about 100 gray levels (with perceptually uniform spacing from black to white). If I wanted to be able to distinguish adjacent bins clearly, I would not use more than 32 bins.

Quote
This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

The aim is that the display outputs linear luminance. So the LCD column driver needs to undo the gamma encoding of the input signal, and additionally compensate for any non-linearity of the LC cell's voltage-to-optical-transmittance transfer function.

Instead of using a non-linear DAC, this could also be done with a LUT in the digital domain. The DAC could then be linear, but it would need to have more bits (and most of its levels would be unused).
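
As an example of such a LUT (a sketch using the sRGB transfer function, not any particular panel's curve), decoding 8-bit gamma-encoded codes into a hypothetical 12-bit linear DAC shows how sparsely the linear codes get used at the dark end:

import numpy as np

def srgb_decode_lut(out_bits=12):
    # Build an 8-bit gamma-encoded -> linear LUT (sRGB transfer as an example).
    codes = np.arange(256) / 255.0
    linear = np.where(codes <= 0.04045,
                      codes / 12.92,
                      ((codes + 0.055) / 1.055) ** 2.4)
    return np.round(linear * ((1 << out_bits) - 1)).astype(np.uint16)

lut = srgb_decode_lut()
print(lut[:4])       # the darkest input codes map to only a handful of linear levels
print(lut[252:])     # while the bright end jumps by >30 linear codes per input step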
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #192 on: December 06, 2020, 05:50:45 pm »
FWIW In grad school I created  256 step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780.  The steps were not visible.

There is a lot of folk lore about the sensitivity of the human eye which may be readily disproved by simple experiment.  While the eye is very sensitive to color,  that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #193 on: December 06, 2020, 06:52:22 pm »
FWIW In grad school I created  256 step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780.  The steps were not visible.

There is a lot of folk lore about the sensitivity of the human eye which may be readily disproved by simple experiment.  While the eye is very sensitive to color,  that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg

You are very correct on this. That is why all kinds of colour-grading displays were invented.

Nico is right: if you're displaying pixel hit frequency/distribution and encoding it as pixel intensity, all values have to be compressed into a range from a minimum that is clearly visible (but obviously dimmed) up to full pixel intensity. So you get an obvious nothing, something clearly visible that is meant to represent a single repetition, and maximum brightness for pixels that get lit up all the time.  You cannot start from 0, and the mapping probably has to be nonlinear. What people are used to is simply the response characteristic of phosphor, which compresses at the high end: once the dot is bright enough it won't get any brighter, it just starts to bloom.

I also agree with Nico about colour grading. I cannot comprehend why more manufacturers don't offer reverse grading (highlighting rare events rather than frequent ones - you want to see the outliers).

Regards,
 
The following users thanked this post: tom66

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #194 on: December 06, 2020, 09:20:16 pm »
Reverse grading seemed obvious to me.  Hence the present code supports it although it's not exposed on the UI.

The rendering engine presently has a 16-bit accumulator, as 8-bit was insufficient without saturating arithmetic. In reality I think something like a 12-bit buffer would be sufficient.    The resulting 16-bit values are taken through a palette lookup to produce the final pixel value, so inverting the palette is pretty simple - just flip the table (excluding the zeroth entry so you don't write pixels everywhere.)
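
Roughly what that looks like, as a numpy sketch of the idea rather than the ArmWave internals (the 12-bit palette size and grey ramp are just assumptions):

import numpy as np

ACC_BITS = 12
PAL_SIZE = 1 << ACC_BITS

# Simple grey ramp; entry 0 stays at the background colour (black here).
palette = np.linspace(0, 255, PAL_SIZE).astype(np.uint8)

def invert_palette(pal):
    # Reverse the grading so rare hits are brightest, but keep index 0 as background.
    inv = pal.copy()
    inv[1:] = pal[:0:-1]
    return inv

def render(accum, pal):
    # Clamp counts into the palette range, then apply the lookup table.
    idx = np.minimum(accum, PAL_SIZE - 1)
    return pal[idx]

acc = np.zeros((256, 1024), dtype=np.uint16)
acc[100, :] = 4000        # a frequently-hit row
acc[40, ::13] = 1         # rare single hits
normal = render(acc, palette)
reverse = render(acc, invert_palette(palette))
print(normal[100, 0], reverse[40, 0])   # bright in both cases, for opposite reasons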

It really depends on what you want to achieve from intensity grading.  I think there's a mix of uses:

- Some users just want more detail than a binary 'hit' or 'not hit', and to see the approximate intensity of a pixel indicating the energy in that area (I suspect this is the primary category of user.)  These users expect their DSO to behave roughly the same as every other DSO, although obviously there are opportunities to improve this behaviour.

- Some users are doing things like eye-diagram or jitter analysis, where setting a threshold so you can say '<10% of events hit this bin' could be useful.  In this case I suspect the users benefit from either reverse intensity grading or rainbow/custom palette grading.

- Others are just expecting a DSO to behave like an analog scope, especially so when in XY mode.  I suspect this is a relatively small category of user, and this user drives the inclusion of 'variable persistence' modes in most modern oscilloscopes.

« Last Edit: December 06, 2020, 09:25:07 pm by tom66 »
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #195 on: December 06, 2020, 09:29:59 pm »
One other thing:-

The present prototype, when in 'Auto' memory depth (currently the only memory depth exposed to the user, which otherwise behaves similarly to the Rigol DS1000Z 'Auto' function), uses all available RAM as a history buffer. With 256MB of RAM and at 50ns/div (~23k wfm/s, 610 pts per waveform), this gives approximately 17 seconds of history buffer recorded in real time.  In my mind this is far more useful than any infinite or variable persistence feature, and as far as I can tell only Siglent expose this in normal use - which led to Dave complaining about it because it was turned on by default.    As far as I can see there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, the instrument will have less record time, but it should always have the amount of memory available that the user requests.

Most people know this function as segmented memory.  The only difference is that it's a continuously active segmented memory function which adapts to the current settings to make the most of the memory available.  It avoids the headache of pressing the 'STOP' button and missing the trigger by a few milliseconds.

This is one time the user might want to turn down the waveform rate: reducing the update rate to 1k wfm/s, for example, would increase the record time to over 6 minutes.  Giving the user that trade-off is valuable (it is pretty much always available on scopes with segmented memory).  Depending on the future platform choice, I expect a later version of the scope to support at least 1GB of RAM, which would give around 900 Mpts of usable waveform memory.  So at 23k wfm/s the instrument could record ~1 minute of waveform history, and the user could select any one of those timestamped frames or analyse any single given capture.
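
For anyone checking the arithmetic, here is the back-of-envelope calculation (the ~240 Mpt and ~900 Mpt usable figures are the assumptions):

def history_seconds(usable_pts, pts_per_wfm, wfm_per_s):
    # Seconds of continuously recorded history the buffer can hold.
    return usable_pts / (pts_per_wfm * wfm_per_s)

print(history_seconds(240e6, 610, 23000))        # ~17 s  (256MB, ~240 Mpt usable)
print(history_seconds(240e6, 610, 1000) / 60)    # ~6.5 minutes at 1k wfm/s
print(history_seconds(900e6, 610, 23000))        # ~64 s  (1GB, ~900 Mpt usable)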
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #196 on: December 06, 2020, 09:47:36 pm »
One other thing:-

The present prototype, when in 'Auto' memory depth (currently the only memory depth exposed to the user, which otherwise behaves similarly to the Rigol DS1000Z 'Auto' function), uses all available RAM as a history buffer. With 256MB of RAM and at 50ns/div (~23k wfm/s, 610 pts per waveform), this gives approximately 17 seconds of history buffer recorded in real time.  In my mind this is far more useful than any infinite or variable persistence feature, and as far as I can tell only Siglent expose this in normal use - which led to Dave complaining about it because it was turned on by default.    As far as I can see there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, the instrument will have less record time, but it should always have the amount of memory available that the user requests.
There are a few remarks to be made here:

1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.

2) Having a history buffer running in the background is standard on Yokogawa and R&S oscilloscopes as well. The memory left over after the user's memory depth selection (which can be set to auto, meaning just enough memory to fill the screen) is used as a history buffer.

3) Segmented recording is close to history mode, but the user selects a specific record length and number of records instead of the oscilloscope doing this automatically. The distinction is between the oscilloscope determining something automatically versus the user being very specific in order to tailor the oscilloscope configuration to a particular measurement. Having a history buffer with 100k segments while the user is only interested in 5 is counterproductive.

4) Variable and infinite persistence are required on a DSO. I regularly use infinite persistence for tests which take hours to weeks. I just want to see the extents of where a signal goes (and it doesn't need crazy high update rates).

Another nice feature to have is detailed mask testing. Again it seems oscilloscope makers aim for high update speeds, but in doing so they throw the baby out with the bathwater. To give an example: I have a product which outputs a low- and a high-frequency signal over several seconds. A 10Mpts oscilloscope can sample this signal with enough detail; however, it turns out that mask testing seems to use peak-detect and decimates the data to a couple of hundred points. It would be nice to be able to compare traces with a length of 10Mpts (or more). It doesn't matter if it is slow; it will always be faster and more accurate than checking a signal visually.
« Last Edit: December 06, 2020, 10:46:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #197 on: December 06, 2020, 11:16:00 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).

I think the best way to explain it is that LeCroy is sample-rate defined: the sample buffer length is calculated in time (not samples) and is the same as the displayed timebase, up to a defined maximum.
That means it will keep the sample rate and retrigger rate as high as possible at all times until it reaches the maximum memory allowed, and only then will it start dropping the sample rate.

That is a very good strategy for a general-purpose scope because it maximises the retrigger rate and captures only the data needed for the time span we are interested in. It is simple to think about from the operator's standpoint: I have 120ns of data, taken at 5GS/s, so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over the sample buffer size and sample rate...
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #198 on: December 06, 2020, 11:29:30 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).
Let's keep it at me not being able to explain it.  8) I know perfectly well how it works and why it is bad in which situation. This is based on my own hands-on experience; I have owned a Siglent oscilloscope in the past and also own a LeCroy oscilloscope (I don't think there is any DSO brand left whose scopes I have not used or owned myself; yes, including Picoscope).
« Last Edit: December 06, 2020, 11:32:43 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Online tautech

  • Super Contributor
  • ***
  • Posts: 22232
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #199 on: December 06, 2020, 11:34:28 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).

I think the best way to explain it is that LeCroy is sample-rate defined: the sample buffer length is calculated in time (not samples) and is the same as the displayed timebase, up to a defined maximum.
That means it will keep the sample rate and retrigger rate as high as possible at all times until it reaches the maximum memory allowed, and only then will it start dropping the sample rate.

That is a very good strategy for a general-purpose scope because it maximises the retrigger rate and captures only the data needed for the time span we are interested in. It is simple to think about from the operator's standpoint: I have 120ns of data, taken at 5GS/s, so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over the sample buffer size and sample rate...
Maybe, just maybe, he will one day understand why these different strategies are used - but maybe not, as wfm/s has never been of high concern for him... no guesses as to why.  ::)

Three choices: an ASIC, an ADC allowing for large captures, or an ADC with optimised wfm/s... pick your poison and understand its limitations.
« Last Edit: December 07, 2020, 12:12:40 am by tautech »
Avid Rabid Hobbyist
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #200 on: December 07, 2020, 12:02:11 am »
I never said high trigger rates (waveforms/s) are never necessary. Sometimes they are, and in that case I simply select a shorter memory length to speed up the acquisition process. However, aiming for insanely high waveforms/s quickly lands you in an area of diminishing returns. The oscilloscope manufacturers tend to claim a high waveform update rate makes it more likely to catch glitches, but in the end they never get to 100% due to blind time (which can be avoided, BTW, at the cost of ending up with a weirdly drawn signal). However, measuring is about 100% certainty, so if you want to capture a glitch with 100% certainty during a given interval the only way out is deep memory (+analysis) or triggering (combined with infinite persistence and/or saving a screendump).
« Last Edit: December 07, 2020, 12:25:33 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #201 on: December 07, 2020, 12:29:50 pm »
There's no reason you can't take the Rigol approach (likely the same on other instruments as well) and give the user a choice of 'Auto' vs a memory depth selection.

In 'Auto' the scope always optimises for waveform rate, which IMO is the right default optimisation (I think this is what most people expect unless they are using the scope for a special application.)

If you select, say, 120k points but are on a short timebase, then the update rate drops appropriately and the available capture exceeds the visible window.  In fact, all the timebase control does in this instance is inform the oscilloscope which 'auto' mode it should use and how many points it should apply.  In essence, there is no actual difference between a capture of 120k points at, say, 10us/div and one at 50ns/div: they both capture the same data.   It is just a matter of how it is displayed to the user, and the timebase control becomes more of a horizontal zoom control.

In all modes, if there is free waveform RAM, use that RAM to store a history buffer.  120k points gives around 2000 history waveforms, for instance.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 3486
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #202 on: December 07, 2020, 12:51:57 pm »
Trust me, nobody cares about waveforms per second!

+1
If I want high waveforms per second, I don't go searching for a device in the open-source jungle.
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3933
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #203 on: December 07, 2020, 02:58:24 pm »
I agree that 1 Mwfm/s is not needed, but in normal interactive mode it should have enough for a fluid display. From what was said previously that is already OK... 20-25 kwfm/s is more than enough for interactive work.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #204 on: December 07, 2020, 05:30:05 pm »
Do note that the present performance is dot mode only.  Rigol achieve 50 kwfm/s in dot mode on the DS1000Z series; the headline 25 kwfm/s figure is given in vector mode (a refreshing change that they don't quote the absolute fastest, unrealistic figure!)

I expect vector mode will be a bit slower; it depends on how many vectors need to be drawn.  I've an optimal algorithm in mind, but it's limited to 2 pixels/cycle due to the ARM ALU width.  Maybe with NEON I can do more (a 64-bit add covering 4 or 8 terms if using 16- or 8-bit saturating arithmetic), but it would require carefully hand-coded assembly.  That's if I decide to further optimise ArmWave, which, as I've indicated here, I'm not certain is the best route yet.
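
For reference, the inner operation being discussed is just a saturating per-bin add; a numpy sketch of the scalar-equivalent behaviour (the NEON route would do 4 or 8 of these lanes per instruction):

import numpy as np

def accumulate_saturating(accum, hits, max_count=0xFFFF):
    # Widen before adding so the sum cannot overflow, then clamp instead of wrapping.
    wide = accum.astype(np.uint32) + hits
    np.minimum(wide, max_count, out=wide)
    return wide.astype(np.uint16)

acc = np.full((256, 1024), 0xFFF0, dtype=np.uint16)
hits = np.ones_like(acc)
for _ in range(32):                 # push well past full scale
    acc = accumulate_saturating(acc, hits)
print(acc.max())                    # 65535 - saturated, not wrapped back to a small value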
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #205 on: December 07, 2020, 07:06:38 pm »
if you want to capture a glitch with 100% certainty during a given interval the only way out is deep memory (+analysis) or triggering (combined with infinite persistence and/or saving a screendump).
It's much easier to compare a capture against bounds relative to a reference signal on the fly than doing digital persistence on the fly. Linear memory access vs. de facto random access.
 

Offline tv84

  • Super Contributor
  • ***
  • Posts: 2377
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #206 on: December 07, 2020, 07:24:22 pm »
It's much easier to compare a capture against bounds relative to a reference signal on the fly than doing digital persistence on the fly.

Can you elaborate on what you mean by "digital persistence on the fly"?
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #207 on: December 07, 2020, 08:25:01 pm »
I feel persistence will be quite easy to implement.  For infinite persistence, pixels are only updated if the new value is greater than the previous one - this can be done at the final framebuffer stage, so there are relatively few pixel values to compare.  For variable persistence a moving average filter could be used, although that would have a non-linear decay function (not sure if this is a problem.) Alternatively, N buffers (~1024x256x16) would need to be stored and summed together, although this would get computationally very expensive for longer persistence periods.

It seems that Tek use an interesting approach for variable persistence on their newer scopes. They apply a random noise function to the previous buffer, which models the approximate desired persistence.  The disadvantage of this method is that the trace constantly looks noisy.
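
A sketch of both behaviours on the intensity buffer (illustrative only; the frame rate and time constant are made-up knobs): infinite persistence is an element-wise max, and one form of variable persistence is an exponential decay applied each frame:

import numpy as np

def infinite_persist(persist, new_frame):
    # Keep the brightest value ever seen at each pixel.
    return np.maximum(persist, new_frame)

def variable_persist(persist, new_frame, frame_period=1/60, time_const=0.5):
    # Decay the stored intensities exponentially, refreshed by each new frame.
    decay = np.exp(-frame_period / time_const)
    faded = persist.astype(np.float32) * decay
    return np.maximum(faded, new_frame).astype(np.uint16)

frame = np.zeros((256, 1024), dtype=np.uint16)
frame[128, :] = 4095
p = variable_persist(np.zeros_like(frame), frame)
for _ in range(60):                          # one second with no further hits
    p = variable_persist(p, np.zeros_like(frame))
print(int(p[128, 0]))                        # decayed to roughly e^-2 of full scale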
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #208 on: December 08, 2020, 12:07:42 am »
Can you elaborate on what you mean by "digital persistence on the fly"?
Trying to update the bucket counts for persistence at the full sample rate is pretty much impossible; determining whether a sample is within a given bound of a reference signal is fairly trivial.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #209 on: December 08, 2020, 08:44:11 pm »
Trying to update the bucket counts for persistence at the full sample rate is pretty much impossible; determining whether a sample is within a given bound of a reference signal is fairly trivial.

What do you mean by this?
Testing every sample against a reference signal is still fairly expensive.

Mask testing after the waveform is captured is relatively easy and could be done in the rendering engine.  The mask could be defined by some percentage of the signal, e.g. the bounds containing 99% of all samples, which could be gathered after, say, ~30 seconds of persistence data have been collected.
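
A sketch of that flow (the percentile-based mask and guard band are assumptions for the example, not any scope's actual algorithm): build per-sample-index bounds from accumulated captures, then a capture passes if every sample stays inside them:

import numpy as np

def build_mask(captures, coverage=0.99, guard=2):
    # Per-sample-index bounds containing `coverage` of the observed data,
    # with a few extra codes of guard band.
    lo_q = (1.0 - coverage) / 2 * 100
    lower = np.percentile(captures, lo_q, axis=0) - guard
    upper = np.percentile(captures, 100 - lo_q, axis=0) + guard
    return lower, upper

def mask_test(capture, lower, upper):
    # Indices of samples that violate the mask; an empty result means 'pass'.
    return np.flatnonzero((capture < lower) | (capture > upper))

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 1000)
reference = 127 + 100 * np.sin(t)
captures = reference + rng.normal(0, 2, size=(500, t.size))   # the 'persistence' data
lower, upper = build_mask(captures)

glitchy = captures[0].copy()
glitchy[400] += 30                                  # inject a single outlier
print(mask_test(captures[1], lower, upper).size)    # 0, or very nearly
print(mask_test(glitchy, lower, upper))             # flags index 400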
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #210 on: December 09, 2020, 12:53:41 am »
Testing every sample against a reference signal is still fairly expensive.
Instead of linearly storing a byte per sample, you also need to retrieve two bytes for the upper and lower bounds and do two comparisons. It's fairly expensive, but not unreasonably expensive like it gets for digital phosphor.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #211 on: December 09, 2020, 07:20:10 pm »
What would the upper and lower bounds here be?  Surely the lower bound is always going to be zero?  You could store the peak min/max value for each horizontal pixel in the post-processing stage, but I'm not sure what value this would have?
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #212 on: December 10, 2020, 04:06:02 am »
They are part of the mask derived from a reference signal for pass/fail testing. The mask is computed from an area around each reference sample, so you can't really determine it on the fly from the reference signal alone; you need the two values per sample to compare against.
« Last Edit: December 10, 2020, 04:09:09 am by Marco »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #213 on: December 11, 2020, 02:05:16 am »
I have proposed computing statistics so as to be able to trigger on "trace outside x.x sigma bound", and even computing histograms. This is not a "start of sweep" trigger, but a data-event trigger.  I've given careful thought to the resource requirements and it seems quite tractable to me for an Ultrascale implementation.
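
As a purely illustrative sketch of the statistics side (numpy here, nothing like the eventual FPGA datapath): keep a running per-sample mean and variance across sweeps, then flag any sweep that leaves the k-sigma band:

import numpy as np

class SigmaBound:
    # Running per-sample-index mean/variance across sweeps (Welford's method).
    def __init__(self, n_samples):
        self.n = 0
        self.mean = np.zeros(n_samples)
        self.m2 = np.zeros(n_samples)

    def update(self, sweep):
        self.n += 1
        delta = sweep - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sweep - self.mean)

    def violations(self, sweep, k=4.0):
        # Indices where the sweep is more than k sigma from the running mean.
        sigma = np.sqrt(self.m2 / max(self.n - 1, 1))
        return np.flatnonzero(np.abs(sweep - self.mean) > k * sigma)

rng = np.random.default_rng(1)
stats = SigmaBound(1000)
for _ in range(200):                        # learn the 'normal' behaviour
    stats.update(128 + rng.normal(0, 3, 1000))

bad = 128 + rng.normal(0, 3, 1000)
bad[123] += 40                              # one-sample excursion
print(stats.violations(bad))                # includes index 123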

It's important to distinguish between things which must be done in real time and things which simply need to appear to be done in real time.  Most of what a DSO does does not need to be done in hard real time.  A screen refresh delay is of no concern.  Trigger point alignment, AFE correction, anti-alias filtering, downsampling and a few other things must be done in hard real time, but once the data are in the format needed for the selected data view, the time constraints become quite relaxed.

I have the view that a DSO should do everything it is possible to do with the resources available.

My primary concern now is the AFE input filter.  It should be a high order Bessel-Thomson filter to provide accurate waveform shape.  I've got every reference I can find, but unfortunately, the maximally flat phase gets skimpy treatment and I've still not figured out how to analyze and design one from first principles.  I can do a design by hand or with software, but I can't write the derivation on a whiteboard.  More work required.
I'd very much like to see threads discussing how to time-synchronise waveforms, implement advanced triggers, do signal processing operations (e.g. FFT), etc.

I keep reading a lot of "you can't do this" and "you have to do that", but precious little "this is how you implement that". It would be nice to have more of the latter and less of the former.

Have Fun!
Reg
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #214 on: December 11, 2020, 08:47:16 am »
Indeed.  That was the key realisation for this project, that most of the work can be done 'after the fact',  once  you have captured the data.  Provided you have a sufficiently large buffer and data rate between your capture engine and display engine you can do quite a lot with non-realtime processors.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #215 on: December 11, 2020, 08:59:30 am »
I'm working on an AFE filter. Right now I've arrived at a 5th-order Bessel with -3dB at 200MHz. Assuming a sample rate of 500MS/s it could be a bit steeper (higher order), but then the part values get unrealistic. But there will be a 1st-order roll-off as well, so the -3dB point might need some further tweaking. I think other oscilloscopes use steeper filters at the cost of introducing more phase shift.

I've also recalculated the attenuator part of the schematic I posted earlier. It seems quite useful and ticks all the boxes (including having a constant capacitance towards the probe); better than I remember.
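
For anyone who wants to play with the trade-off, scipy will design the analogue prototype directly (just a sketch to evaluate the response; component synthesis is a separate job, and the 500MS/s figure is the assumption above):

import numpy as np
from scipy import signal

fc = 200e6     # -3 dB corner under discussion
fs = 500e6     # assumed sample rate
b, a = signal.bessel(5, 2 * np.pi * fc, btype='low', analog=True, norm='mag')

# Magnitude response at the corner, at Nyquist, and a little beyond it.
f_check = np.array([fc, fs / 2, 0.6 * fs])
w, h = signal.freqs(b, a, worN=2 * np.pi * f_check)
for f, mag in zip(f_check, np.abs(h)):
    print("%5.0f MHz: %6.1f dB" % (f / 1e6, 20 * np.log10(mag)))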
« Last Edit: December 11, 2020, 09:04:49 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #216 on: December 13, 2020, 05:07:09 pm »
I'm working on an AFE filter. Right now I've arrived at a 5th-order Bessel with -3dB at 200MHz. Assuming a sample rate of 500MS/s it could be a bit steeper (higher order), but then the part values get unrealistic. But there will be a 1st-order roll-off as well, so the -3dB point might need some further tweaking. I think other oscilloscopes use steeper filters at the cost of introducing more phase shift.

I've also recalculated the attenuator part of the schematic I posted earlier. It seems quite useful and ticks all the boxes (including having a constant capacitance towards the probe); better than I remember.

The -3 dB point needs to be around 125 MHz to produce a good step response.  At 80% of Nyquist the edge rings badly.  Also there is no way for a 5th order Bessel to prevent significant aliasing.  With a 50% of Nyquist corner, a 5th order filter will only be about -30 dB at Nyquist whereas you need -42 dB for an 8 bit ADC.

An 80% corner,  5th order filter will be about -7.5 dB at Nyquist with the consequence that FFT displays will be hopelessly borked in certain cases.

Reg
 
The following users thanked this post: 2N3055

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #217 on: December 13, 2020, 08:45:37 pm »
First see how it behaves and go from there. As already stated: the Bessel filter won't be the only part limiting the frequency response. Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.
« Last Edit: December 13, 2020, 08:48:32 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3167
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #218 on: December 13, 2020, 09:01:45 pm »
[snip]
 Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.

WTF?  This is so basic I'm speechless!

Edit: To make clear, an 8 bit ADC can digitize a <7 bit signal range.  Hence the -42 dB stated previously.  This is 80 year old mathematics.  If you want to argue with that, I'll just wander off.
« Last Edit: December 13, 2020, 09:07:17 pm by rhb »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21555
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #219 on: December 13, 2020, 09:08:52 pm »
[snip]
 Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.

WTF?  This is so basic I'm speechless!
Just think about it and look at it from a practical point of view. The frequency response continues to roll off beyond the corner, signals consist of harmonics, and at 200MHz you are already over the limit of what can be measured with a standard high-impedance probe.  The probe itself will already cause significant high-frequency attenuation.

There is a ton of information available on this forum about anti-aliasing filters and DSOs. But since this thread is about an open source design you are free to fit whatever filter you like. I will go for what is the standard approach (which is to have a bandwidth of fs/2.5) for now.

In a nutshell:
From an error perspective: 1% is more than 2 bits (2 bits = 12dB). So if the attenuation is 3dB at 0.4fs, 48 - 12 = 36dB at Nyquist (0.5fs) and 48dB at 0.6fs, then the amplitude error due to aliasing is less than 1%. Another issue to factor in is that in order to show the shape of a waveform you will at the very least want to see the first two frequency components (the fundamental and 1st harmonic), and preferably at least three of the harmonic frequencies. For an aliasing error to occur, a harmonic frequency would need to be between 0.5fs and 0.6fs (and be closer to 0.5fs to have the biggest impact). Remember that an oscilloscope is neither a precision instrument nor a data acquisition device, and at the -3dB point the amplitude error is already near 30%!

In the end it is all about compromises: getting the highest bandwidth with the least horrible step response. And there is always the option to include two filters: one with the best step response and one with the highest bandwidth.
« Last Edit: December 13, 2020, 10:40:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4392
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #220 on: December 14, 2020, 05:24:52 pm »
This does raise the question of how to use the 12-bit 640MSa/s mode.  I had considered limiting it to 500MSa/s, as that fits in an even multiple of 16-bit samples with 4 bits unused.   But that makes the 4-channel Nyquist 62.5MHz, and if you have a 12-bit ADC with ~10.5 ENOB after AFE noise, then you need the filter to roll off to -63dB with, say, a -3dB bandwidth of 40-50MHz.  Even enabling the full 640MSa/s still only gives an 80MHz Nyquist, so the practical upper bandwidth limit is still ca. 50MHz.

I don't think that is practical, so 12-bit mode will always have some risk of aliasing if used in 4-channel mode.  Switching filters (other than a simple 20MHz varicap filter) seems impractical, and in any case a single-pole filter driven by a varicap is unlikely to roll off quickly enough to be useful for 12-bit mode.

So what do you do?
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 5250
  • Country: nl