Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas  (Read 70352 times)


Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Questionnaire for those interested in the project.  

I'd appreciate any responses to understand what features are a priority and what I should focus on.
https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/formResponse

All responses will be anonymised, and a summary of the results will be posted here (when sufficient data exists).

Introduction

You may prefer to watch the video I have made: 


Over the past year and a half I have been working on a little hobby project to develop a decent high-performance oscilloscope, with the intention for this to be an open source project.  By 'decent' I mean something that could compete with the likes of the lower-end digital phosphor/intensity-graded scopes, e.g. Rigol DS1000Z, Siglent SDS1104X-E, Keysight DSOX1000, and so on.  In other words: an 8-bit ADC, a 1 GSa/s sampling rate on at least one channel, 200 Mpt of waveform memory, and rendering capable of at least 25,000 waveforms/second.

The project began for a number of reasons. The first was that I wanted to learn and understand more about FPGAs; having only ever blinked an LED on an FPGA dev kit before, implementing an oscilloscope seemed like a validating challenge.  Secondly, I wasn't aware of any high-performance open source oscilloscopes, ones that could be used every day by an engineer on their desk.  I've since become aware of ScopeFun, but this project is a little different: ScopeFun does the data processing on a PC, whereas I intended to create a self-contained instrument with data capture and display in one device.  For the display/user interface I use a Raspberry Pi Compute Module 3.  This is a decent little device, but crucially it has a camera interface port capable of receiving 1080p30 video, which works out to about 2 Gbit/s of raw bandwidth.  While this isn't enough to stream raw samples from an oscilloscope, it's sufficient once you have a trigger criterion and an FPGA in the loop to capture the raw data.

At the heart of the oscilloscope is a Xilinx Zynq 7014S system-on-chip on a custom PCB, connected to 256MB of DDR3 memory clocked at 533MHz.  With the 16-bit memory interface this gives a usable memory bandwidth of ~1.8GB/s.  The Zynq is essentially an ARM Cortex-A9 with an Artix-7 FPGA on the same die, with a number of high-performance memory interfaces between the two.  Crucially, it has a hard on-silicon memory controller, unlike the regular Artix-7, which means you don't use up 20% of the logic area implementing that controller.  The FPGA acquires data using an HMCAD1511 ADC, which is the same ADC used in the Rigol and Siglent budget offerings.  This ADC is inexpensive for its performance grade (~$70) and available from Digi-Key.  A variant, the HMCAD1520, offers 12-bit and 14-bit capability, with 12-bit at 500MSa/s.  The ADC needs a stable 1GHz clock, which is provided in this case by an ADF4351 PLL.
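Sanity-checking the bandwidth figure above with a back-of-envelope sketch (the ~85% controller efficiency is my assumption to illustrate the arithmetic, not a measured number):

```python
# DDR transfers data on both clock edges, so a 533 MHz clock gives
# 1066 MT/s; with a 16-bit (2-byte) bus that sets the theoretical peak.
# Real-world efficiency (refresh, bank turnaround, row misses) is
# assumed here at ~85%, which lands close to the quoted ~1.8 GB/s.

clock_mhz = 533
bus_bytes = 2                                  # 16-bit interface
peak_gbs = clock_mhz * 2 * bus_bytes / 1000    # GB/s; DDR = 2 transfers/clock
usable_gbs = peak_gbs * 0.85                   # assumed controller efficiency

print(f"peak: {peak_gbs:.2f} GB/s, usable: {usable_gbs:.2f} GB/s")
# → peak: 2.13 GB/s, usable: 1.81 GB/s
```
The same arithmetic scales linearly for the dual-channel interface discussed later in the thread.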

Data is captured from the ADC front end and packed into RAM by a custom acquisition engine on the FPGA.  The acquisition engine works with a trigger block, which uses the raw ADC stream to decide when to generate a trigger event and therefore when to start recording post-trigger data.  The oscilloscope supports both pre- and post-trigger capture, each with a configurable size, from just a few pre-trigger samples to the full buffer of memory.  The data is streamed over an AXI-DMA peripheral into blocks defined by software running on the Zynq.  The blocks are streamed out of memory into a custom CSI-2 peripheral, also using a DMA block (with a large scatter-gather list created by the ARM).  The CSI-2 data bus interface is reverse-engineered, from documentation publicly available on the internet and by analysing a slowed-down data bus from an official Pi camera (with a modified PLL), captured on my trusty Rigol DS1000Z.  I have a working HDL and hardware implementation that reliably runs at >1.6Gbit/s, and application software on the Pi then renders the data transmitted over this interface.  Most application software is written in Python on the Pi, with a small amount of C to interface with MMAL and to render the waveforms.  The Zynq software is raw embedded C, running on the bare-metal/standalone platform.  All Zynq software and HDL was developed with the Vivado and Vitis toolchain from Xilinx.
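To illustrate the block streaming described above, here is a hypothetical Python model of building a scatter-gather list. The descriptor layout, function name, and 1 MiB chunk size are invented for illustration; they are not the actual firmware's values:

```python
# Hypothetical sketch of a scatter-gather list like the one the ARM
# builds for the output DMA: a capture buffer is split into
# descriptor-sized chunks, each descriptor carrying a start address
# and a length.  All names and sizes here are illustrative only.

def build_sg_list(buf_addr, buf_len, max_chunk=1 << 20):
    """Split [buf_addr, buf_addr + buf_len) into DMA descriptors."""
    descs = []
    offset = 0
    while offset < buf_len:
        length = min(max_chunk, buf_len - offset)
        descs.append({"addr": buf_addr + offset, "len": length})
        offset += length
    return descs

sg = build_sg_list(0x1000_0000, 5 * (1 << 20) + 1234)  # 5 MiB + a tail
print(len(sg), sg[-1]["len"])
# → 6 1234
```
The real descriptors also carry control and status words, but the chunking logic is the essence of what the software has to set up per waveform.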

Now, caveats:  Only edge triggers (rising/falling/either) are currently supported, and only a 1ch mode is currently implemented for acquisition;  it is mostly a data decimation problem for 2ch and 4ch modes but this has not been implemented for the prototype.  All rendering is done in software presently on the Pi as there were some difficulties keeping a prototype GPU renderer stable.  This rendering task uses 100% of one ARM core on the Pi (there is almost certainly a threading benefit available but that is unimplemented at present due to Python GIL nonsense) but the ideal goal would be to do the rendering on the Pi's GPU or on the FPGA.   A fair bit of the ARM on the Zynq is busy just managing system tasks like setting up AXI DMA transactions for every waveform,  which could probably be sped up if this was done all on the FPGA. 
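For anyone curious what the rendering task involves, intensity-graded display boils down to a 2-D histogram accumulation: each waveform's sample positions bump counters in a screen-sized buffer, and pixels hit by more waveforms are drawn brighter. This NumPy sketch is purely illustrative (the real renderer is the small C component on the Pi mentioned above; sizes here are arbitrary):

```python
import numpy as np

def accumulate(waveforms, width=1024, height=256):
    """Scatter-add each waveform into a screen-sized hit-count buffer."""
    accum = np.zeros((height, width), dtype=np.uint32)
    x = np.arange(width)
    for wfm in waveforms:              # wfm: uint8 samples, one per column
        np.add.at(accum, (wfm, x), 1)  # one hit per column at row = sample value
    return accum

rng = np.random.default_rng(0)
wfms = rng.integers(0, 256, size=(100, 1024), dtype=np.uint8)
img = accumulate(wfms)
print(img.sum())
# → 102400  (100 waveforms x 1024 columns, one hit each)
```
Doing tens of thousands of these scatter-adds per second is exactly the kind of work that maps better onto a GPU or FPGA than onto one ARM core.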

The analog front end for now is just AC coupled.  I have a prototype AFE designed in LTSpice, but I haven't put any proper hardware together yet.

The first custom PCB (the "Minimum Viable Product") was funded by myself and a generous American friend who was interested in the concept.  It cost about £1,500 (~$2,000 USD or 1,700 EUR, approx) to develop in total, including two prototypes (one with a 7014S and one with a 7020;  the 7020 prototype has never been used).  This was helped in part by a manufacturer in Sweden,  SVENSK Elektronikproduktion,  who provided their services at a great price due to the interest in the project (particular thanks to Fredrik H. for arranging this.)  It is a 6 layer board, which presented some difficulty in implementation of the DDR3 memory interface (ideal would be 8-10 layers), but overall results were very positive and the interface functions at 533MHz just fine. 

The first revision of the board worked with only minor alterations required.  I've nothing but good words to say about SVENSK Elektronikproduktion, who helped bring this prototype to fruition very quickly, even with a last-minute change and a minor error on my part that they were able to resolve.  The board was mostly assembled by pick and place, including the Zynq's BGA package and DDR3 memory, with some parts later hand-placed.  I had the first prototypes delivered in late November 2019 and the prototype up and running by early March 2020; the pandemic meant I had a lot more time at home, so development continued at a rapid pace from then onwards.  The plan was to demonstrate the prototype in person at EMFCamp 2020, but for obvious reasons that event was cancelled.


(Prototype above is the unused 7020 variant.)

Results

I have a working, 1GSa/s oscilloscope that can acquire and display >22,000 wfm/s.  There is more work to be done but at this stage the prototype demonstrates the hardware is capable of providing most needs from the acquisition system of a modern digital oscilloscope. 

The attached waveform images show:
1. 5MHz sine wave modulated AM with 10kHz sine wave
2. 5MHz sine wave modulated FM with 10kHz sine wave + 4MHz bias
3. 5MHz positive ramp wave
4. Pseudorandom noise
5. Chirp waveform (~1.83MHz)
6. Sinc pulse

The video also shows a live preview of the instrument in action.

Where next?

Now I'm at a turning point with this project.  I had to change jobs and relocate for personal reasons, so I took a two-month break from the project while starting at my new employer and moving house.  But I'm back to looking at this project, still in my spare time.  And, having reflected a bit ...

A couple of weeks ago the Raspberry Pi CM4 was released.  It's not pin-compatible with the CM3, which is of course expected as the Pi 4 has a PCI Express interface and an additional HDMI port.  It would make sense to migrate this project to the CM4; the faster processor and GPU present an advantage here.  (I have already tested the CSI-2 implementation with a Pi 4 and no major compatibility issues were noted.)

There are also a lot of other things I want to experiment with.  For instance, I want to move to a dual-channel DDR3 memory interface on the Zynq, with 1GB of total memory space available.  This would quadruple the sample memory and more than double the memory bandwidth (>3.8GB/s usable), which is beneficial when it comes to trying to do some level of rendering on the FPGA.  It's worth looking at the PCI-e interface on the CM4 for data transfer, but CSI-2 still offers some advantages: namely, it wouldn't be competing for bandwidth with the USB 3.0 or Ethernet peripherals if those are used in a scope product.  PCI-e would also require a higher grade of Zynq with a hard PCI-e core, or a slower HDL implementation of PCI-e, which might present other difficulties.

I'm also considering completely ripping up the Pi CM4 concept and going for a powerful SoC+FPGA like a Zynq UltraScale+, but that would be a considerably more expensive part to use, and would perhaps change the goal of this project from developing an inexpensive open-source oscilloscope to developing a higher-performance oscilloscope platform for enthusiasts.  The cheapest UltraScale+ part is around $250 USD but features an on-device dual ARM Cortex-A53 complex (a considerable upgrade over the ARM Cortex-A9 in the Zynq 7014S), a Mali-400 GPU and DDR4 memory controllers; this would allow for e.g. an oscilloscope capture engine with gigabytes of sample memory (up to 32GB in the largest parts!), and we'd no longer be restricted to running over a limited-bandwidth camera interface, which would improve performance considerably.

I think there's a great deal of capability here when it comes to supporting modularity.  What I'd like to offer is something along the lines of the original Tek mainframes, where you can swap an acquisition module in and out to change the function of the whole device.  A small EEPROM would identify the right software package and bitstream to load and you can convert your oscilloscope into e.g. a small VNA,  spectrum analyser,  a CAN/OBDII module with analog channels for automotive work,  etc.  on the fly.
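To make the EEPROM identification idea concrete, here is a hypothetical sketch of what the boot-time module discovery could look like. The field layout, magic value, and module-type codes are all invented for illustration, since no format has been defined yet:

```python
# Hypothetical module-ID EEPROM layout: a magic word, a module-type
# code, and a hardware revision, which the main unit maps to the
# bitstream and software package to load.  Everything here is an
# invented example, not a defined format.
import struct

MODULE_TYPES = {1: "4ch-afe", 2: "2ch-afe", 3: "vna", 4: "can-obd2"}

def parse_module_eeprom(blob: bytes):
    magic, mtype, hw_rev = struct.unpack_from("<IHH", blob, 0)
    if magic != 0x4F53_4D4F:          # arbitrary "module present" marker
        raise ValueError("no module or blank EEPROM")
    name = MODULE_TYPES.get(mtype, "unknown")
    return {"type": name, "hw_rev": hw_rev,
            "bitstream": f"{name}-r{hw_rev}.bit"}   # file to load

blob = struct.pack("<IHH", 0x4F53_4D4F, 3, 2)       # a rev-2 VNA module
print(parse_module_eeprom(blob))
# → {'type': 'vna', 'hw_rev': 2, 'bitstream': 'vna-r2.bit'}
```
A few bytes of ID data like this is enough for the instrument to reconfigure itself on the fly when a module is swapped.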

The end goal is a handheld, mains and/or battery-powered oscilloscope, with a capacitive 1280x800 touchscreen (optional HDMI output), 4 channels at 100MHz bandwidth and 1GSa/s multiplexed,  minimum 500MSa of acquisition memory and at least 30,000 waveforms/second display rate (with a goal of 100kwaves/sec rendered and 200kwaves/sec captured for segmented memory modes.)   I also intend to offer a two channel arbitrary signal generator output on the product, utilising the same FPGA as for acquisition.  The product is intended to be open-source in its entirety, including the FPGA design and schematics,  firmware on the processor and application software on the main processor.  I'll publish details on these in short order, provided there's sufficient interest.

Full disclosure - I have some commercial interest in the project.  It started as just a hobby project, but I've done everything through my personal contracting company, and have been in discussions with a few individuals and companies regarding possible commercialisation.  No decisions have been made yet, and I intend for the project to be FOSHW regardless of the commercial aspects.

My questions for everyone here are:
- Does a project like this interest you?   If so, why?   If not, why not?

- What would you like to see from a Mk2 development - if anything:  a more expensive oscilloscope to compete with e.g. the 2000-series of many manufacturers that aims more towards the professional engineer,  or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about $500USD difference in pricing.  An UltraScale part makes this a >$800USD product - which almost certainly changes the marketability.)

- Would you consider contributing to the development of an oscilloscope?  It is a big project for just one guy to complete.  There is DSP, trigger engines, an AFE, modules, casing design and many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed, and there is documentation to be written.  I'm envisioning the capability to add modules to the software, and the hardware interfaces will be documented so 3rd-party modules could be developed and used.

- I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer term name.  I'll welcome any suggestions.

Offline YetAnotherTechie

  • Regular Contributor
  • *
  • Posts: 222
  • Country: pt
I vote for this to be the most interesting post of the year, Great work!!  :-+
 
The following users thanked this post: egonotto, Trader

Online artag

  • Super Contributor
  • ***
  • Posts: 1070
  • Country: gb
I like the idea, mostly because I REALLY like the idea of an open-source scope that's got acceptable performance. Something I could add a feature to when I want it.

I think you've made fabulous progress, and I think you very much need to watch out for the upcoming problems :

It's very easy to get lost in a maze of processor directions - stretch too far and your completion date disappears over the horizon; set your targets too low and you end up with something that's obsolete before it's finished.

The same goes for expansion and software plans - there's a temptation to do everything, resulting in plans that never get finalised, or an infrastructure that's too big for the job.

I don't say this negatively, to put you off - I put these points forward as problems that need a solution.

I'm interested in helping if I can.
 
The following users thanked this post: tom66, james_s

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3348
  • Country: ua
We need the help of Chinese manufacturers to produce and sell cheap hardware for a project like this :)

It would also be nice to see an Altera Cyclone version.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
It's an incredibly impressive project; with that kind of output from just one person in their personal time, it would not surprise me if you got a few job offers from T&M companies. It's very interesting from the standpoint of seeing in detail how a modern DSO works, although I think you will be really hard pressed to compete with companies like Rigol and Siglent. I personally would be very interested if the alternative was spending $10k on a Tektronix, Keysight or other A-list brand, but the better-known Chinese companies deliver an incredible amount of bang for the buck.

Building something this complex in small quantities is expensive, and it's probably too complex for all but the most hardcore DIY types to assemble themselves. On top of that, the enclosure is a very difficult part, at least in my own experience. Making a nice enclosure and front panel approaching the quality of even a low-end commercial product is very difficult. Not trying to rain on your parade though; this looks very cool and I'll be watching with interest to see how it pans out.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Thanks for the comments.

The hardware isn't all that expensive - the current BOM works out at just under US$200 in 500-off quantity.  That means it would be feasible to sell this at US$500-600, which, although a little more expensive than the cheapest Rigol/Siglent offerings, may be more attractive given the open source aspect.  Adding the UltraScale part and removing the Pi adjusts the BOM by +US$150, which starts pushing the device into the US$800-$1000 range.  Perhaps it would be worth discussing with Xilinx - I know they give discounted FPGAs to dev-kit manufacturers - if they are interested in this project they may consider a discounted price.  The Zynq is the most expensive single part on the board.  But so far all pricing is based on Digi-Key strike prices with no discounts assumed.

The idea would be to sell an instrument that has only a touchscreen and 4 probe inputs.  The mechanical design of a keypad, knobs, buttons etc and injection moulded case would be significant, and the tooling is not cheap, so an extruded aluminum case would be used.  Of course a touchscreen interface wouldn't be attractive to everyone, so a later development might include an external keypad/knob assembly,  or you could use a small keyboard.  Optionally, the unit could contain a Li-Ion battery and charger, which would allow it to be used away from power for up to 5-6 hours.  (The present power consumption is a little too high for practical battery use, but the Zynq and AFE components are running continuously with no power saving considerations right now.)

There isn't much chance someone could hand-assemble a prototype like this.  The BGA and DDR memory make it all but impossible for even the most enthusiastic members on this forum.  There was a reason that, despite having (in my own words) reasonably decent hand-soldering skills, I went with a manufacturer to build the board.  I did not want gremlins from a BGA ball randomly going open circuit, for instance.  I was very careful in the stencil specification and design to ensure the BGA was not over-pasted.  The 7014S board has been perfectly reliable, all considered, even while the Zynq was running at 75°C+ pre-heatsink.

While I've not had any offers from T&M companies (although I've not asked or offered), I did get my present job as an FPGA/senior engineer with this project as part of the interview process (as Dave says - bring prototypes - they love them!).  There are a couple of T&M companies in the Cambridge area, but I'm not really interested in selling out to anyone; I wanted to develop this project because there is no great open source scope out there yet, and it was a great way to get used to high-speed FPGAs and memory interfaces.  I've never laid out a DDR memory interface before, so it felt incredibly validating that it worked first time.

Regarding Altera parts there would not be much point in using them - the cheapest comparable Altera SoC is double the price of the Zynq and has a slower, older ARM architecture.  The Zynq is a really nice processor!
« Last Edit: November 16, 2020, 08:46:41 am by tom66 »
 
The following users thanked this post: egonotto

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
The idea would be to sell an instrument that has only a touchscreen and 4 probe inputs.  The mechanical design of a keypad, knobs, buttons etc and injection moulded case would be significant, and the tooling is not cheap, so an extruded aluminum case would be used.  Of course a touchscreen interface wouldn't be attractive to everyone, so a later development might include an external keypad/knob assembly,  or you could use a small keyboard.

Have a look at Micsigs. Their UI is really good, much faster/easier than traditional "twisty knob" DSOs.

Note that they now make a model with knobs at the side; I'd bet that's because a lot of people were put off by the idea of a touchscreen-only device.

(Although having owned one for a couple of weeks I can say that any fears are unfounded. It works perfectly)

Optionally, the unit could contain a Li-Ion battery and charger, which would allow it to be used away from power for up to 5-6 hours.

Micsig again...  >:D
« Last Edit: November 16, 2020, 11:01:59 am by Fungus »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
I'm aware of the Micsig device,  I do quite like it.  So this is comparable to a Micsig device but with open source hardware and firmware, plus modular capability - the ability to remove the AFE and replace it with a different module for a different task for example.  Plus considerably better system and acquisition performance.

I'm a fairly avid user of touchscreen devices in general and while I think there is a case for knobs and dials on a scope,  it can be replicated with competent UI design and a capacitive multitouch screen.  The problem with adding knobs and dials onto a portable device is that once you drop it, you risk damage to the encoders and plastics.  A fully touchscreen device with BNCs being the only exposed elements would be more rugged. Of course, you shouldn't drop any test equipment, but once it is in a portable form factor, it WILL get dropped, by someone.
 

Online artag

  • Super Contributor
  • ***
  • Posts: 1070
  • Country: gb
I've always tended to prefer real knobs and dials, especially when comparing PC-based instruments against traditional ones. But we're all getting more used to touchscreens; what they usually lack is a really good, natural usage paradigm. I haven't tried the Micsig devices but have noticed people commenting positively on them.

The WIMP interface is very deeply embedded in us now and tablets don't quite meet it. Some gestures (swipe, pinch) have become familiar but not enough to replace a whole panel.  I think we'll slowly get more used to it, and learn how to make that more natural.

I like the modularity idea, but it's hard to know where to place an interface. The obvious modules are display, acquisition memory and AFE. Linking the memory and display tightly gives fast response to control changes. Linking the memory and AFE gives faster acquisition. There's also some value in using an existing interface for one of those. Maybe USB3 is fast enough, though I think using the camera interface is really cunning. Another processor option - which also has a camera interface and a GPU - is the NVidia Jetson.

My feeling is that AFE should be tightly coupled to memory, so that as bandwidths rise they can both improve together. As long as the memory to display interface is fast enough for human use, it should be 'fast enough'. The limitation of that argument is when a vast amount of data is acquired and needs to be processed before display. Process in the instrument and you can't take advantage of the latest computing options for display processing. Process in the display/PC and you have to transfer through the bottleneck.


 
 

Offline tv84

  • Super Contributor
  • ***
  • Posts: 3221
  • Country: pt
with open source hardware and firmware, plus modular capability

Love your modular capability and the implementation. You are one of the 1st to do such a one-man real implementation.

Usually many talk about this but stop short of beginning such a daunting task: they end up not deciding on the processors, the modularity frontiers, they only do SW, others only do HW, etc, etc...

Many other choices could be made but you definitely deserve a congratulation!  :clap: :clap: :clap:

Whatever you decide to do, just keep it open source and you will always be a winner!

RESPECT.
 
The following users thanked this post: tom66, cdev, 2N3055

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
« Reply #10 on: November 16, 2020, 12:48:10 pm »
I like that post processing is done inside the GPU. Having a PCI express interface on the newer RPis would be a benefit. It is also an option to use a SoC chip directly on the board and use a lower cost FPGA (Spartan 6 LXT45 for example) that reads data from the ADC, does some rudimentary buffering and streams it into the PCIexpress bus.
« Last Edit: November 16, 2020, 12:53:46 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline fcb

  • Super Contributor
  • ***
  • Posts: 2117
  • Country: gb
  • Test instrument designer/G1YWC
    • Electron Plus
« Reply #11 on: November 16, 2020, 01:12:59 pm »
Great work so far. Although the cost/performance benefit you've outlined is not sufficient to make it a compelling commercial project, perhaps it could find a niche?

I'd probably have turned the project on its head -> what's the best 'scope I can build with a Pi Compute Module for £XXX?  Also, I wouldn't be afraid of a touchscreen/WIMP interface; if implemented well it can be pretty good - although I still haven't seen one YET that beats the usability of an old HP/Tek.
https://electron.plus Power Analysers, VI Signature Testers, Voltage References, Picoammeters, Curve Tracers.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
« Reply #12 on: November 16, 2020, 01:18:52 pm »
My concept for modularity is to keep it very simple.  The AFE board will essentially be the HMCAD15xx ADC plus the necessary analog front end hardware and the 1GHz clock.
 
Then the ADC interfaces with n LVDS pairs going into the Zynq. If I put the 484 ball Zynq on the next board, then I have the capacity for a large number of LVDS pairs. 

The modules could be double-wide,  i.e. a 4 channel AFE,  or single-wide,  i.e. a 2 channel AFE  and you could then use some arbitrary other module in the second slot.  The bitstream and software would be written to be as flexible as possible, although it is possible that not all modular configurations will be allowable.  (For instance it might not be possible to have two output modules at once;  the limits would need to be defined.)

For instance, you could have a spectrum analyser front end that contains the RF mixers, filters and ADC, and the software on the Zynq just drives the LO/PLL over SPI to sweep, and performs an FFT on the resulting data.  The module is different - but gathering the data over a high speed digital link is a common factor.

The modules would also be able to share clocks or run on independent clock sources.  The main board could provide a 10MHz reference (which could also be externally provided or outputted) and the PLLs on the respective boards would then generate the necessary sampling clock.

The bandwidth of this interface is less critical than it sounds: for an 8Gbit/s ADC (1GSa/s, 8-bit) just 10 LVDS pairs are needed.  A modern FPGA has 20+ on a single bank, and on the Xilinx 7-series parts each pin has an independent ISERDESE2/OSERDESE2, which means you can deserialise and serialise as needed on the fly on each pin.  There are routing and timing considerations, but I've not had an issue with the current block running at 125MHz; I think I might run into issues trying to get it above 200MHz with a standard -3 grade part.
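The lane arithmetic above can be checked quickly, assuming an HMCAD1511-style interface (8 data pairs plus a bit-clock pair and a frame-clock pair; that split is my assumption based on the ADC named earlier, not stated here):

```python
# Assumed HMCAD1511-style LVDS budget: 8 data pairs carry the samples,
# plus one bit-clock and one frame-clock pair = the "10 LVDS pairs"
# figure.  An 8:1 ISERDESE2 turns each serial lane into bytes at the
# parallel clock rate quoted in the post.

adc_bits_per_s = 1e9 * 8            # 1 GSa/s x 8 bits = 8 Gbit/s
data_pairs = 8
per_pair = adc_bits_per_s / data_pairs   # serial rate per data pair
total_pairs = data_pairs + 2             # + bit clock + frame clock
parallel_mhz = per_pair / 8 / 1e6        # 8:1 deserialisation ratio

# 10 pairs total, 1 Gbit/s per data pair, 125 MHz parallel clock --
# matching the 125 MHz block rate mentioned above.
print(total_pairs, per_pair / 1e9, parallel_mhz)
```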

My unfinished modularity proposal is here:
https://docs.google.com/spreadsheets/d/1hpS83vqnude4Z6Bsa2l4NRGaMY8nclvE8eZ_bKncBDo/edit?usp=sharing

So the idea is that most of the modules are dumb but we have a SPI interface if needed for smarter module interfacing,  which allows e.g. an MCU on the module to control attenuation settings.

The MCU could communicate, via a defined standard, what its capabilities are.  If the instrument doesn't have the software it needs, it can pick that up over the internet via Wi-Fi or Ethernet, or from a USB stick.

One other route is to use a 4-lane CSI interface, as the Pi does support that on the CM3/CM4.  This doubles the available transfer bandwidth.  I do need to give PCI-e proper thought though, because it allows bidirectional transfer - the current solution is purely unidirectional.

IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.  In fact, currently the Pi controls the Zynq over SPI,  and that is slow enough to cause issues, so I will be moving away from that in a future version.
 
The following users thanked this post: Simon_RL

Offline jxjbsd

  • Regular Contributor
  • *
  • Posts: 123
  • Country: cn
  • 喜欢电子技术的网络工程师
« Reply #13 on: November 16, 2020, 02:25:31 pm »
 :-+
Very good work. I very much agree with keeping it simple, with only the main functions implemented for now. It would be great if it covered most of the functions of a Tek 465; others, such as advanced triggering and FFT, can be implemented later. Make only one core board, and implement the various control knobs or touchscreens through external boards, which can increase the number of core boards in use. Simple and flexible may be the advantages of open source hardware; programming may be the difficulty of this project.
« Last Edit: November 16, 2020, 02:32:31 pm by jxjbsd »
Analog instruments can tell us what they know, digital instruments can tell us what they guess.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
« Reply #14 on: November 16, 2020, 02:46:27 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
« Reply #15 on: November 16, 2020, 02:48:35 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.

True, but the FPGA would still need to have some kind of management firmware on it for some parts,  for instance setting up DMA transfer sizes and trigger settings.  You could write that all in Verilog, but it becomes a real pain to debug.  The balance of CPU for easy software tasks and HDL for easy hardware tasks makes the most sense, and some of this stuff is low-latency so you ideally want to keep it away from a non-realtime system like Linux.  (The UltraScale SOC has a separate 600MHz dual ARM Cortex-R5 complex for realtime work - which is an interesting architecture.)  But, having the ability for the Pi to write and read directly from memory space on the Zynq side would be really compelling.  I may need to get the PCI-e reference manual and see what the interface and requirements look like there.
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #16 on: November 16, 2020, 02:59:34 pm »
Very impressive work! I really hope you will succeed in your "quest"!
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #17 on: November 16, 2020, 03:14:02 pm »
IMO there is little benefit in using a separate FPGA + SoC because you lose that close coupling that the Zynq has.  The ARM on the Zynq is directly writing registers on the FPGA side to influence acquisition, DMA behaviour etc.  That would have to fit over a SPI or small digital link, which would constrain the system considerably.
That is where PCIexpress comes in. This gives you direct memory access both ways; in fact the FPGA could push the acquired data directly into the GPU memory area using PCIexpress.

True, but the FPGA would still need to have some kind of management firmware on it for some parts,  for instance setting up DMA transfer sizes and trigger settings.  You could write that all in Verilog, but it becomes a real pain to debug.  The balance of CPU for easy software tasks and HDL for easy hardware tasks makes the most sense, and some of this stuff is low-latency so you ideally want to keep it away from a non-realtime system like Linux.  (The UltraScale SOC has a separate 600MHz dual ARM Cortex-R5 complex for realtime work - which is an interesting architecture.)  But, having the ability for the Pi to write and read directly from memory space on the Zynq side would be really compelling.  I may need to get the PCI-e reference manual and see what the interface and requirements look like there.
The beauty of a PCI interface is that it basically does DMA transfers so Linux doesn't need to get in the way at all. The only thing the host CPU needs to do is set up the acquisition parameters and the FPGA can start pushing data into the GPU. Likely the GPU can signal the FPGA directly to steer the rate of the acquisitions. In the end a GPU has a massive amount of processing power compared to an ARM core for as long as you can do parallel tasks. I have made various realtime video processing projects with Linux and since all the data transfer is DMA based the host CPU is loaded by only a few percent. System memory bandwidth is something to be aware of though.
« Last Edit: November 16, 2020, 03:17:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #18 on: November 16, 2020, 06:26:49 pm »
The beauty of a PCI interface is that it basically does DMA transfers so Linux doesn't need to get in the way at all. The only thing the host CPU needs to do is setup the acquisition parameters and the FPGA can start pushing data into the GPU. Likely the GPU can signal the FPGA directly to steer the rate of the acquisitions. In the end a GPU has a massive amount of processing power compared to an ARM core for as long as you can do parallel tasks. I have made various realtime video processing projects with Linux and since all the data transfer is DMA based the host CPU is loaded by only a few percent. System memory bandwidth is something to be aware of though.

It's a fair point. There's still some acquisition control that the FPGA needs to be involved in, for instance sorting out the pre- and post-trigger stuff.

The current architecture roughly works as such:
- Pi configures acquisition mode (ex. 600 pts pre trigger, 600 pts post trigger, 1:1 input divide, 1 channel mode, 8 bits, 600 waves/slice, trigger is this type, delay by X clocks, etc.)
- Zynq acquires these waves into a rolling buffer - the buffer moves through memory space so there is history for any given acquisition (~25 seconds with current memory)
- Pi interrupts before next VSYNC to get packet of waves (which may be less than the 600 waves request)
- Transfer is made by the Zynq over CSI - Zynq corrects trigger positions and prepares DMA scatter-gather list then my CSI peripheral transfers ~2MB+ of data with no CPU intervention

There is close fusion between the Zynq ARM, FPGA fabric, and the Pi - and since the Pi is not hard real time (Zynq ARM is running baremetal) you'd need to be careful there with what latency you introduce into the system.

It would be nice if we could say to the Pi, e.g. find waveforms at this address, and when the Pi snoops in to the PCIe bus, the FPGA fabric intercepts the request and translates each waveform dynamically so we don't have to do the pre-trigger rotation on the Zynq ARM.  Right now, the pre-trigger rotation is done by reading from the middle of the pre-trigger buffer, and then the start, then the post-trigger buffer (though I believe this could be simplified to two reads with some thought.)  Perhaps it's possible by using the SCU on the Zynq - it's got a fairly sophisticated address translation engine.  I'd like to avoid doing a read-rotate-writeback operation, as that triples the memory bandwidth requirements on the Zynq, and already 1GB/s of the memory bandwidth (~60%) is used just writing data from the ADC.  The Zynq ARM has to execute code and read/write data from this same RAM, and although the 512KB L2 cache on the Zynq is generous,  it's not perfect.
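The two-read pre-trigger rotation described above can be sketched in C. This is a hypothetical illustration of the read ordering only (function name, buffer layout and parameters are my own assumptions, not the project's actual code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Linearise a circular pre-trigger buffer followed by a contiguous
 * post-trigger buffer into one output waveform.  'wr' is the ring
 * write index at the trigger instant, so the oldest sample sits at
 * 'wr' and the newest at 'wr - 1'.  Two copies cover the ring, one
 * covers the post-trigger block -- the "two reads" mentioned above. */
static void linearise_capture(const uint8_t *pre_ring, size_t pre_len, size_t wr,
                              const uint8_t *post, size_t post_len,
                              uint8_t *out)
{
    memcpy(out, pre_ring + wr, pre_len - wr);   /* oldest .. end of ring */
    memcpy(out + (pre_len - wr), pre_ring, wr); /* start .. newest       */
    memcpy(out + pre_len, post, post_len);      /* post-trigger block    */
}
```

A fabric-side address translator would effectively perform the same index arithmetic on the fly instead of copying, which is what avoids the read-rotate-writeback bandwidth cost.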
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #19 on: November 16, 2020, 06:38:53 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but while I've owned several tablets I've yet to find a really good use case for one other than looking at datasheets. Can't stand them on most stuff and it annoys me whenever someone points to something and makes a finger smudge on my monitor. I could potentially make an exception in the case of a portable scope to have in addition to my bench scope although I think in the case of this project my interest is mostly academic, it's a fascinating project and an incredible achievement but not something I'm likely to spend money on. Roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support, or a used TDS3000 that can be upgraded to 500MHz. That said, I've heard that Digikey pricing on FPGAs is hugely inflated so you may be able to drop the cost down substantially.
 
The following users thanked this post: nuno

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #20 on: November 16, 2020, 06:41:11 pm »
Another challenge I am working on is how to do the rendering all on the FPGA.

This would free up the CPUs of the Pi and the GPU could be used for e.g. FFTs and 2D acceleration tasks. 

The real challenge is - waveforms are stored linearly, but every X pixel on the display needs a different Y coordinate for a given wave value.  So, it is not conducive to bulk write operations at all (e.g. AXI Burst).  The 'trivial' improvement is to rotate the buffer 90 degrees (which is what my SW renderer does) so that your accesses tend to hit the same row at least and will be more likely to be sitting in the cache.  But this is still a non-ideal solution. So the problem has to be broken down into tiles or slices.  Zynq should read, say, 128 waveform values (fits nicely into a burst), and repeat for every waveform (with appropriate translations provided),  write all the pixel values for that into BRAM (~12 bits x 128 x 1024,  for a 1024 height canvas with 12 bits intensity grading = ~1.5Mbits or about half of all available BlockRAMs),  and write that back into DDR in order to get the most performance with burst operations used as much as possible.
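The tile accumulation step can be modelled in C. This is a minimal sketch under the assumptions stated above (128-column tiles, 1024-pixel canvas, saturating 12-bit intensity counters); the names are illustrative, not the project's actual code:

```c
#include <stdint.h>

#define TILE_W   128   /* x-columns per tile (one burst-sized read)  */
#define CANVAS_H 1024  /* display canvas height in pixels            */

/* Accumulate one waveform's samples into a column-major intensity
 * tile.  Each of the TILE_W 8-bit samples increments a saturating
 * 12-bit-style counter at its (x, y) cell.  In the FPGA this tile
 * would live in BRAM and be flushed to DDR with burst writes once
 * all waveforms for the slice have been accumulated. */
static void accumulate_wave(uint16_t tile[TILE_W][CANVAS_H],
                            const uint8_t *samples /* TILE_W samples */)
{
    for (int x = 0; x < TILE_W; x++) {
        int y = samples[x] * (CANVAS_H / 256); /* 8-bit sample -> canvas row */
        if (tile[x][y] < 0xFFF)
            tile[x][y]++;                      /* saturate at 12 bits */
    }
}
```

The point of the structure is that the per-waveform reads and the final tile writeback are both long linear bursts; only the small BRAM-resident tile is accessed randomly.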

It implies a fairly complex core and that's without considering multiple channels (which introduce even more complexity, because do you handle each as a separate buffer, or accumulate each with a 'key value' or ...?)  The complexity here is that the ADC multiplexes samples,  so in 1ch mode the samples are  A0 .. A7,  but in 2ch mode they are  A0 B0 A1 B1 .. A3 B3 which means you need to think carefully about how you read and write data.  You can try to unpack the data with small FIFOs on the acquisition side, but then you need to reassemble the data when you stream it out. 
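The 2-channel sample layout can be illustrated with a trivial C de-interleaver. This only sketches the data ordering (the hard part in the fabric is doing this with FIFOs at line rate); the function name is my own:

```c
#include <stddef.h>
#include <stdint.h>

/* De-interleave a 2-channel ADC stream (A0 B0 A1 B1 ...) into
 * separate per-channel buffers.  'n' is the total interleaved sample
 * count and must be even.  The 1-channel case (A0 .. A7 ...) is a
 * straight copy, so only the 2-channel layout is shown. */
static void deinterleave2(const uint8_t *in, size_t n,
                          uint8_t *cha, uint8_t *chb)
{
    for (size_t i = 0; i < n / 2; i++) {
        cha[i] = in[2 * i];
        chb[i] = in[2 * i + 1];
    }
}
```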

This is essentially solving the rotated polygon problem that GPU manufacturers solved 20 years ago, but solving it in a way that can fit in a relatively inexpensive FPGA and doing it at 100,000 waves/sec (60 Mpoints/sec plotted).  And then doing it with vectors or dots between points - ArmWave is just dots for now though there is a prototype slower vector plotter I have written somewhere.

If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

Current source code for ArmWave, the rendering engine presently used for anyone curious:
https://github.com/tom66/armwave/blob/master/armwave.c

This is about as fast as you will get an ARM rendering engine while using just one core and it has been profiled to death and back again.  4 cores would make it faster although some of the limitation does come from memory bus performance.  It's at about 20 cycles per pixel plotted right now.
« Last Edit: November 16, 2020, 06:45:47 pm by tom66 »
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #21 on: November 16, 2020, 06:56:14 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but ... roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support

Trust me: The knobs are OK for things like adjusting the timebase but a twisty, pushable, multifunction knob is not better for navigating menus, choosing options, etc.

eg. Look at the process of enabling a bunch of on-screen measurements on a Siglent. Does that seem like the best way?

https://youtu.be/gUz3KYp_5Tc?t=2925
« Last Edit: November 16, 2020, 07:11:38 pm by Fungus »
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #22 on: November 16, 2020, 07:15:49 pm »

Look at the process of enabling a bunch of on-screen measurement on a Siglent. Does that seem like the best way?
Best is accurate:
https://www.eevblog.com/forum/testgear/testing-dso-auto-measurements-accuracy-across-timebases/
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline sb42

  • Contributor
  • Posts: 42
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #23 on: November 16, 2020, 07:17:34 pm »
I loathe touchscreens, I tolerate one on my phone because of obvious constraints with the form factor but ... roughly the same price will get me a 4 channel Siglent in a nice molded housing with real buttons and knobs and support

Trust me: The knobs are OK for things like adjusting the timebase but a twisty, pushable, multifunction knob is not better for navigating menus, choosing options, etc.

eg. Look at the process of enabling a bunch of on-screen measurement on a Siglent. Does that seem like the best way?

https://youtu.be/gUz3KYp_5Tc?t=2925

Also, with a USB port it might be possible to design something around a generic USB input interface like this one:
http://www.leobodnar.com/shop/index.php?main_page=product_info&cPath=94&products_id=300
 
The following users thanked this post: tom66

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #24 on: November 16, 2020, 07:35:36 pm »
Another challenge I am working on is how to do the rendering all on the FPGA.

This would free up the CPUs of the Pi and the GPU could be used for e.g. FFTs and 2D acceleration tasks. 
I'm not saying it can't be done but you also need to address (literally) shifting the dots so they match the trigger point.

IMHO you are at a crossroads where you either choose to implement a high update rate but poor analysis features with few people being able to work on it (coding HDL), versus a lower update rate and lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #25 on: November 16, 2020, 08:13:24 pm »
If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the screen display (600 pixels).

 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #26 on: November 16, 2020, 08:13:55 pm »
IMHO you are at a cross road where you either choose for implementing a high update rate but poor analysis features and few people being able to work on it (coding HDL) versus a lower update rate and having lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is that although the render engine might be complex HDL, you'll still be able to read linear wave data in any case - I'd like for instance this to interface well with Numpy arrays and Python slices as well as a fast C API for reading the data. 

But it would be good to ask.  Do people really, genuinely benefit from 100kwaves/sec?  I have regarded intensity grading as a "must have" so the product absolutely will have that, but is 30kwaves/sec "good enough" for almost all uses, such that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say the intensity grading function is that much more useful than on my Rigol DS1074Z despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory so you can go back up to ~90 seconds in time to see what the scope was showing at that moment and I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
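The read-pointer-plus-rotation trick described above can be modelled in C as splicing two 64-bit stream words at an arbitrary byte offset. This is a sketch only, assuming samples are packed little-endian with the earliest sample in the least significant byte (the project's actual packing may differ):

```c
#include <stdint.h>

/* Produce a 64-bit word of samples starting at byte offset 0..7
 * within a 64-bit-wide sample stream, e.g. to align the first
 * displayed sample to the trigger position.  'lo' is the current
 * stream word, 'hi' the next; the result splices the two, which is
 * the "rotate two words to get the last byte offset" step. */
static uint64_t shift_bytes(uint64_t lo, uint64_t hi, unsigned byte_off)
{
    if (byte_off == 0)
        return lo;  /* already word-aligned */
    return (lo >> (8 * byte_off)) | (hi << (8 * (8 - byte_off)));
}
```

In the fabric this is just a mux tree fed by two FIFO words, hence "practically perfect FPGA territory".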
« Last Edit: November 16, 2020, 08:20:10 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #27 on: November 16, 2020, 08:15:54 pm »
If you look at Rigol DS1000Z then you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device.  It is almost certain that the DDR memory is used just for waveform acquisition and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port like I am using.)   Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me.  Rigol likely chose an expensive SRAM because it allows for true random access with minimal penalty in jumping to random addresses.

I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the screen display (600 pixels).

Yes it seems likely to me that it is transmitted as an embedded line in whatever is transmitting the video data.  The window is about 600 pixels across so it makes sense that they would be using e.g. the top eight lines for this data, two per channel.  It is also clear that Rigol use a 32-bit data bus instead of my 64-bit data bus as the holdoff/delay counter resolution is half what I support. (My holdoff setting has 8ns resolution due to 125MHz clock; theirs is 4ns/250MHz.)  They use a Spartan-6 with fewer LUTs than my 7014S so it's perhaps a trade off there.

I am almost certain (though have not physically confirmed it) that the Rigol is doing all the render work on the FPGA.  Perhaps they are using the i.MX CPU for the Anti-Alias mode which gets very slow on longer timebases as it appears to be rendering more (all?) of the samples.

The Rigol also does not decimate the data when doing the waveform rendering, so you can get aliasing in some cases although they are fairly infrequent corner cases.
« Last Edit: November 16, 2020, 08:32:58 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #28 on: November 16, 2020, 08:34:06 pm »
IMHO you are at a cross road where you either choose for implementing a high update rate but poor analysis features and few people being able to work on it (coding HDL) versus a lower update rate and having lots of analysis features with many people being able to work on it (using OpenCL or even Python extensions). Another advantage of a software / GPU architecture is that you can update to higher performance hardware as well by simply taking the software to a different platform. Think about the NVidia Jetson / Xavier modules for example. A Jetson TX2 module with 128Gflops of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the Lecroy software works; look at how Lecroy's Wavepro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is though the render engine might be complex HDL you'll still be able to read linear wave data in any instance - I'd like for instance this to interface well with Numpy arrays and Python slices as well as a fast C API for reading the data. 

But it would be good to ask.  Do people really, genuinely benefit from 100kwaves/sec?  I have regarded intensity grading as a "must have" so the product absolutely will have that, but is 30kwaves/sec "good enough" for almost all uses, that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say the intensity grading function is that much more useful that my Rigol DS1074Z despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory so you can go back up to ~90 seconds in time to see what the scope was showing at that moment and I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Personally I don't have a real need for high waveform update rates. Deep memory is useful (either as a continuous record or as a segmented / history buffer; segmented and history are very much the same). But with deep memory also comes the requirement to be able to process it fast.

Nearly 2 decades ago I embarked on a similar project where I tried to cram all the realtime & post processing into the FPGAs. In the end you only need to fill the width of a screen which is practically 2000 pixels. This greatly reduces the bandwidth towards the display section but needs a huge effort on the FPGA side. The design I made could go through 1Gpts of 10bit data within 1 second and (potentially) produce multiple views of the data at the same time. The rise of cheap Asian oscilloscopes made me stop the project. If I were to take on such a project today I'd go the GPU route and do as little as possible inside an FPGA. I think creating trigger engines for protocols and special signal shapes will be challenging enough already.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #29 on: November 16, 2020, 09:17:40 pm »
The bandwidth of this interface is less critical than it sounds:  for an 8Gbit/s ADC stream (1GSa/s, 8-bit), just 10 LVDS pairs are needed.  A modern FPGA has 20+ on a single bank and on the Xilinx 7 series parts, each has an independent ISERDESE2/OSERDESE2 which means you can deserialise and serialise as needed on the fly on each pin.   There are routing and timing considerations but I've not had an issue with the current block running at 125MHz,  I think I might run into issues trying to get it above 200MHz with a standard -3 grade part.
As you go into the gigasample range, ADCs quickly become JESD204B-only, which is itself a separate big can of worms. And many of them will happily send 12 Gbps per lane and even more; for that you will need something more recent than the 7 series (or a Virtex-7 - I think they can go that high, though I have no personal experience).

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #30 on: November 16, 2020, 09:31:10 pm »
There's JESD204B support in the Zynq 7000 series, though only via the gigabit transceivers which are on the much more expensive parts.

I've little doubt that I'll cap the maximum performance around the 2.5GSa/s range - at that point memory bandwidth becomes a serious pain.

I've a coy plan for how to get up to 2.5GSa/s using regular ADC chips - it'll require an FPGA as 'interface glue' to achieve, but it could be a relatively small FPGA.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #31 on: November 16, 2020, 10:16:31 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news if you're going down an all digital trigger route (probably a good idea) then the vast majority of "trigger" types are simply combinations of 2 thresholds and a one shot timer, which are easy enough. That can then be passed off to slower state machines for protocol/serial triggers. But without going down dynamic reconfiguration or using multiple FPGA images supporting a variety of serial trigger types becomes an interesting problem all of its own.
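The two-thresholds-plus-one-shot-timer observation can be sketched as a per-sample state machine in C. This is a software model only (field names and the hysteresis scheme are my own assumptions), but it covers both edge and pulse-width triggering:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal digital trigger: two comparator thresholds plus a one-shot
 * sample counter.  Called once per sample; returns true on the sample
 * where the trigger fires.  A rising-edge trigger is min_width == 0;
 * a "pulse wider than N samples" trigger is min_width == N. */
typedef struct {
    uint8_t  hi, lo;      /* hysteresis thresholds      */
    uint32_t min_width;   /* one-shot timer, in samples */
    bool     armed;       /* been below 'lo' since last fire */
    uint32_t run;         /* consecutive samples above 'hi'  */
} trig_t;

static bool trig_step(trig_t *t, uint8_t sample)
{
    if (sample < t->lo) {            /* re-arm below the lower threshold */
        t->armed = true;
        t->run = 0;
    } else if (sample > t->hi && t->armed) {
        if (++t->run > t->min_width) {  /* timer expired: fire once */
            t->armed = false;
            return true;
        }
    }
    return false;
}
```

In HDL this is a couple of comparators, a counter and two flops per trigger source, which is why the basic trigger types are cheap and the serial/protocol ones are where the complexity lives.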
 

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 3180
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #32 on: November 16, 2020, 11:28:06 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me too" scopes. I'm sure there is a downside to this though - price, sample rate limitations etc. Also, if there is to be a hi-res option, maybe have a user adjustable setting for how many averaged samples per final sample or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
« Last Edit: November 16, 2020, 11:30:41 pm by Circlotron »
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #33 on: November 16, 2020, 11:47:20 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me to" scopes. I'm sure there is a downside to this though - price, sample rate limitations etc. Also, if there is to be a hi-res option, maybe have a user adjustable setting for how many averaged samples per final sample or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
Part of the fun of open source is you can ignore the entrenched ways of doing things and offer choices to the user (possibly ignoring IP protection along the way). A programmable FIR + CIC + IIR acquisition filter could implement a wide range of useful processing.
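As an illustration of the CIC piece of such a filter chain: a first-order CIC decimator reduces to a boxcar average, the basic building block of a hi-res / sample-averaging mode. A minimal C sketch (names and the 8-bit-in/32-bit-out widths are my assumptions):

```c
#include <stddef.h>
#include <stdint.h>

/* First-order CIC decimator: integrator -> decimate by R -> comb.
 * For order 1 this is exactly an R-sample boxcar sum.  DC gain is R,
 * so a real implementation rescales or grows the sample width by
 * log2(R) bits per order to avoid overflow. */
static size_t cic_decimate(const uint8_t *in, size_t n, unsigned R, int32_t *out)
{
    int64_t integ = 0, prev = 0;
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {
        integ += in[i];                          /* integrator at input rate */
        if ((i + 1) % R == 0) {
            out[m++] = (int32_t)(integ - prev);  /* comb at output rate */
            prev = integ;
        }
    }
    return m;  /* number of output samples produced */
}
```

Making R (and the number of cascaded stages) register-programmable is what would give the user-adjustable averaging suggested above.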
 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 73
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #34 on: November 17, 2020, 12:54:57 am »
A suggestion: replace the barrel connector (for power I assume) and the USB type A receptacle with 2 USB-C female receptacles. Both USB-C connectors should support PD (power delivery), allowing up to 20 Volts @ 5 Amps to be sunk through either connector. This assumes that the power draw of your project is <= 100 Watts. If the power draw is <= 60 Watts then any compliant USB-C cable could be used to supply power.

If the power draw is <= 45 Watts then a product like the Morphie USB-C 3XL battery could be used to make the 'scope portable. Dual role power (DRP) would also be desirable, so if a USB key is connected to either USB-C port it could source 5 Volts at, say, around 1 Amp.

A USB-C (M) to USB-A (F) adapter or short cable could be supplied with the 'scope for backward compatibility. I guess most folks interested in buying this 'scope will own one or more USB-C power adapters, so it frees the OP from needing to provide one (so the price should go down). Many significant semiconductor manufacturers have USB-C offerings (ICs) with evaluation boards available (but not many eval boards do DRP).
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #35 on: November 17, 2020, 03:18:37 am »
Personally I don't have a real need for high waveform update rates.

I don't recall any discussions here about waveforms/sec, waveform record/playback, etc.

I remember a lot of heated discussions about things like FFT and serial decoders.

 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #36 on: November 17, 2020, 03:27:56 am »
A suggestion: replace the barrel connector (for power I assume) and the USB type A receptacle with 2 USB-C female receptacles. Both USB-C connectors should support PD (power delivery) allowing up to 20 Volts @ 5 Amps to be sunk through either connector. If the power draw is <= 45 Watts then a product like the Morphie USB-C 3XL battery could be used to make the 'scope portable.

(Seen from another perspective)

You mentioned adding a battery to this but that means:
a) Extra design work
b) A lot of charging circuitry on the PCB
c) Adding a battery compartment/connector
d) A lot of safety concerns
e) Higher price
f) Bigger size/extra weight

Making it work with suitably rated power banks makes a lot more sense.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #37 on: November 17, 2020, 06:42:47 am »
The circuitry required to manage a battery pack would be absolutely trivial compared to what has already been achieved here. This is a well-developed area; every laptop made in at least the last 15 years has mastered the handling of a Li-Ion battery pack.

For what it's worth, I have not been impressed with USB-C, my work laptop has it and I have to use dongles for everything. The cables are more fragile and more expensive than USB-3, the standard is still a mess after all this time as IMO it tries to be everything to everybody and the result is just too complex. I have never been a fan of using USB for power delivery, a dedicated DC power jack is much nicer.
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #38 on: November 17, 2020, 08:19:06 am »
IMHO you are at a crossroads where you must choose between implementing a high update rate with poor analysis features and few people able to work on it (coding HDL), versus a lower update rate with lots of analysis features and many people able to work on it (using OpenCL or even Python extensions). Another advantage of a software/GPU architecture is that you can move to higher performance hardware simply by taking the software to a different platform. Think of the NVidia Jetson / Xavier modules, for example: a Jetson TX2 module with 128 GFLOPS of GPU performance starts at $400. More GPU power automatically translates to a higher update rate. This is also how the LeCroy software works; look at how LeCroy's WavePro oscilloscopes work and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both;  I had always intended for the waveform data to be read out by the main application software in a different pipeline to that of the render pipeline.  In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is that though the render engine might be complex HDL, you'll still be able to read linear wave data in any case - I'd like this to interface well with NumPy arrays and Python slices, for instance, as well as a fast C API for reading the data.

But it would be good to ask: do people really, genuinely benefit from 100 kwaves/sec?  I have regarded intensity grading as a "must have", so the product absolutely will have that, but is 30 kwaves/sec "good enough" for almost all uses, such that potential users would not notice the difference?  I have access to a Keysight DSOX2012A right now, and I wouldn't say its intensity grading function is that much more useful than my Rigol DS1074Z's, despite the Keysight scope having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function would (in my mind) be the rolling history function combined with >900Mpts of sample memory, so you can go back up to ~90 seconds in time to see what the scope was showing at that moment. I find the Rigol's ~24Mpt memory far more useful than the ~100kpt memory of the Keysight.

Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.

To get discussion back to the track, let me chime in on some questions here.

For the purposes of a nice intensity/colour graded waveform display, a very high display rate is a game of diminishing returns. Basically, look at, say, 10 MHz AM modulated with 100 Hz: you will need a few thousand wfms/s to make it smooth, so the display has no moiré effect. And if you are watching something interactively, that is already faster than the human eye can follow and appears fully real-time to us.

I consider retrigger time important, but I could live with a 20-30 us retrigger time (30-50 kWfm/s) if sequence mode were much faster, on the level of 1-2 us. In that mode no data processing is performed, so that should be reachable. Picoscopes are like that: they capture the full data in a buffer, but send fast screen updates of decimated data for display, with the full data delayed.

There are many scopes, even cheap ones, that do a great job as an interactive instrument. What would be groundbreaking is an open source analytical scope.
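A quick sanity check of the retrigger numbers above (a sketch; the 1 us capture window is an assumed figure, not from the post):

```python
def waveform_rate(dead_time_s, capture_window_s):
    """Sustained waveforms/sec for a given per-acquisition dead time."""
    return 1.0 / (dead_time_s + capture_window_s)

# 25 us retrigger dead time plus an assumed 1 us capture window
# lands inside the quoted 30-50 kWfm/s band:
rate = waveform_rate(25e-6, 1e-6)
```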
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #39 on: November 17, 2020, 08:22:40 am »
Why would I use USB-C?

- Power adapters are more expensive and less common
- The connector is more fragile and expensive
- I don't need the data connection back (who needs their widget to talk to their power supply?)
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design  (present supported range is 7V - 15V)

The plan for the power supply of the next generation product was to have everything sitting at VBAT (3.4V ~ 4.2V) and all DC-DC converters running off that.  It's within the range that a buck/LDO stage can work to give a 3.2V rail (good enough for 3.3V rated devices) and a boost stage can provide 5V.

Now, I was going to design it so that if you connected a 5V source it could charge the battery, so a simple USB type A to barrel jack cable can be supplied.  That would be inexpensive enough because we still have a buck input stage for single-cell Li-Ion charging (I'm keen to avoid multi-cell designs)  but at a maximum 'safe' limit of 5W from such a source, I doubt the scope could run without slowly discharging its battery.

When charging the battery this device could pull up to 45W (36W charging + 9W application) - that's roughly a 1C charge rate for a 10000mAh cell.
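For reference, the arithmetic behind that 1C figure (the 3.7 V nominal cell voltage is my assumption):

```python
cell_capacity_ah = 10.0   # 10000 mAh single Li-Ion cell
nominal_cell_v = 3.7      # assumed nominal cell voltage
charge_power_w = 36.0     # charging share of the 45 W budget

charge_current_a = charge_power_w / nominal_cell_v  # ~9.7 A
c_rate = charge_current_a / cell_capacity_ah        # ~0.97, i.e. roughly 1C
```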
« Last Edit: November 17, 2020, 08:24:37 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #40 on: November 17, 2020, 10:56:11 am »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems these make life a lot easier.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #41 on: November 17, 2020, 11:21:36 am »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems these make life a lot easier.

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.

At the time I was looking at a relay-based attenuator for the -40dB step and then a gain/attenuator block for +6dB to -38dB (think it was a TI part, I'll dig it out) which would get you from +6dB to -78dB attenuation.  Enough to cope with typical demands of a scope (1mV/div to 10V/div).

I was also looking into how to do 20MHz B/W limit and whether it would be practical to vary the varicap voltage with some PWM channels on an MCU to fine tune bandwidth limits.
« Last Edit: November 17, 2020, 11:23:37 am by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #42 on: November 17, 2020, 11:33:25 am »
The existing AFE is purely AC-coupled; schematic attached.  The ADC needs about 1Vp-p input to reach full-scale code.

Presently the ADC diff pairs go over SATA cables; they are cheap and (usually) shielded.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 4308
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #43 on: November 17, 2020, 11:39:02 am »
Personally I don't have a real need for high waveform update rates. Deep memory is useful

Ditto. Normally on our benches we already have a high waveform rate scope.
I believe many of us have (or would buy) a USB/PC scope to cover applications where deep memory is needed.

For a project like this I would put all my poker chips on getting as much memory as possible. All in.
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 
The following users thanked this post: 2N3055

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 3180
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #44 on: November 17, 2020, 12:04:20 pm »
Over the past year and a half I have been working on a little hobby project to develop a decent high performance oscilloscope, with the intention for this to be an open source project.  By 'decent' I class this as something that could compete with the likes of the lower-end digital phosphor/intensity graded scopes e.g. Rigol DS1000Z,  Siglent SDS1104X-E,  Keysight DSOX1000, and so on. <snip>  I'll welcome any suggestions.
Sounds reminiscent of a newsgroup posting by a certain fellow from Finland some years ago... Let's hope it becomes as big.  :-+
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #45 on: November 17, 2020, 12:12:48 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news if you're going down an all digital trigger route (probably a good idea) then the vast majority of "trigger" types are simply combinations of 2 thresholds and a one shot timer, which are easy enough. That can then be passed off to slower state machines for protocol/serial triggers. But without going down dynamic reconfiguration or using multiple FPGA images supporting a variety of serial trigger types becomes an interesting problem all of its own.

As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.
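As an illustration of the idea (a simplified linear-interpolation stand-in, not the calibrated device-specific LUT described above - the reciprocal table shows how the divide could become a single 8-bit lookup):

```python
import numpy as np

# 256-entry reciprocal table: one lookup replaces a divide in hardware.
# Index = 8-bit difference between the two samples straddling the threshold.
RECIP = np.zeros(256)
RECIP[1:] = 1.0 / np.arange(1, 256)

def trigger_fraction(s0, s1, threshold):
    """Sub-sample position of a rising threshold crossing between
    adjacent samples s0 < threshold <= s1, as a fraction of a sample
    period, via linear interpolation."""
    return (threshold - s0) * RECIP[s1 - s0]

frac = trigger_fraction(120, 136, 128)  # crossing halfway between samples
```

A real implementation would burn a per-device table (derived from the measured front-end response) into the calibration EEPROM rather than assume a straight line between samples.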

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstructions on trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly so at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction on the raw samples on the digital side.  While there are DSP blocks in the Zynq, they are limited to an Fmax of around 300MHz, which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps, which isn't hugely useful except for a very gentle rolloff.

I think you could do more if filters are run on post-processed, triggered data.   Total numeric 'capacity' is approx 300MHz * 210 DSPs = 63 GMAC/s.    But at that point it comes down to how fast you can get data through your DSP blocks, and they are spread across the fabric, which requires very careful design when crossing columns as that's where the fabric routing resource is more constrained.  I'd also be curious what the power consumption of the Zynq looks like when 63 GMAC/s of number crunching is being done - it can't be low.  I hate fans with a passion.  This scope will be completely fanless.  It will heatsink everything into the extruded aluminum case.
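The capacity estimate above, written out (all figures are the ones quoted in the post):

```python
dsp_blocks = 210          # DSP slices available in the target Zynq part
dsp_fmax_hz = 300e6       # ~300 MHz Fmax per DSP block
sample_rate_hz = 1e9      # 1 GSa/s acquisition rate

total_mac_per_s = dsp_blocks * dsp_fmax_hz            # 63 GMAC/s aggregate
taps_at_full_rate = total_mac_per_s / sample_rate_hz  # ~63-tap FIR budget
```

This is why a full-rate FIR tops out around 60 taps: every tap costs one MAC per sample, and the sample rate eats the entire DSP budget.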

Regarding digital (serial) triggers, my thinking was along the lines of a small configurable FSM that can use the digital comparator outputs from any channel.  The FSM would have a number of programmable states and generate a trigger pulse when it reaches the correct end state. This is itself a big project; it would need to be designed, simulated and tested, which is why I have stuck with a fairly simple edge trigger (the pulse width, slope, runt and timeout triggers are fairly trivial and the core technically supports them, although they are unimplemented in software for now).  The FSM for complex triggers could have a fairly large 'program', and the program could be computed dynamically - e.g. for an I2C address trigger, it would start with a match for a start condition, then look for the relevant rising edge on each clock and compare SDA at that cycle.  The Python application would be able to customise the sequence of states that must be passed through to generate a trigger, in a -very- basic assembly language.
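A behavioural sketch of such a programmable trigger FSM (hypothetical encoding, purely illustrative: each program step is a predicate over the comparator outputs, and the trigger fires when the final step matches):

```python
def run_trigger_fsm(program, samples):
    """Advance one state per matched predicate; return True when the
    final state is reached (trigger fires). Each predicate sees the
    current and previous (sda, scl) comparator outputs."""
    state = 0
    prev_sda, prev_scl = 1, 1  # idle bus levels
    for sda, scl in samples:
        if program[state](sda, scl, prev_sda, prev_scl):
            state += 1
            if state == len(program):
                return True
        prev_sda, prev_scl = sda, scl
    return False

# I2C start condition: SDA falls while SCL stays high.
start = lambda sda, scl, psda, pscl: psda == 1 and sda == 0 and scl == 1
# Expected address bit sampled on a rising SCL edge.
bit = lambda want: (lambda sda, scl, psda, pscl:
                    pscl == 0 and scl == 1 and sda == want)

program = [start, bit(1), bit(0)]  # start, then address bits '1', '0'
fired = run_trigger_fsm(program, [(0, 1), (1, 0), (1, 1), (0, 0), (0, 1)])
```

In hardware the predicates would be the programmable state words and the loop body the per-sample state-transition logic; the Python application would compile a trigger description down to that program.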

Serial decode itself would likely use Sigrok, though its pure-Python implementation may cause performance issues, in which case a compiled RPython variant may be usable instead.    There is some advantage to doing this on the Zynq in spare cycles if using e.g. a 7020, with the FPGA accelerating the level comparison stage so the ARM just needs to shift bits out of a register to decide what to do with each data bit.
« Last Edit: November 17, 2020, 12:17:32 pm by tom66 »
 

Offline capt bullshot

  • Super Contributor
  • ***
  • Posts: 3033
  • Country: de
    • Mostly useless stuff, but nice to have: wunderkis.de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #46 on: November 17, 2020, 02:06:42 pm »
Nothing to say yet, but joining this quite interesting thread by leaving a post.
BTW, to OP: great work.
Safety devices hinder evolution
 
The following users thanked this post: tom66

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #47 on: November 17, 2020, 02:52:47 pm »
Why would I use USB-C?
Because it's super convenient. You have a single power supply that can provide any voltage *you* (as designer) want, as opposed to whatever the power supply happens to provide. It's fairly easy to implement - Cypress has a fully standalone controller chip which handles everything: you use a few resistors to tell it which voltages you need, and it gives you two voltages out. One will be in the range you've set up with the resistors; the other, a "fallback", will be 5V if the power supply can't provide what you want, so you can indicate to the user that they connected the wrong supply. Or you can use an STM32G0 MCU, which has an integrated USB-C PD PHY peripheral. USB-C PD is specifically designed to follow a "waterfall" model: if a supply supports a higher voltage, it must support all standard values of lower voltages. Which is why you can request, say, 9 V at 3 Amps, and any PSU that provides more than 27 W of power is guaranteed to work with your device and provide that 9 V, regardless of its support for higher voltages.
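A sketch of that selection rule (simplified: a real PD source advertises explicit PDOs, but per the waterfall model described above we can assume every standard fixed voltage below the supply's maximum is offered; `request` is a hypothetical helper for illustration):

```python
STANDARD_PD_VOLTAGES = [5, 9, 15, 20]  # standard fixed-supply values

def request(supply_max_v, supply_max_w, want_v, want_a):
    """Return (volts, amps) if a compliant supply must be able to grant
    the sink's request, else None."""
    if want_v not in STANDARD_PD_VOLTAGES or want_v > supply_max_v:
        return None
    if want_v * want_a > supply_max_w:
        return None
    return (want_v, want_a)

# Any supply offering 20 V / 45 W must also serve 9 V @ 3 A (27 W):
granted = request(20, 45, 9, 3)
```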

- Power adapters are more expensive and less common
Really? Everybody's got one by now with any smart phone purchased in the last 2-3 years. They are also used with many laptops - these are even better.
- The connector is more fragile and expensive
No it's not more fragile. And not expensive either if you know where to look. Besides - did I just see someone complaining about $1 part in a $200+ BOM?
- I don't need the data connection back (who needs their widget to talk to their power supply?)
That's fine - you can use power-only connector.
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design  (present supported range is 7V - 15V)
No you don't - see my explanation above.
« Last Edit: November 17, 2020, 03:19:07 pm by asmi »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #48 on: November 17, 2020, 03:31:46 pm »
Still, isn't USB-C adding more complexity to an already complex project? I recall Dave2 having quite a bit of difficulty implementing USB-C power for Dave's new power supply.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: ogden

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #49 on: November 17, 2020, 03:34:47 pm »
asmi, could you link to that Cypress solution?  I will give it a look, but it does (as nctnico says) seem like added complexity for little or no benefit.

In my fairly modern home, with a mix of Android and iOS devices, I have one USB-C cable and zero USB-C power supplies.  My laptop (a few years old, not an ultrabook format) still uses a barrel jack connector; my girlfriend's laptop is the same and only a year old.  I've no doubt that people have Type-C power supplies, but barrel-jack connectors are more common, and assuming this device will ship with a power adapter, it won't be too expensive to source a 36W/48W 12V AC adapter, whereas a USB Type-C adapter will almost certainly cost more.

And there will be that not-insignificant group of people who wonder "why does it not work with -cheap 5W smartphone charger-?"  When you have to qualify it with things like "only use a 45W or higher rated adapter", the search space of usable adapters drops considerably.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #50 on: November 17, 2020, 03:53:08 pm »
asmi, could you link to that Cypress solution?  I will give it a look, but it does (as nctnico says) seem like added complexity for little or no benefit.
https://www.cypress.com/products/ez-pd-barrel-connector-replacement-bcr
I would recommend buying this eval kit: https://www.cypress.com/documentation/development-kitsboards/cy4533-ez-pd-bcr-evaluation-kit It's very cheap ($25), and allows you to evaluate all features of the chip.
But I find it hilarious that you had already declared it complex and of no benefit without even seeing it :palm:

In my fairly modern home with a mix of Android and iOS devices I have one USB-C cable and zero USB-C power supplies.  My laptop (a few years old, not ultrabook format) still uses a barrel jack connector.  Girlfriend's laptop is the same and only 1 year old.  I've no doubt that people have power supplies with Type C,  but barrel-jack connectors are more common and assuming this device will ship with a power adapter, it won't be too expensive to source a 36W/48W 12V AC-adapter whereas a USB Type-C adapter will almost certainly cost more.
Take a look at Amazon - you can buy a 45 W USB-C power supply for like $15-20.
Barrel jacks are good exactly until you connect the wrong one and cause some fireworks.

And there will be that not-insignificant group of people who wonder "why does it not work with -cheap 5W smartphone charger-?"  When you have to qualify it with things like, only use >45W or more rated adapter, then the search-space of usable adapters drops considerably.
I kind of suspect that idiots are not exactly the target audience for DIY oscilloscope project :-DD
But again, nothing stops you from having both options if you really want that stone-age barrel jack. It's trivial to implement.
« Last Edit: November 17, 2020, 04:05:04 pm by asmi »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #51 on: November 17, 2020, 04:24:09 pm »
I didn't regard the Cypress solution as too complex - I hadn't seen it until you literally just linked it!  I just have an aversion to USB for power supplies because it's not ideal in many cases, and for *this product* it is a complex solution with little obvious benefit.  As nctnico says, what does it add?  It needs to add something good to be worth the complexity.

There's a big TVS on the input that clamps at ~17V on the present board.  If you apply reverse polarity or too many volts, it either blows the fuse or crowbars the external supply.  A barrel jack is about the most rugged DC connector you can get for the size, whereas a USB-C port can get contaminated with dust/dirt or have its contact pads damaged - after all, it is a 24-pin connector.  I've had plenty of headaches with USB connectors failing in odd, usually somewhat intermittent ways: both the Lightning connector on my old iPhone and the USB Micro connector on my old Samsung S5 failed intermittently and required replacement.  So personally I see the barrel jack as better for an engineering environment, where you have more dust and contaminants than typical.

And you have a point about the target market being technical, but there will also be non-technical people who might want to use such an instrument, e.g. in education or as hobbyists.  That USB-C supply is twice the retail price of a comparable Stontronics PSU with a barrel jack output, and it's not clear what it offers over the barrel jack for most users.

Don't get me wrong, nothing is set in stone yet,  it might be the best solution for Mk2 of the product.  I will of course listen to feedback in that regard.

 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 73
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #52 on: November 17, 2020, 05:23:23 pm »
Still isn't adding USB-C adding on more complexity to an already complex project? I recall Dave2 having quite a bit of difficulties implementing USB-C power for Dave's new power supply.

There are lots of one-chip solutions. Have a look at tindie.com for examples, with schematics in most cases. If a USB-C power adapter (or battery - the RP-PB201 is cheaper and more powerful than the Mophie 3XL that I mentioned previously) doesn't have enough power for the 'scope (you always get at least 5 Volts (Vsafe) at 1.5 Amps), then flash a red LED.

Dave seemed to be intimidated by the 659-page USB PD spec (plus the 373-page USB Type-C spec). Dave "spat the dummy" for theatrical effect; either that, or he should get out into the real world more often! For example: write a device driver for product A to talk to product B via transport C (e.g. USB, Ethernet, BT) using OS/environment D. That may involve thousands of pages across multiple specs. You don't read them like a novel; you use them like a dictionary. And when product A fails to talk to product B in some situation, you contact support for product A (say) and point out that it doesn't comply with the spec they claim to implement, with a reference to chapter and verse (of the relevant spec).

I proposed two USB-C ports to replace the barrel connector _and_ the USB Type A receptacle. So either one could be power in, while the other functionally replaces the USB Type A host. In that latter role USB-C is more flexible, as it can play either the role of (data) host or device. So your PC connection could be via USB-C where the PC is the host and the 'scope is the device; OTOH you could connect a USB memory key and the 'scope would play the host role (and source a bit of power).

Whoever suggested connecting a USB-C power adapter to a USB dongle might find that hard to do if the power adapter has a captive cable. [They would need a USB-C F-F adapter.] If the power adapter doesn't have a captive cable (just a USB-C female receptacle) then the connection can be made, but nothing bad would happen (i.e. no magic smoke): the dongle would be powered at 5 Volts but would find no USB host at the other end of the cable to talk to. Maybe the dongle would flash an LED suggesting something was wrong. When you use symmetrical cables, many more stupid combinations are possible (so devices and their users need to be a bit smarter), but you need far fewer cable variants. That is a big win for not much pain.
 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 73
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #53 on: November 17, 2020, 06:23:58 pm »
There's a big TVS on the input that clamps at ~17V on the present board.  If you put reverse polarity or too many volts in it either blows the fuse or crowbars the external supply.  A barrel jack is about the most rugged DC connector you can get for the size whereas USB-C port could get contaminated with dust/dirt or have connection pads damaged - after all it is a 20 pin connector.  I've had plenty of headaches with USB connectors before failing in odd, usually somewhat intermittent ways: both the Lightning connector on my old iPhone and the USB Micro connector on my old Samsung S5 failed in intermittent fashion and required replacement.  So personally I see the barrel jack as better from an engineering environment perspective where you have more dust and contaminants than typical. 

And you have a point about the target-market being technical but then you will also have non-technical people that might want to use such an instrument e.g. education or hobbyists.  That USB-C supply is twice the retail price of a comparable Stontronics PSU with a barrel jack output, and it's not clear what it offers over the barrel jack for most users.

I'm proposing an infinitely cheaper PSU supplied with your 'scope :-) That is, no PSU at all. Sounds like you need 15 Volts in, and while that is a common USB-C PSU voltage, the el cheapo ones only supply 5 Volts, and sometimes 9 Volts as well. If you need 15 Watts or less then you could boost from an RPi 4 PSU (and they are around $US10). All USB-C PSU schematics that I have seen (that can supply > 5 Volts) have a pass MOSFET that gets switched off in a fault condition. For power-only USB-C the 24/22-pin connector can come down to as few as 6 active pins (see https://www.cuidevices.com/blog/an-introduction-to-power-only-usb-type-c-connectors ).
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #54 on: November 17, 2020, 07:11:33 pm »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog devices DSO fronted parts it seems that these make life a lot easier.

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.
I have attached a design I created based on earlier circuits. IIRC it is intended to offer a 100MHz bandwidth and should survive being connected to mains (note the date!).
Left to right, top to bottom:
- Input section with attenuators. Not sure whether the capacitance towards the probe is constant.
- Frequency compensation using varicaps. This works but requires an (digitally) adjustable voltage of up to 50V and I'm not sure how well a calibration holds over time. Using trim capacitors might be a better idea for a first version.
- over voltage protection
- high impedance buffer
- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.

Nowadays I'd skip the external gain control and use the internal gain control of the HMCAD1511/20 devices. It could be a nice Christmas project to see how it behaves.

There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, JohnG, 2N3055

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #55 on: November 17, 2020, 07:55:06 pm »

- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.


What's required to preserve waveform fidelity is the flattest possible group delay: a Bessel-Thomson filter.  I've spent a lot of time studying the subject.  It's not well treated in the literature, but it is well documented in scopes.  Of course, all high-order analog filters are problematic to produce, though with an FPGA one need only get close and the FPGA can trim the last bit.

As the order goes up, the Bessel-Thomson passband approaches a Gaussian response, so the impulse response of a 10th-order Bessel-Thomson is approximately a time-delayed Gaussian spike.
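The flat-delay property is easy to check numerically - a sketch using SciPy's analog Bessel-Thomson design (norm='delay' normalises the filter so the nominal group delay is 1 second):

```python
import numpy as np
from scipy.signal import bessel, freqs

# 5th-order analog Bessel-Thomson lowpass, unit group delay.
b, a = bessel(5, 1.0, analog=True, norm='delay')

w = np.linspace(0.01, 1.0, 500)               # frequencies up to cutoff
_, h = freqs(b, a, worN=w)
gd = -np.gradient(np.unwrap(np.angle(h)), w)  # group delay in seconds

ripple = gd.max() - gd.min()                  # delay flatness in-band
```

For the 5th-order design the delay stays within a few percent of 1 s across the band; higher orders flatten it further and push the impulse response toward the delayed Gaussian described above.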

Tom is the reason I dropped work on hacking the Instek GDS-2000E after I blew the scope. It no longer made sense to replace it.

Have Fun!
Reg
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #56 on: November 17, 2020, 09:55:24 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstructions on trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've still yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly so at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample point before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase shifted somewhere or the trigger will jitter 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.
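To make the two-sample dependence concrete, here is the crudest possible estimate - straight-line interpolation between the samples straddling the threshold (function name hypothetical; a real scope would use band-limited reconstruction matched to the front end):

```python
def fractional_trigger(s0: float, s1: float, level: float) -> float:
    """Fractional position (0..1) of the level crossing between two
    adjacent samples s0 and s1, by straight-line interpolation only.
    The answer depends on BOTH samples, not just which side they're on."""
    if s0 == s1 or not (min(s0, s1) <= level <= max(s0, s1)):
        raise ValueError("samples do not straddle the trigger level")
    return (level - s0) / (s1 - s0)

# Same threshold, same polarity - different sub-sample offsets:
print(fractional_trigger(10, 90, 50))   # 0.5
print(fractional_trigger(40, 90, 50))   # 0.2
```

Even in this toy form the offset is a function of two sample values, not one - which is why a single 8-bit LUT indexed by one difference can only be an approximation.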

Similarly, if you think a 60-tap filter isn't very useful, recall that LeCroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict the thinking to DSP blocks as 18x18 multipliers, or to them being the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (LeCroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).
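To put the ERES figure in context, a toy sketch (a plain boxcar average here, not LeCroy's actual tap set) showing how an N-tap averaging FIR trades bandwidth for roughly 0.5*log2(N) bits of vertical resolution on white noise:

```python
import random

def boxcar(samples, n):
    """Moving-average FIR: each output is the mean of n adjacent inputs."""
    return [sum(samples[i:i+n])/n for i in range(len(samples) - n + 1)]

def std(xs):
    m = sum(xs)/len(xs)
    return (sum((x - m)**2 for x in xs)/len(xs))**0.5

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(40_000)]   # white noise

# Averaging 16 samples cuts white-noise std by ~sqrt(16) = 4x,
# i.e. about 0.5*log2(16) = 2 extra bits of effective resolution.
print(std(noise))              # ~1.0
print(std(boxcar(noise, 16)))  # ~0.25
```

A handful of taps goes a long way when the goal is resolution enhancement rather than a brick-wall response.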

If memory is your cheap resource then some of the conventional assumptions are thrown out.
 

Offline nfmax

  • Super Contributor
  • ***
  • Posts: 1560
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #57 on: November 17, 2020, 10:02:07 pm »

- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.


What's required  to preserve waveform fidelity is the flattest phase delay, a Bessel-Thompson filter.  I've spent a lot of time studying the subject.  It's not well treated in the literature, but well documented in scopes.  Of course, all high order analog filters are problematic to produce.  Though with an FPGA one need only get close and the FPGA can trim the last bit.

As the order goes up the Bessel-Thompson passband approaches the Gaussian passband.  So the impulse response of a 10th order Bessel-Thompson is approximately a time delayed Gaussian spike.

Tom is the reason I dropped work on hacking the Instek GDS-2000E after I blew the scope. It no longer made sense to replace it.

Have Fun!
Reg

There is a class of filters with a linear-phase (Bessel-like) passband and an equiripple stopband, originally described by Feistel & Unbehauen [1], for which Williams [2] gives some limited design tables; I have used these with success as anti-aliasing filters where waveform fidelity is important. There is a conference paper by Huard et al. [3] - which I don't have a copy of, unfortunately - that describes more recent progress in similar filter designs. Huard worked at Tektronix, which may be a clue about the applications being addressed!

[1] Feistel, Karl Heinz, and Rolf Unbehauen. Tiefpässe Mit Tschebyscheff-Charakter Der Betriebsdämpfung Im Sperrbereich Und Maximal Geebneter Laufzeit. Frequenz 19, no. 8 (January 1965). https://doi.org/10.1515/FREQ.1965.19.8.265.

[2] Williams, Arthur Bernard, and Fred J. Taylor. Electronic Filter Design Handbook. 3rd ed. New York: McGraw-Hill, 1995. ISBN 978-0-07-070441-1

[3] Huard, D.R., J. Andersen, and R.G. Hove. Linear Phase Analog Filter Design with Arbitrary Stopband Zeros. In [Proceedings] 1992 IEEE International Symposium on Circuits and Systems, 2:839–42. San Diego, CA, USA: IEEE, 1992. https://doi.org/10.1109/ISCAS.1992.230091.
« Last Edit: November 17, 2020, 10:12:28 pm by nfmax »
 
The following users thanked this post: egonotto

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #58 on: November 17, 2020, 11:25:09 pm »
First of all, as many others have said, very good job! Especially for a one man's band.

I totally agree with nctnico's view on having more of the work done at higher processing levels, because the lower you go, the fewer people will be able to contribute. Ideally I would like to see minimal hardware with everything done in software (I know it's not possible). It's also one way this can distinguish itself from the current entry-level scopes on the market.

The use of the compute module is totally understandable, but newer versions seem liable to break the interfaces... so why not use the mainline RPi boards? That would be much more future-proof, as well as more upgrade-friendly as new and more powerful RPis come out. Just food for thought.

I always prefer a standalone instrument, because my computer (even my phone) is always too busy: running my development environment, browsing the web, email, etc. As it is, it already has too little screen space available.

And although I'm not against touch screens and agree they may have an advantage for, say, handling (a lot of) menus, I also hate grease on my screens, as it interferes with readability. Touch is also bad at precision, and there's no instant haptic feedback. As long as I can easily add some keys to the instrument (even if via a custom USB keyboard), I'm OK with it.

I can see people designing their own cases and 3D printing them.
« Last Edit: November 18, 2020, 02:12:27 am by nuno »
 

Offline Spirit532

  • Frequent Contributor
  • **
  • Posts: 487
  • Country: by
    • My website
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #59 on: November 18, 2020, 01:52:56 am »
Have you considered implementing Andrew Zonenberg's glscopeclient for your UI?
 

Offline JohnG

  • Frequent Contributor
  • **
  • Posts: 570
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #60 on: November 18, 2020, 12:50:56 pm »
This is a fantastic bit of work, so I hope it takes off.

I can only contribute a couple of requests, since this sort of work is way out of my realm. One request would be that if the front end is modular, it would be nice if it were flexible enough to accommodate some more unusual front ends, like a multi-GHz sampling scope or a VNA front end. The latter may not even make sense within the context of the project.

Again, really nice work!

John
"Reality is that which, when you quit believing in it, doesn't go away." Philip K. Dick (RIP).
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 4308
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #61 on: November 18, 2020, 01:42:44 pm »
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #62 on: November 18, 2020, 02:37:28 pm »
Still isn't adding USB-C adding on more complexity to an already complex project?
Right. Also, USB-C would take time and resources away from actual scope work! Those who suggest USB-C could start adapter development right now and share it with the author, so he can copy-paste it if he finds it useful.

To the *scope* wishlist I would add a DDC (digital down-converter) with a configurable digital IF filter to implement a (possibly but not necessarily realtime) spectrum analyzer. For those who wonder why a plain FFT is not good enough, the answer is: with just an FFT you get either frequency resolution or performance, but not both. Calculating a gazillion-point FFT is slow, as every user of the FFT option on a common scope knows. With a DDC we down-convert the frequency band of interest to DC (0Hz) so a small FFT of 128-1024 points suffices. Further reading: Tektronix MDO scope info.
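A minimal numerical sketch of the DDC idea (all frequencies and the crude boxcar decimator are illustrative, not a proposal for this scope): mix with a complex NCO, decimate, and then a small DFT resolves the band finely:

```python
import cmath, math

fs, f_center, decim = 100e6, 20e6, 64   # illustrative rates only
n_in = 8192

# Test signal: a tone 50 kHz above the centre of the band of interest.
x = [math.cos(2*math.pi*(f_center + 50e3)*n/fs) for n in range(n_in)]

# 1. Mix with a complex NCO to shift f_center down to 0 Hz.
bb = [xn*cmath.exp(-2j*math.pi*f_center*n/fs) for n, xn in enumerate(x)]

# 2. Crude low-pass + decimate (a real DDC would use CIC/FIR stages).
lp = [sum(bb[k*decim:(k+1)*decim])/decim for k in range(n_in//decim)]

# 3. A small DFT now spans only fs/decim, so each of its N bins
#    is narrow: bin width = fs/decim/N (about 12 kHz here).
N = len(lp)   # 128 points instead of a gazillion
spec = [abs(sum(v*cmath.exp(-2j*math.pi*k*n/N)
               for n, v in enumerate(lp))) for k in range(N)]
k_peak = max(range(N), key=spec.__getitem__)
f_peak = (k_peak if k_peak < N//2 else k_peak - N)*fs/decim/N
print(f"peak near {f_peak/1e3:.0f} kHz from centre")   # close to +50 kHz
```

The same frequency resolution with a direct FFT at the full sample rate would need a transform 64x larger - that is the resolution-versus-performance trade the DDC sidesteps.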
« Last Edit: November 18, 2020, 02:39:20 pm by ogden »
 
The following users thanked this post: egonotto

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #63 on: November 18, 2020, 02:45:19 pm »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstructions on trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've still yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly so at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample point before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase shifted somewhere or the trigger will jitter 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.

Similarly, if you think a 60-tap filter isn't very useful, recall that LeCroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict the thinking to DSP blocks as 18x18 multipliers, or to them being the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (LeCroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).

If memory is your cheap resource then some of the conventional assumptions are thrown out.

Interpolation for sample alignment and interpolation for display infill serve fundamentally different purposes, even if the operation is identical.

High-order analog filters have serious problems with tolerance spreads.  So, sensibly, the interpolation that aligns the samples and the correction operator should be combined, using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8-point operators.  The work required is 8 multiply-adds per output sample, which is tractable with an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version: the footprints are the same, but with lots more resources.  I am still nervous about the 7014 not having sufficient DSP blocks.
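A sketch of what such a precomputed operator table could look like (plain windowed-sinc coefficients here; the production operators would be measured/calibrated per unit, as described above):

```python
import math

TAPS, PHASES = 8, 64   # 8-point operators, 64 precomputed fractional phases

def make_bank():
    """Precompute one 8-tap windowed-sinc operator per fractional delay.
    Row p approximates a delay of p/PHASES of a sample."""
    bank = []
    for p in range(PHASES):
        d = p / PHASES
        taps = []
        for k in range(TAPS):
            t = k - (TAPS//2 - 1) - d          # tap position vs. the delay
            s = 1.0 if t == 0 else math.sin(math.pi*t)/(math.pi*t)
            w = 0.54 + 0.46*math.cos(math.pi*t/(TAPS//2))  # crude window
            taps.append(s*w)
        g = sum(taps)                           # normalise DC gain to 1
        bank.append([c/g for c in taps])
    return bank

BANK = make_bank()

def interp(samples, i, frac):
    """Sample value at time i+frac: one table lookup, 8 multiply-adds."""
    op = BANK[int(frac*PHASES) % PHASES]
    base = i - (TAPS//2 - 1)
    return sum(op[k]*samples[base + k] for k in range(TAPS))

# Sanity check on a slow sine: the half-sample estimate lands very
# close to the true value.
sig = [math.sin(0.05*n) for n in range(64)]
print(interp(sig, 30, 0.5), math.sin(0.05*30.5))
```

The table itself is tiny (64 x 8 coefficients), which is why the per-output cost stays at 8 multiply-adds regardless of how finely the phase is resolved.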

Interpolation for display can be done in the GPU where  the screen update rate limitation provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.

As for all the other features, such as spectrum and vector analysis: we have discussed those at great length.
The To Do list is daunting.

Reg


 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #64 on: November 18, 2020, 03:16:03 pm »
If anyone hasn't figured it yet, rhb (Reg) is the American contributor to this project - he's been a good advisor/counsel through this whole project
 
The following users thanked this post: egonotto, 2N3055

Offline dave j

  • Regular Contributor
  • *
  • Posts: 128
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #65 on: November 18, 2020, 04:46:25 pm »
I don't know how familiar you are with GPU programming but if you're considering using the GPU for rendering it's worth noting that the various Pi GPUs are tile based - which can have significant performance implications depending upon what you are doing.

There's a good chapter on performance tuning for tile-based GPUs in the OpenGL Insights book, which the authors have helpfully put online.

I'm not David L Jones. Apparently I actually do have to point this out.
 
The following users thanked this post: tom66, egonotto, Fungus, 2N3055

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #66 on: November 18, 2020, 06:19:43 pm »
We did have an early prototype using the Pi's GPU, but the problem was that the performance even at 2048 points was worse than a software renderer.  We (a GL-experienced friend and I) looked into using custom software on the VPU/QPU, but the complexity of writing for an architecture with so few examples put us off.  There is some documentation from Broadcom, but the tools are poorly documented (and most are reverse engineered).

One approach we considered was re-arranging the data in buffers to more closely represent the tiling arrangement of the GPU - this would be 'cheap' to do on the FPGA using a BRAM lookup.
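As a sketch of that idea, tile reordering really is just a precomputed address permutation - exactly the kind of thing a BRAM lookup does for free (sizes here are illustrative, not the VideoCore's actual tile geometry):

```python
W, H, TILE = 16, 8, 4   # illustrative sizes, not the real tile geometry

def tiled_index(x, y):
    """Address of pixel (x, y) when the buffer is stored tile-by-tile."""
    tx, ty = x // TILE, y // TILE
    tiles_per_row = W // TILE
    return ((ty*tiles_per_row + tx)*TILE + (y % TILE))*TILE + (x % TILE)

# Precompute the permutation once, like a BRAM lookup on the FPGA:
LUT = [tiled_index(i % W, i // W) for i in range(W*H)]

linear = list(range(W*H))       # stand-in pixel data (value = address)
tiled = [0]*(W*H)
for i, v in enumerate(linear):
    tiled[LUT[i]] = v

# The first tile now holds the top-left 4x4 block of the image:
print(tiled[:16])
```

Because the mapping is fixed for a given resolution, the FPGA can stream samples through the LUT at line rate with no arithmetic in the hot path.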

Thanks for the link though, that is an interesting (and perhaps useful) resource.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #67 on: November 18, 2020, 06:49:05 pm »
Perhaps an alternative could be a Jetson Nano module. Using the SO-DIMM format and costing $129 in single quantities it could be a good alternative with a GPU which is also useful for raw number crunching.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #68 on: November 18, 2020, 06:59:34 pm »
Interesting idea - but it would throw any option for battery operation out the window, if that heatsink size is any indication of the power dissipation.

Whether that would be a deal-breaker for people, I don't know.  I think it's a nice-to-have, especially if the device itself is otherwise portable.
 

Offline nuno

  • Frequent Contributor
  • **
  • Posts: 606
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #69 on: November 18, 2020, 07:03:38 pm »
I wouldn't put portability as a priority - most scopes we use aren't battery-ready from the factory.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #70 on: November 18, 2020, 07:24:01 pm »
Interesting idea - but it would throw any option for battery operation out the window, if that heatsink size is any indication of the power dissipation.

Whether that would be a deal-breaker for people, I don't know.  I think it's a nice-to-have, especially if the device itself is otherwise portable.
Well, it would simply need a bigger battery. Laptops aren't low power either. To me portability is lowest on the list though. Personally I'd prefer a large screen.

With ultra-low power you'll also be throwing out the possibility of creating a scope with >500MHz bandwidth, for example. The ADC on your prototype has a full-power bandwidth of 700MHz. Use two in tandem and you can get to 2GSa/s on a single channel in order to meet Nyquist. However, supporting 500MHz will probably require several chips, which will get warm.

The Jetson Nano seems to need about 10W of power while running at full capacity. There is also an (I think) pin-compatible Jetson Xavier NX module with vastly better specs, but this also needs more power. The use of these modules would create a lot of flexibility to trade performance for money.
« Last Edit: November 18, 2020, 08:27:12 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #71 on: November 18, 2020, 10:50:09 pm »
It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for single-channel operation (625MSa/s on 4ch).  Maximum bandwidth is around 300MHz per channel.

If you want a 500MHz 4ch scope, the ADC requirement would be ~2.5GSa/s per channel pair (dropping to 1.25GSa/s per channel with both channels of a pair active), which implies about a 5GB/s data rate into RAM.  Each 64-bit AXI Slave HPn bus in the Zynq clocked at 200MHz maxes out around 1.6GB/s, and four ports get you up to 6.4GB/s, but that would completely saturate the AXI buses, leaving no free slots for readback from the RAM or for executing code/data.

The fastest RAM configuration supported would be a dual channel DDR3 configuration at 800MHz requiring the fastest speed grade, which would get the total memory bandwidth to just under 6GB/s.

Bottom line: the platform caps out around 2.5GSa/s in total with the present Zynq 7000 architecture, and would need to move to an UltraScale part or a dedicated FPGA capture engine for faster capture rates.
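For anyone checking the arithmetic, the peak figures quoted above work out as follows (Python as a calculator; the 32-bit DDR3 width is my assumption, and sustained throughput will always be lower than peak):

```python
# Peak-rate arithmetic for the bus figures discussed above.
axi_port = 8 * 200e6         # 64-bit AXI HP port at 200 MHz, bytes/second
adc_stream = 4 * 1.25e9 * 1  # 4 channels x 1.25 GSa/s x 1 byte per sample
ddr3_peak = 1600e6 * 4       # 800 MHz DDR3 = 1600 MT/s; 32-bit bus assumed

print(axi_port / 1e9)        # 1.6 GB/s per HP port
print(4 * axi_port / 1e9)    # 6.4 GB/s with all four ports saturated
print(adc_stream / 1e9)      # 5.0 GB/s of raw capture traffic
print(ddr3_peak / 1e9)       # 6.4 GB/s peak; "just under 6" after overheads
```

So the raw capture stream alone eats most of the theoretical memory bandwidth, leaving little headroom for readback and code - which is the saturation problem described above.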

So I suppose the question is -- if you were to buy an open-source oscilloscope -- what would you prefer

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #72 on: November 18, 2020, 10:54:47 pm »

So I suppose the question is -- if you were to buy an open-source oscilloscope -- what would you prefer

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
FYI Very close to the specs of a hacked SDS2104X Plus for $1400
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #73 on: November 18, 2020, 10:56:32 pm »
I suspect an instrument like this can't ever compete with a mainstream OEM on bang-per-buck - it has to compete on the uniqueness of being a FOSS product.

That is, you can customise it, you have modularity, you have upgrade routes and flexibility.  But it will always be more expensive to produce something like this in low volumes; that's just a fact of life.
 
The following users thanked this post: egonotto, nuno

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #74 on: November 18, 2020, 10:59:17 pm »
I suspect an instrument like this can't ever compete with a mainstream OEM on bang-per-buck - it has to compete on the uniqueness of being a FOSS product.

That is, you can customise it, you have modularity, you have upgrade routes and flexibility.  But it will always be more expensive to produce something like this in low volumes; that's just a fact of life.
Or find a performance/features niche that's not currently being catered for.
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 
The following users thanked this post: Someone

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #75 on: November 18, 2020, 11:21:28 pm »
It's certainly an option, but the current Zynq solution is scalable to about 2.5GSa/s for single-channel operation (625MSa/s on 4ch).  Maximum bandwidth is around 300MHz per channel.

If you want a 500MHz 4ch scope, the ADC requirement would be ~2.5GSa/s per channel pair (dropping to 1.25GSa/s per channel with both channels of a pair active), which implies about a 5GB/s data rate into RAM.  Each 64-bit AXI Slave HPn bus in the Zynq clocked at 200MHz maxes out around 1.6GB/s, and four ports get you up to 6.4GB/s, but that would completely saturate the AXI buses, leaving no free slots for readback from the RAM or for executing code/data.

The fastest RAM configuration supported would be a dual channel DDR3 configuration at 800MHz requiring the fastest speed grade, which would get the total memory bandwidth to just under 6GB/s.

Bottom line: the platform caps out around 2.5GSa/s in total with the present Zynq 7000 architecture, and would need to move to an UltraScale part or a dedicated FPGA capture engine for faster capture rates.

So I suppose the question is -- if you were to buy an open-source oscilloscope -- what would you prefer

1. US$1200 instrument with 2.5GSa/s per channel pair (4 channels, min. 1.25GSa/s), ~500MHz bandwidth, Nvidia core, >100kwfm/s, mains only power
2. US$600 instrument with 1GSa/s multiplexed over 4 channels, ~125MHz bandwidth, RasPi core, >25kwfm/s, portable/battery powered
3. Neither/something else
I think the same design philosophy can support both options - maybe even one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25GSa/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But a good first step would probably be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.
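A quick sketch of why the sin x/x part matters at that oversampling ratio (numbers illustrative): dot-joining a 400MHz tone sampled at 1.25GSa/s - barely over 3 samples per cycle - fails badly, while windowed-sinc reconstruction recovers it:

```python
import math

fs, f = 1.25e9, 400e6            # 1.25 GSa/s sampling a 400 MHz tone
w = 2*math.pi*f/fs               # ~0.64*pi rad/sample: ~3 samples/cycle
x = [math.sin(w*n) for n in range(256)]

def sinc_interp(s, n, frac, half=24):
    """Windowed sin(x)/x estimate of s at time n+frac (sketch only)."""
    acc = 0.0
    for k in range(-half + 1, half + 1):
        t = k - frac
        c = 1.0 if t == 0 else math.sin(math.pi*t)/(math.pi*t)
        acc += c*(0.5 + 0.5*math.cos(math.pi*t/half))*s[n + k]  # Hann taper
    return acc

n = 128
truth = math.sin(w*(n + 0.5))            # true value between two samples
linear = 0.5*(x[n] + x[n + 1])           # "join the dots" estimate
print(abs(linear - truth))               # large error: linear fails here
print(abs(sinc_interp(x, n, 0.5) - truth))  # small: sinc recovers the tone
```

The linear estimate is attenuated by cos(w/2), roughly half the true amplitude at this frequency, which is why proper reconstruction is a prerequisite for running so close to Nyquist.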

@Tautech: an instrument like this is interesting for people who like to extend the functionality. Another fact is that no oscilloscope currently on the market is perfect. You always end up with a compromise even if you spend US $10k. I'm not talking about bandwidth but just basic features.
« Last Edit: November 18, 2020, 11:41:16 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #76 on: November 19, 2020, 12:17:01 am »
Shifting the dots is computationally simple even with sinx/x (which is not yet implemented).  It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple, practically perfect FPGA territory.  In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to get the last byte offset.
Noting that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front end characteristics. Expect the rendering speeds to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.
As I understand it, and please DSP gurus do correct me if I am wrong, if the front-end has a fixed response to an impulse (which it should do if designed correctly), and you get a trigger at value X but intend the trigger to be at value Y, then you can calculate the real time offset based on the difference between these samples which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC).   It's reasonably likely the LUT would be device-dependent for the best accuracy (as filters would vary slightly in bandwidth) but this could be part of the calibration process and the data burned into the 1-Wire EEPROM or MCU.

In any case there is a nice trade-off that happens as the timebase drops: you are processing fewer and fewer samples.  So, while you might have to do sinx/x interpolation on that data and more complex reconstructions on trigger points to reduce jitter, a sinx/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls.   I've still yet to decide whether the sinx/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth constrained, although not particularly so at the faster timebases, so it may not be an issue.  The FPGA has a really nice DSP fabric we might use for this purpose.

I don't think it will be computationally practical to do filtering or phase correction in the digital side on the actual samples.  While there are DSP blocks in the Zynq they are limited to an Fmax of around 300MHz which would require a considerably complex multiplexing system to run a filter at the full 1GSa/s. And that would only give you ~60 taps which isn't hugely useful except for a very gentle rolloff.
Not sure the trigger interpolation calculation is a single 8-bit lookup when the sample point before and after the trigger could each be any value (restricted by the bandwidth of the front end, so perhaps 1/5 of the full range). Sounds like an area you need to look at much more deeply, as the entire capture needs to be phase shifted somewhere or the trigger will jitter 1 sample forward/backward when the trigger point lands close to the trigger threshold. Exactly where and how to apply the phase shift depends on the scope's architecture. This may not be a significant problem if the acquisition sample rate is always >>> the bandwidth.

Similarly, if you think a 60-tap filter isn't very useful, recall that LeCroy ERES uses 25 taps to obtain its 2 bits of enhancement. P.S. don't restrict the thinking to DSP blocks as 18x18 multipliers, or to them being the only way to implement FIR filters. Similarly, while running decimation/filtering at the full ADC rate before storing to acquisition memory makes for a nice architecture concept suited to realtime/fast update rates, it's not the only way; keeping all "raw" ADC samples in acquisition memory (LeCroy style) to be plotted later has its own set of benefits and more closely matches your current memory architecture (from what you explained).

If memory is your cheap resource then some of the conventional assumptions are thrown out.

Interpolation for sample alignment and display infill are fundamentally different even if the operation is identical.

High order analog filters have serious problems with tolerance spreads.  So sensibly, the interpolation to align the samples and the correction operator should be combined using data measured in production.

The alignment interpolation operators are precomputed, so the lookup is into a table of 8 point operators.  Work required is 8 multiply-adds per output sample which is tractable with an FPGA.  Concern about running out of fabric led to a unilateral decision by me to include a 7020 version as the footprints are the same but lots more resources.  I  am still nervous about the 7014 not having sufficient DSP blocks.

Interpolation for display can be done in the GPU where  the screen update rate limitation provides lots of time and the output series are short.  So an FFT interpolator for the display becomes practical.

As for all the other features such as spectrum and vector analysis we have discussed that at great length. 
The To Do list is daunting.

Reg
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. This requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture, possibly on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is quite significant in the architecture and can't easily be bolted on later.

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, doing sinc interpolation is a significant processing cost (a time/area tradeoff), and just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable alone, the challenge is balancing all those demands within the limited resources, hence the suggestion that software-driven rendering is probably a good balance for the project as described.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #77 on: November 19, 2020, 04:04:39 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.
 
The following users thanked this post: splin

Offline shaunakde

  • Contributor
  • Posts: 11
  • Country: us
    • Shaunak's Lab
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #78 on: November 19, 2020, 04:19:53 am »
Quote
- Does a project like this interest you?   If so, why?   If not, why not?
Yes. This is interesting because I don't think there is a product out there that has been able to take advantage of the recent advances in embedded computing. I would love to see a "USB-C" scope that is implemented correctly, with excellent open-source software and firmware (maybe PicoScopes come close). Bench scopes are amazing, but PC scopes that are Python-scriptable are just more interesting. Maybe not so much for pure electronics, but for science experiments it could be a game-changer.

Quote
- What would you like to see from a Mk2 development - if anything:  a more expensive oscilloscope to compete with e.g. the 2000-series of many manufacturers that aims more towards the professional engineer,  or a cheaper open-source oscilloscope that would perhaps sell more to students, junior engineers, etc.?  (We are talking about $500USD difference in pricing.  An UltraScale part makes this a >$800USD product - which almost certainly changes the marketability.)
This is a tough one. I personally would like it to remain as it is: a low-cost device and perhaps a better version of the "Analog Discovery", just so that it can enable applications in the applied sciences etc. But logically, it feels like this should be a higher-end scope that has all the bells and whistles; however, that does limit the audience (and hence community interest), which is not exactly great for an open-source project.

Quote
Would you consider contributing in the development of an oscilloscope?  It is a big project for just one guy to complete.  There is DSP, trigger engines, an AFE, modules, casing design and so many more areas to be completed.  Hardware design is just a small part of the product.  Bugs also need to be found and squashed,  and there is documentation to be written.  I'm envisioning the capability to add modules to the software and the hardware interfaces will be documented so 3rd party modules could be developed and used.
Yes. My GitHub handle is shaunakde - I would LOVE to help in any way I can (though I will probably be most useful on documentation for now)


Quote
I'm terrible at naming products.  "BluePulse" is very unlikely to be a longer term name.  I'll welcome any suggestions.
apertumScope - coz latin :P
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #79 on: November 19, 2020, 08:20:25 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.  I'd also be stuck with whatever the manufacturer decided in their hardware design, which might limit future options.  And you're stuck if that scope gets discontinued, or has a hardware revision which breaks things.  They might patch the software route that you use to load your unsigned binary (if they implement that at all) or change the hardware in a subtle and hard-to-detect way (e.g. swap a couple of pairs on the FPGA layout).

rhb also discovered the hard way that poking around a Zynq FPGA with 12V DC on the same connector as WE# is not conducive to the long-term function of the instrument.

It's a different story with something like a wireless router because those devices are usually running Linux with standard peripherals - an FPGA is definitely not a standard peripheral.
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #80 on: November 19, 2020, 08:22:49 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

 :horse:

This has been discussed to death. It cannot be done in the time frame needed. Reverse engineering an existing scope to the point that you understand everything about its architecture is much harder than simply designing one from scratch. There have been dozens of attempts, which ended up with loading custom Linux on scopes and no way to talk to the acquisition engine. Mostly running Doom on the scope.
A scope architecture is an interleaved design: hardware acquisition, hardware acceleration, system software, scope software and app-side acceleration (GPU etc.).
Design decisions have been made by manufacturers on how to implement it. You end up with something that, in the end, won't have many more capabilities than what it had at the start. You just spent 20 engineer-years to have the same capabilities and different fonts...
The only way it would maybe make sense to try would be if an existing manufacturer opened their design (published all internal details) as a starting point. Which will happen, well, never.

A FOSS scope for the sake of FOSS is a waste of time. A GOOD FOSS scope is not... If a good FOSS scope existed, it would start to spread through academia, hobby and industry, and if really used, that would fuel its progress. Remember Linux and KiCad... When did they take off? When they became useful and people started using them, not when they became free. They existed and were free for many years, and were happily ignored by most.

And then you would have many manufacturers that would make them for a nominal price. It would be easy for them, like Arduino and its clones. Someone did the hard work of hardware design, and someone else will take care of the software... That is a dream for manufacturing: very low cost... I bet you that what we estimate now to be 600 USD could be had for 300 USD (or less) if mass manufacturing happens...
 
The following users thanked this post: Someone, Kean

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #81 on: November 19, 2020, 08:26:21 am »
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But some (not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture; it could be at sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is quite significant in the architecture and can't easily be bolted on later.

There is no need to use the fractional position in the situation where 1 wave point is plotted across 1 pixel or less.  At that point any fractionality is lost to the user unless you do antialiasing.  I prototyped antialiasing in the GPU renderer, and the results were poor because the waveform never looked "sharp".  I suspect no scope manufacturer makes a truly antialiased renderer; it just doesn't add anything.

With that in mind, your sinx/x problem only becomes an issue at faster timebases (<50ns/div), and that is where you start plotting more pixels than you have points, so input samples are read more slowly than output pixels are written.  Your sinx/x filter is essentially a FIR filter with sinc coefficients loaded in and every nth coefficient set to zero (if I understand it correctly!).  That makes it prime territory for the DSP blocks on the FPGA.
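That zeroed-coefficient structure can be sketched with the common zero-stuff-then-filter arrangement (the filter length here is an arbitrary illustrative choice): because sinc(k) is zero at every non-zero integer, every Lth tap of the filter vanishes and the original samples pass through unchanged.

```python
import numpy as np

def sinc_upsample(x, L, taps_per_side=8):
    """Display infill by factor L: zero-stuff the input, then FIR-filter
    with a Hamming-windowed sinc.  h[n] = sinc(n/L) is (essentially) zero
    at every non-zero multiple of L, so stored samples are preserved."""
    n = np.arange(-taps_per_side * L, taps_per_side * L + 1)
    h = np.sinc(n / L) * np.hamming(len(n))
    up = np.zeros(len(x) * L)
    up[::L] = x                      # zero-stuffing: L-1 zeros per sample
    return np.convolve(up, h, mode="same")
```

In an FPGA this would more likely be realised as a polyphase bank so the DSP blocks never multiply by the stuffed zeros.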

There isn't any reason why, with an architecture like the Zynq, you can't do a hybrid of the two: the software renderer configures a hardware rendering engine, for instance by loading a list of pointers and pixel offsets for each waveform.  While it would probably end up being the ultimate bottleneck, the present system is doing over 80,000 DMA transfers per second to achieve the waveform update rate, and that is ALL controlled by the ARM.  So if that part is eliminated and driven entirely by the FPGA logic, and the ARM is just processing the sinx/x offset and trigger position, then the performance probably won't be so bad.
« Last Edit: November 19, 2020, 08:28:03 am by tom66 »
 

Offline sb42

  • Contributor
  • Posts: 42
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #82 on: November 19, 2020, 08:33:22 am »
What I'd love to see is this kind of talent go into a really top notch open source firmware for one of the existing inexpensive scopes. The firmware is usually where most of the flaws are in these things, of course the problem is the difficulty of reverse engineering someone else's hardware which invariably has no documentation whatsoever these days. When aftermarket firmwares came out for consumer internet routers it was a game changer and I'm betting the same thing could be done with a scope.

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #83 on: November 19, 2020, 09:12:04 am »
I think the same design philosophy can support both options. Maybe even on one PCB design with assembly options, as long as the software is portable between platforms.

BTW, for 500MHz, 1.25 GS/s per channel is enough to meet Nyquist as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4-channel AFE board for standard high-Z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to a maximum of about 2.5GSa/s across all active channels.   That would use the bandwidth of two 64-bit AXI master ports and require a complex memory gearbox to ensure the data gets assembled correctly in RAM.  It would require at minimum a dual-channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing them.
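A quick sanity check of those numbers (assuming 8-bit samples, so one byte per sample; the helper name is mine):

```python
# With 8-bit samples, one sample is one byte, so GSa/s maps directly
# to GB/s of acquisition-memory writes.
def write_bw_gb_s(sample_rate_gsa, bits_per_sample=8):
    return sample_rate_gsa * bits_per_sample / 8.0

PRESENT_MEM_BW = 1.8   # GB/s, the quoted single-controller figure

# 2.5 GSa/s total already overruns the present controller,
# hence the dual-channel memory controller requirement:
print(write_bw_gb_s(2.5) > PRESENT_MEM_BW)   # True
# A 4ch 500MHz scope at 1.25 GSa/s per channel needs 5 GB/s of writes alone:
print(write_bw_gb_s(4 * 1.25))               # 5.0
```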

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1/ch3 mode) without a faster memory bus, AXI bus and fabric, so this platform can't make a 4ch 500MHz scope.    That pushes the platform towards the UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi), if you want to be able to do that.

If the platform is modular though then there is an option.  The product could "launch" with a smaller FPGA/SoC solution and the motherboard could be replaced at a later date with the faster SoC solution.  There would be software compatibility headaches, as the platforms would differ considerably, but it would be possible to share most of the GUI/Control/DSP stuff, I think.

The really big advantage of keeping it all within something like an UltraScale is that things become memory-addressable.  If that can also be done with PCI-e then that could be a winner.  It seems that the Nvidia card doesn't expose PCI-e ports though, which is a shame.  You'd need to do it over USB 3.0.

With UltraScale though, it would be possible to fit an 8GB SODIMM memory module and have ~8Gpts of waveform memory available for long record mode. 

That would be a pretty killer feature.  You could also upgrade it at any time (ships with a 2GB DDR4 module, install any laptop DDR4 RAM that's fast enough.)
« Last Edit: November 19, 2020, 09:13:59 am by tom66 »
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #84 on: November 19, 2020, 09:35:06 am »
Again, its all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes are dynamically aligning the channels based on the [digital] trigger, interpolating the trigger position. Which requires first determining the fractional trigger position (not trivial calculation), and then using that fractional position (at some point in the architecture, could be on sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is something which is quite significant in the architecture and can't be easily bolted on later.
There is no need to use the fractional position when in the situation where 1 wave point is plotting across 1 pixel or less.
I disagree; if you zoom in then it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #85 on: November 19, 2020, 09:38:54 am »
I disagree; if you zoom in then it will become visible (even before display "fill in" interpolation appears).

Between ADC sample rate, acquisition memory sample rate, and display points/px, there may be differences between any of those and opportunities for aliasing and jitter to appear.

While the rendering etc is all the exciting/pretty stuff, I agree with many posters above that the starting point to get people working on the project would be a viable AFE and ADC capture system. There are many ways to use that practically and trying to lock down the processing architecture/concept/limitations at the start might be putting the cart before the horse.

Well yes, but at that point you are no longer plotting 1 wave point per pixel or less.   So you can recompute the trigger position if needed when zooming in; the fractional trigger point can then be used.  In the present implementation, the trigger point is supplied as a fixed-point integer with a 24-bit integer component and an 8-bit fractional component.
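That Q24.8 packing is straightforward; a sketch of the encode/decode (field widths per the description above, helper names are mine):

```python
FRAC_BITS = 8                 # Q24.8: 24-bit integer part, 8-bit fraction

def pack_trigger(sample_index, fraction):
    """Pack a trigger position into a Q24.8 word.
    fraction is in [0, 1); resolution is 1/256 of a sample."""
    assert 0 <= sample_index < (1 << 24) and 0.0 <= fraction < 1.0
    return (sample_index << FRAC_BITS) | int(fraction * (1 << FRAC_BITS))

def unpack_trigger(word):
    """Recover (sample_index, fraction) from the packed word."""
    return word >> FRAC_BITS, (word & ((1 << FRAC_BITS) - 1)) / (1 << FRAC_BITS)
```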

Reinterpreting data when zooming in seems to be pretty common amongst scopes.  When I test-drove the SDS5104X, it had a sinx/x interpolation 'bug' which only became visible at certain timebases once stopped.  So the scope must be reinterpreting the data in RAM as needed.

« Last Edit: November 19, 2020, 09:41:46 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #86 on: November 19, 2020, 10:18:02 am »
I think the same design philosophy can support both options. Maybe even on 1 PCB design with assembly options for as long as the software is portable between platforms.

BTW for 500MHz 1.25 Gs/s per channel is enough to meet Nyquist for as long as the anti-aliasing filter is steep enough and sin x/x interpolation has been implemented properly. Neither is rocket science. But probably a good first step would be a 4 channel AFE board for standard High-z (1M) probes with a bandwidth of 200MHz.

The present Zynq solution is scalable to about a max. of 2.5GSa/s across all active channels.   That would use the bandwidth of two 64-bit AXI Master ports and require a complex memory gearbox to ensure the data got assembled correctly in the RAM.  It would require a minimum dual channel memory controller, as the present memory bandwidth is ~1.8GB/s, so you just wouldn't have enough bandwidth to write all your samples without losing them.

You can't get to 5GSa/s across all active channels (i.e. 1.25GSa/s per channel in 4ch mode or 2.5GSa/s in ch1,ch3 mode) without having a faster memory bus, AXI bus, and fabric, so this platform can't make a 4ch 500MHz scope.    Which pushes the platform towards the UltraScale part, or a very fast FPGA front end (e.g. Kintex-7) connected to a slower backbone (e.g. a Zynq or a Pi) if you want to be able to do that.

I've also done some number crunching on bandwidth. A cheap solution to get to higher bandwidths is likely using multiple (low-cost) FPGAs with memory attached, and using PCI Express to transfer the data (at a lower speed) to the processor module. With PCI Express you can likely get rid of the processor inside the Zynq as well. Multiple small FPGAs are always cheaper compared to one big FPGA.

The Intel (formerly Altera) Cyclone 10 FPGAs look like a better fit than Xilinx's offerings when it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64-bit wide, though 72-bit is possible), there is a hard PCI Express gen2 IP block (x2 on the smallest device and x4 on the larger ones), and there are 12.5Gb/s transceivers which also support the JESD204B ADC interface. On top of that, it seems Intel supports OpenCL for FPGA development, which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing, though). The prices look doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
« Last Edit: November 19, 2020, 12:17:55 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, ogden

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #87 on: November 19, 2020, 12:34:38 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.

There are enough FSMs in the present design to deal with acquisition, stream-out, DMA, trigger, etc., and getting those to behave in a stable fashion was quite the task. Keeping a small real-time processor on the FPGA makes a lot of sense. Of course there are options with soft cores here, but a hard core is preferred due to performance, and it doesn't eat into your logic area.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #88 on: November 19, 2020, 01:11:18 pm »
I'll give the Cyclone 10 devices a look.  I'd initially ruled them out due to cost but if we're looking at UltraScale it makes sense to consider similar devices.

I do think we want an ARM of some kind on the FPGA.  It makes the system control so much easier to implement and maintain.  FSMs everywhere for system/acquisition control do not a happy debugger make.
For that stuff a simple softcore will do just fine (for example the LM32 from Lattice), but it is also doable from Linux. Remember that a lot of hardware devices on a Linux system have realtime requirements too (a UART for example), so using an interrupt is perfectly fine. A system is much easier to debug if there are no CPUs scattered all over the place. One of my customers' projects involves a system which has a softcore inside an FPGA and a processor running Linux. In order to improve/simplify the system, a lot of functionality is being moved from the softcore to the main processor.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #89 on: November 19, 2020, 02:31:54 pm »
The Intel (formerly Altera) Cyclone 10 FPGAs look like a better fit than Xilinx's offerings when it comes to memory bandwidth, I/O bandwidth and price. The memory interface on the Cyclone 10 has a peak bandwidth of nearly 15GB/s (64-bit wide, though 72-bit is possible), there is a hard PCI Express gen2 IP block (x2 on the smallest device and x4 on the larger ones), and there are 12.5Gb/s transceivers which also support the JESD204B ADC interface. On top of that, it seems Intel supports OpenCL for FPGA development, which could offer a relatively easy path to migrate code between GPU and FPGA (seeing is believing, though). The prices look doable; the 10CX085 (smallest part) sits at around $120 in single quantities.
You forgot to mention that you will have to pay $4k per year in order to be able to use them :palm:

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #90 on: November 19, 2020, 02:32:56 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #91 on: November 19, 2020, 02:33:32 pm »
Sigh.  At least I can build for Zynq using Vivado WebPACK.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #92 on: November 19, 2020, 02:38:22 pm »
I'll give the Cyclone 10 devices a look.
Don't bother. It's garbage.
I'll take your word for it, but I still wonder if you can elaborate a bit more on why these devices are a bad choice.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #93 on: November 19, 2020, 02:54:39 pm »
I'll take your word for it, but I still wonder if you can elaborate a bit more on why these devices are a bad choice.
The LP subfamily, which can be used with the free version of their tools, doesn't even have memory controllers  :palm:
The GX subfamily is behind a heavy paywall ($4k a year).
You are better off using Kintex-7 devices from Xilinx. The lower-end ones (70T and 160T) can be used with the free tools, and a license for the 325T can be purchased with a devboard and subsequently used for your own designs (a Xilinx device-locked license allows using that part in any package and speed grade, not necessarily the one that's on the devboard, and it's a permanent license, not a subscription). The cheapest Kintex-7 devboard I know of that ships with a license is Digilent's Genesys 2 board for $1k, and you can find 325T devices in China for $200-300 a pop, as opposed to Digikey prices of $1-1.5k a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts: it won't be as cheap as China, but they will be fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad hoc (they appear on the market, sell their stock, and disappear forever). These devices provide up to a 64/72-bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5 Gbps (depending on package and speed grade), and all of that in convenient 1mm-pitch BGA packages with 400 or 500 user I/O balls, so you can connect a lot of stuff to it.
But most importantly, the Kintex-7 fabric is significantly faster than even the Artix-7/Spartan-7 fabric, which is faster than anything Intel offers in the Cyclone family.
« Last Edit: November 19, 2020, 03:00:37 pm by asmi »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #94 on: November 19, 2020, 03:01:06 pm »
I'll take your word for it, but I still wonder if you can elaborate a bit more on why these devices are a bad choice.
The LP subfamily, which can be used with the free version of their tools, doesn't even have memory controllers  :palm:
The GX subfamily is behind a heavy paywall ($4k a year).
You are better off using Kintex-7 devices from Xilinx. The lower-end ones (70T and 160T) can be used with the free tools, and a license for the 325T can be purchased with a devboard and subsequently used for your own designs (a Xilinx device-locked license allows using that part in any package and speed grade, not necessarily the one that's on the devboard, and it's a permanent license, not a subscription). The cheapest Kintex-7 devboard I know of that ships with a license is Digilent's Genesys 2 board for $1k, and you can find 325T devices in China for $200-300 a pop, as opposed to Digikey prices of $1-1.5k a pop. Or you can talk to Xilinx directly, and they typically provide deep discounts: it won't be as cheap as China, but they will be fully legit devices and you can be sure you can always buy them at that price, while sources in China tend to be ad hoc (they appear on the market, sell their stock, and disappear forever). These devices provide up to a 64/72-bit 933MHz DDR3 interface (~14.6 GBytes/s of bandwidth), up to 16 transceivers which can go as high as 12.5 Gbps (depending on package and speed grade), and all of that in convenient 1mm-pitch BGA packages with 400 or 500 user I/O balls, so you can connect a lot of stuff to it.

For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices). Even with the $4k subscription it doesn't take selling a lot of boards to break even.
« Last Edit: November 19, 2020, 03:04:11 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #95 on: November 19, 2020, 03:18:44 pm »
For a moment assume the $4k for the Cyclone 10 license drops to 0. Are there any technical problems with the Cyclone 10 FPGAs? The Kintex device you are proposing is about 3 times more expensive (up to 10 times when comparing Digikey prices).
I'm not interested in discussing spherical horses in a vacuum; I prefer a practical approach. And that tells me that for $4k a year I can buy quite a few Kintex devices, which can be used with the free tools. Besides, even the top-of-the-line GX device falls short of what the 325T offers, while being priced similarly to a 325T from China, with its relatively obtainable license. If we stick to free versions of the tools, the 160T offers quite a bit of resources: up to 8 12.5 Gbps MGTs, the same 64/72-bit 933 DDR3 interface, up to an x8 PCI Express 2 link (PCIe Gen 3 is possible, but you have to roll your own IP, or buy a commercial one) and all the other goodies of the 7-series family.

Even with the $4k subscription it doesn't take selling a lot of boards to break even.
Did you forget the part where it's an open source project? That means the tools must be free.

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #96 on: November 19, 2020, 03:46:27 pm »
I did consider this when I first embarked on the project, but the reverse-engineering exercise would be very significant.

And the manufacturer could decide to stop production of that model at any moment and you'll be back to square one.
« Last Edit: November 19, 2020, 03:55:10 pm by Fungus »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #97 on: November 19, 2020, 08:33:07 pm »
Did you forget the part where it's an open source project? That means the tools must be free.
Some tradeoffs need to be made here. If requiring free tools increases the price of each unit by several hundred US dollars, then that will be an issue for adoption of the platform in general. For example: one of my customers has invested nearly 20k euro in tooling to be able to work on an open source project. Something else to consider is risk versus free tools. For example: the commercial PCB package I use can do impedance and crosstalk simulation of a PCB and has extensive DRC checks for high-speed designs. Free tools like KiCad are not that advanced, so they need more time for manual checking and/or pose a higher risk of an error in the board design, which needs an expensive re-spin. At some point, software which costs several $k is well worth it just from a risk management point of view. Developing hardware is expensive. IIRC I have sunk about 3k euro into my own USB oscilloscope project (which wasn't a waste, even though I stopped the project).

Anyway, I think your suggestion of the Kintex 70T (XC7K70T) is a very good one; I had overlooked that one. Price-wise it seems to be on par with the Cyclone 10CX085, and it can do the job as well.
« Last Edit: November 19, 2020, 08:38:51 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #98 on: November 19, 2020, 09:01:57 pm »
The Zynq 7014S has 40k LUTs + an ARM processor + a hard DDR controller. 1-unit pricing starts at $89 USD.

IMO that wins over the 65k LUTs + no hard-core CPU + no DDR3 controller in the Kintex 70T -- and in my experience the MIG easily eats up 20% of the logic area for just a single-channel DDR3 controller, so I'm not convinced that would be a worthwhile trade-off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM), you're stepping up several grades of 'pure FPGA' to get there.
 
« Last Edit: November 19, 2020, 09:05:39 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #99 on: November 19, 2020, 09:10:04 pm »
The present oscilloscope fits in ~20% of the logic area of the 7014S, and that includes a System ILA which uses about 10% of the logic area. I imagine the 'render engine' would use another 10-15% if implemented in the FPGA fabric. We're not constrained by fabric capacity right now, but rather by the speed of the logic - and a lot of that is down to experience in timing optimisation, which is an area where I have a lot to learn.

Moving up a speed grade may be more beneficial than moving to a bigger device.
« Last Edit: November 19, 2020, 09:12:12 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #100 on: November 19, 2020, 09:28:18 pm »
The Zynq 7014S has 40k LUTs + an ARM processor + a hard DDR controller. 1-unit pricing starts at $89 USD.

IMO that wins over the 65k LUTs + no hard-core CPU + no DDR3 controller in the Kintex 70T -- and in my experience the MIG easily eats up 20% of the logic area for just a single-channel DDR3 controller, so I'm not convinced that would be a worthwhile trade-off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM), you're stepping up several grades of 'pure FPGA' to get there.
Well, that was the old MIG, and that made me roll my own (way more resource-efficient) DDR2 controller a long time ago. The hard IP DDR3 controllers in modern Xilinx FPGA devices, however, don't eat any logic. Realistically you can't create a DDR3 controller running at hundreds of MHz from generic IOB cells anyway; the timing needs to be trained etc. And since the Kintex 7 series is related to the Zynq series, it has exactly the same (hard IP) memory controller as the Zynq has.

The distributed oscilloscope design I made for one of my customers, based on a Spartan-6 LX45T (which has 44k logic cells), contains an LM32 soft core, timing logic, a network interface + hardware firewall (don't ask me why that is in there), various peripherals and of course the oscilloscope part. This design uses about half of the logic without much optimisation effort. The smallest Kintex part has 65k logic cells. I really don't think running out of logic resources is going to be an issue.
« Last Edit: November 19, 2020, 09:39:30 pm by nctnico »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #101 on: November 19, 2020, 09:33:17 pm »
The Kintex-7 doesn't have a hard DDR3 controller!

Spartan-6 and such do, but that was migrated over to the fabric in the later series of devices.

I haven't prototyped it on a Kintex-7, but on an Artix-7 the MIG used 20% of the device. Admittedly that was a smaller 7A35T, but that would still be 10% of the logic on the big Kintex device, assuming it maps similarly. I don't know what the effect of moving to a 32-bit controller would be, but I imagine it would further increase device utilisation. There is hardware acceleration for it in the FPGA fabric - the SERDES drivers, for instance, are optimised for DDR controllers - but it's still 'soft IP' at heart.

The biggest limitation of the Zynq (when it comes to RAM) is that in the standard speed grade the max DDR3 frequency is 533MHz. You have to go up a speed grade to get to 667MHz (or write the PLL registers and overclock, but that is asking for trouble).
« Last Edit: November 19, 2020, 09:37:02 pm by tom66 »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #102 on: November 19, 2020, 09:38:18 pm »
IMO that wins over the 65k LUTs + no hard-core CPU + no DDR3 controller in the Kintex 70T -- and in my experience the MIG easily eats up 20% of the logic area for just a single-channel DDR3 controller, so I'm not convinced that would be a worthwhile trade-off here. By the time you've implemented everything that the Zynq gives you "for free" (including 256KB of fast on-chip RAM), you're stepping up several grades of 'pure FPGA' to get there.
You missed the part that the Kintex fabric is significantly faster than the Artix fabric (close to 50% in my experience: one design that barely closes at 180 MHz in Artix fabric easily closes at 280 MHz in Kintex). And the MIG in the 160T part in the FF package can implement a 64-bit 933 MHz DDR3 interface, while the Zynq only does 32-bit 533 MHz. So not only are you getting a data bus twice as wide, you also get 400 MHz more clock frequency. All in all, it's about 3.5x the memory bandwidth. You also get 10G transceivers as opposed to 6G.
That said, you can get both in the Zynq-030 - it's got the same 2 cores as your device, you get 125K LUTs of fast Kintex fabric, you can implement 64-bit DDR3 *in addition* to what the Zynq PS provides, and you get 4 10G transceivers. And it's also covered by the free version of Vivado.
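The 3.5x figure above checks out on the back of an envelope. A quick sketch (my own arithmetic, treating the quoted MHz values as DDR3 I/O clocks, i.e. two transfers per clock):

```python
# Peak DDR3 bandwidth = (bus width in bytes) x 2 transfers/clock x I/O clock.
# Figures from the discussion: Kintex-7 MIG at 64-bit/933 MHz vs the
# Zynq-7000 hard controller at 32-bit/533 MHz.
def ddr3_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * 2 * clock_mhz * 1e6 / 1e9

kintex = ddr3_bandwidth_gbs(64, 933)  # ~14.9 GB/s
zynq = ddr3_bandwidth_gbs(32, 533)    # ~4.3 GB/s
print(f"{kintex:.1f} GB/s vs {zynq:.1f} GB/s -> {kintex / zynq:.1f}x")
```

These are peak figures; real sustained bandwidth is lower once refresh and bank-turnaround overheads are counted, but the ratio holds.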
« Last Edit: November 19, 2020, 09:41:46 pm by asmi »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #103 on: November 19, 2020, 09:50:44 pm »
I haven't prototyped it on a Kintex-7 but on an Artix-7, the MIG used 20% of the device.
MIG typically consumes 6 to 8k LUTs for DDR3 (quite a bit less for DDR2), and it obviously doesn't scale with device size. Just for the hell of it, I created an absolute monster of a controller - a dual-channel SODIMM controller for an S100 device - and it took 24K LUTs. That is a freaking 128-bit-wide data bus!
I personally think the K160T is the best midrange device - it can be used with the free tools, it allows implementing a 64/72-bit DDR3 interface with either discrete components or a SODIMM module, and it can run it pretty fast - up to 933 MHz for a single-rank SODIMM module. You also get up to 8 10G transceivers, either for talking to really high-end ADCs via JESD204B, or for connecting to your main processing block, or both.
« Last Edit: November 19, 2020, 09:57:48 pm by asmi »
 
The following users thanked this post: tom66

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #104 on: November 19, 2020, 09:53:45 pm »

Internet router: mass-market consumer product, cheap commodity hardware, Linux does everything.
Oscilloscope: completely the opposite in every way? ;)

The only way I see this happening would be for one of the A-brand manufacturers to come up with an open-platform scope that's explicitly designed to support third-party firmware, kind of like the WRT54GL of oscilloscopes. I suspect that the business case isn't very good, though.

Manufacturers of inexpensive scopes iterate too quickly for reverse engineering to be practical.

Obviously there are large differences, but the benefit is the same: open source firmware can be improved over time as deficiencies are corrected and new features are added. The market for oscilloscopes is vastly smaller than that for consumer routers, and the market for high-end, DIY, designed-from-scratch open source oscilloscopes is a tiny fraction of that already small market.

The main thing that would appeal to me about a commercial scope is the packaging: it's a nice tidy form factor with nice buttons and knobs and everything in a professional molded housing, and all of the front-end hardware is there. The place they are usually most lacking is the software side of things.

Ultimately I suppose this whole project is really only interesting to me from an academic standpoint. It's interesting to see the inner workings of a modern DSO, and it's a truly impressive achievement, but if I'm going to spend $500+ I'd buy a TDS3000 and hack it to 500 MHz, or a Siglent or Rigol and put up with potentially buggy firmware. This debate was beaten to death not too long ago: very few people are going to pay as much as or more for an incomplete open source device than it costs to buy a ready-made, off-the-shelf instrument that works right out of the box, just for the sake of it being open source.

The main advantage of open source is cost: anyone can duplicate an open source project, so competition drives cost down while innovation continues to deliver incremental improvements to the design. If the cost is not significantly lower than a commercial product of similar performance, then the market is limited to that very small segment of the population with the knowledge and desire to tinker with and improve upon it. Most people don't care; most users of open source projects never touch the source themselves, even if they like the idea that they technically could.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #105 on: November 19, 2020, 10:00:58 pm »
The Kintex-7 doesn't have a hard DDR3 controller!

Spartan-6 and such do, but that was migrated over to the fabric in the later series of devices.

I haven't prototyped it on a Kintex-7, but on an Artix-7 the MIG used 20% of the device. Admittedly that was a smaller 7A35T, but that would still be 10% of the logic on the big Kintex device, assuming it maps similarly. I don't know what the effect of moving to a 32-bit controller would be, but I imagine it would further increase device utilisation. There is hardware acceleration for it in the FPGA fabric - the SERDES drivers, for instance, are optimised for DDR controllers - but it's still 'soft IP' at heart.
After reading Xilinx UG586 I think you are right-ish (the implementation seems to be a hybrid), but I still think a lot of the logic the MIG creates can be removed, especially if data is read/written (mostly) sequentially rather than randomly. I'm not sure what the difference in complexity is between the Wishbone bus (which is simple and which I know very well) and AXI (which I know nothing about).
« Last Edit: November 19, 2020, 10:02:48 pm by nctnico »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #106 on: November 19, 2020, 10:02:40 pm »
Well, that was the old MIG, and that made me roll my own (way more resource-efficient) DDR2 controller a long time ago. The hard IP DDR3 controllers in modern Xilinx FPGA devices, however, don't eat any logic. Realistically you can't create a DDR3 controller running at hundreds of MHz from generic IOB cells anyway; the timing needs to be trained etc. And since the Kintex 7 series is related to the Zynq series, it has exactly the same (hard IP) memory controller as the Zynq has.
1. As was said, there are no hard memory controllers in the 7 series (except in Zynqs).
2. The fabric in the 7 series *is* fast enough to implement a "soft" memory controller, with a little help from some hardware blocks like phasers to implement write/read levelling.
3. Different 7 series devices have different fabric, and the difference is quite drastic: Spartan-7 and Artix-7 have the same fabric, Kintex-7 has a faster one, and I don't know about Virtex-7 but suspect it's even faster than K7.
For Zynqs, devices -020 and below have Artix fabric, while -030 and above have Kintex fabric. So they are not all the same either.
 
The following users thanked this post: tom66, nctnico

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #107 on: November 19, 2020, 10:09:47 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
« Last Edit: November 19, 2020, 10:11:42 pm by tom66 »
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #108 on: November 19, 2020, 10:13:56 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)
As far as I know it's the same as in the Kintex UltraScale+, so it should be super-fast. Overall the Zynq MPSoCs are great devices; the only two problems with them are price and packages (they are by and large very big, requiring 10-layer PCBs for a full breakout). I would love to use them in my projects, but the price...
Which is why I'm seriously looking at the Zynq-030 - 2 cores at up to 1GHz, Kintex fabric and 10G transceivers is a great combination. And you can find them in China for a reasonable amount of money, too.

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #109 on: November 19, 2020, 10:20:48 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
You also need to think about how long it will take to process the data. In my own USB design, data came in at 200Ms/s but could be processed at over 1000Ms/s. Say you have 4 channels with 500Mpts of memory and a maximum sample rate of 250Ms/s: a memory bandwidth of 1Gs/s would be enough for acquisition purposes. However, you don't want the memory bandwidth of the processing part (whether inside the FPGA or external) to become a bottleneck, especially if the bandwidth needs to be shared between sampling and processing (think about double buffering here). Otherwise things like decoding and full-record math become painfully slow.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #110 on: November 19, 2020, 10:21:23 pm »
Indeed, but the shame about the Zynq-030 is that it's the only option there in the FBG484 package.

So, if you go for the cheapest 484-ball part, you are stuck with the -030, with no route to upgrade.

You can go for the 676-ball part, which also has -035 and -045 variants, but now the BOM is larger and you might not need the extra IO. (The 400-ball part with the extra bank of the 7020 is enough for 2 x ADC interfaces plus plenty of IO to spare, with an 8-layer board to route it all out.)

I'm still not convinced I need a Kintex part. The current 7014S solution has been tested up to 1.25GSa/s, which is the maximum I could get from the HMCAD1511 before it lost PLL lock. At least some of that is down to the PLL clock output not having sufficient amplitude to meet the HMCAD ADC's specification, even at 1GHz. That is also without any line training on the SERDES inputs, and with the internal logic running at 180MHz and the ADC front end running at 1/8th of the sample clock. I do encounter timing issues above 200MHz, causing periodic AXI lockup conditions, though I believe that is down to the lack of timing optimisation. (Currently there is a -4ns worst negative slack.)

Really it depends on the goals for this project. I never considered going much beyond 2.5GSa/s, which could be achieved with a second ADC port and a 128-bit internal bus on a Zynq 7020. Moving to a Kintex might allow that to run at 250MHz with a 64-bit port, but it's not as if there is a lack of fabric capacity for a large bus now.
« Last Edit: November 19, 2020, 10:23:50 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #111 on: November 19, 2020, 10:27:52 pm »
Does anyone know what fabric is in the Zynq UltraScale?
It doesn't seem to be the same as the other 7 series devices, being on a 20nm process (Spartan/Artix/Kintex/Virtex-7 are all 28nm)

I'll have to give the DDR3 MIG a second thought.  But,  I don't think memory bandwidth is the ultimate limit here unless we were looking at sampling rates above 2.5GSa/s and those start requiring esoteric ADC parts with large BOM figures attached to them.

Could build a $3000 oscilloscope but would people really buy that in enough volume to make it worthwhile?
You also need to think about how long it will take to process the data. In my own USB design, data came in at 200Ms/s but could be processed at over 1000Ms/s. Say you have 4 channels with 500Mpts of memory and a maximum sample rate of 250Ms/s: a memory bandwidth of 1Gs/s would be enough for acquisition purposes. However, you don't want the memory bandwidth of the processing part (whether inside the FPGA or external) to become a bottleneck, especially if the bandwidth needs to be shared between sampling and processing (think about double buffering here). Otherwise things like decoding and full-record math become painfully slow.

Yes, so that is the goal: a 32-bit DDR3 interface @ 667MHz, assuming a standard Zynq 7020 in the enhanced speed grade, gets us up to 5.3GB/s.

100,000 wfm/s at ~600 points per waveform is only a read-back bandwidth of 60MB/s. It's mostly the write bandwidth you need. (You need the write bandwidth for the pre-trigger, assuming you want a (pre-trigger * nwaves) buffer bigger than the block RAM supports.)

Read bandwidth starts trending higher, strangely enough, as you go to longer timebases and the blind time shrinks as a fraction of the active acquisition time. At that point the current limitation is the CSI-2 bus to the Pi, and the Pi itself has memory bandwidth issues.

I have the capability to implement a 4-lane CSI-2 peripheral, which doubles the bandwidth to around 3.2Gbit/s (400MB/s). At that point we are nearing the capacity of PCIe or USB3, although only in one direction.

One reason to go to a 32-bit interface is that we would then have the performance available to do a write-read-DSP-write-read cycle: we could start using the DSP blocks to work on the waveform data we just acquired, and then render it for the next frame. That would allow the DSP fabric to be used in a (pseudo-)pipelined manner. I'm working on a concept for the render engine on the FPGA to see how practical it would be.
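For anyone following along, the figures in this post are easy to sanity-check. A back-of-envelope sketch (my own arithmetic, assuming 8-bit samples and taking 667 MHz as the DDR3 I/O clock, i.e. two transfers per clock):

```python
BYTES_PER_SAMPLE = 1  # 8-bit ADC samples

# 32-bit DDR3 @ 667 MHz I/O clock -> ~5.3 GB/s peak
ddr_gbs = 32 / 8 * 2 * 667e6 / 1e9
# 1.25 GSa/s sustained pre-trigger write stream
write_gbs = 1.25e9 * BYTES_PER_SAMPLE / 1e9
# 100k wfm/s at ~600 points per waveform -> 60 MB/s render readback
readback_mbs = 100_000 * 600 * BYTES_PER_SAMPLE / 1e6
# 4-lane CSI-2 at ~3.2 Gbit/s -> ~400 MB/s to the Pi
csi2_mbs = 3.2e9 / 8 / 1e6

print(f"DDR3 peak {ddr_gbs:.1f} GB/s, ADC write {write_gbs:.2f} GB/s, "
      f"render readback {readback_mbs:.0f} MB/s, CSI-2 {csi2_mbs:.0f} MB/s")
```

So the acquisition write stream plus readback uses well under half of the peak bus, which is what leaves headroom for the write-read-DSP-write-read cycle described above.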
« Last Edit: November 19, 2020, 10:31:04 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #112 on: November 19, 2020, 10:55:38 pm »
Really it depends on the goals for this project. I never considered going much beyond 2.5GSa/s, which could be achieved with a second ADC port and a 128-bit internal bus on a Zynq 7020. Moving to a Kintex might allow that to run at 250MHz with a 64-bit port, but it's not as if there is a lack of fabric capacity for a large bus now.
Maybe it is just a cost-versus-benefit (future growth) question. A Kintex would open up the option of a design with 4x 1Gs/s (maybe 1.25Gs/s to break the magic 500MHz barrier) without a major redesign of the PCB and the internal FPGA logic.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #113 on: November 19, 2020, 11:39:38 pm »
Cool! A couple of ideas:
-memory on a DIMM so it is user-expandable
-CM4, which you already mentioned.
-each acquisition channel as a PCIe card (not necessarily that form factor). The beauty of that is that you can make a machine that has 1, 2, 3, 4, 5, 6, 7, 8, 9, whatever channels you need - just plug in more cards. There are PCIe hub chips available.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #114 on: November 19, 2020, 11:49:20 pm »
The thought of using PCI Express to aggregate more channels across separate boards has crossed my mind too, but you'd need a separate FPGA on each board and a way to make triggers work across channels. You quickly end up needing to time-stamp triggers and correlate them during post-processing, because the acquisition isn't fully synchronous.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #115 on: November 20, 2020, 12:11:59 am »
Indeed, but the shame about the Zynq-030 is that it's the only option there in the FBG484 package.
Why limit yourself to 484? You can fully break out the FFG676 package on a 6-layer PCB, as it's a 1 mm pitch package, so you can fit two traces between the pads and breakout vias. Take a look at the device diagram: 3 rows go out on the top layer, 3 more on the first internal signal layer, 2 more on the second internal signal layer, and the 2 final rows on the bottom layer (since the last two rows are only partially populated, you will have enough space for decoupling caps). You only need 0.09 mm traces and spacing for the top layer - because the pads are oversized to 0.53 mm as per the Xilinx recommendation - but you can get away with 0.5 mm pads, which allows 0.1 mm traces/spacing. The other layers are even easier, because you can use 0.2/0.4 mm vias, which leaves 0.6 mm of space for 2 traces. The fab I use for 6-layer boards - WellPCB - can do even 0.08 mm traces, so you aren't even at their limit, and I've done enough boards with them to be confident that they will deliver 0.08 mm traces with no issues.
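The breakout geometry quoted above is easy to verify. A quick check (numbers assumed from the post: 1.0 mm ball pitch, 0.4 mm via pads, 0.1 mm trace and clearance on the internal layers):

```python
pitch = 1.0      # BGA ball pitch, mm
via_pad = 0.4    # breakout via pad diameter, mm
trace = 0.1      # trace width, mm
space = 0.1      # clearance, mm

gap = pitch - via_pad            # room between adjacent via pads: 0.6 mm
needed = 2 * trace + 3 * space   # two traces plus three clearances: 0.5 mm
print(f"gap {gap:.1f} mm, needed {needed:.1f} mm, fits: {needed <= gap}")
```

So two 0.1 mm traces fit between 0.4 mm via pads with 0.1 mm to spare, matching the claim.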

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #116 on: November 20, 2020, 12:33:04 am »
One reason to go to 32-bit interface is that we then have the performance available to do a write-read-DSP-write-read cycle -- we can start using the DSP blocks to work on the waveform data we just acquired,  and then render it for the next frame.  That would allow the DSP fabric to be used in a (psuedo-)pipelined manner.    I'm working on a concept for the render-engine on the FPGA to see how practical it would be.
Maybe you should consider adding another DDR3 memory interface on the PL side. That way you have the acquisition memory separated from the processing memory: data comes from the ADC straight into acquisition RAM, and then you create a processing pipeline from acquisition RAM into your main RAM. This is how most commercial oscilloscopes work; if you look closely at teardowns, you will see those separate memory devices.
This will also allow you to create more sophisticated triggers, because they can work with potentially a lot of samples right in acquisition RAM, and in doing so they won't consume bandwidth on your PS-side memory - which, as you said, is a fixed quantity that can't easily be scaled, while PL-side memory bandwidth can (the CLG484 package has banks 33, 34 and 35 fully bonded out, which allows a 64-bit memory interface). Think about stuff like triggering on receiving a certain byte (or bytes) via some peripheral bus like SPI or UART.
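A software model makes that last idea concrete. Below is a hypothetical sketch (my own construction, names and framing assumed, not anything from the project) of a UART-byte trigger: 8N1 framing, the line sampled at 16x the baud rate, firing when a chosen byte arrives. A fabric implementation would run the same state machine against the live sample stream:

```python
def uart_byte_trigger(samples, target, oversample=16):
    """samples: iterable of 0/1 line states (idle high); returns the sample
    index inside the stop bit of the first frame carrying 'target', else None."""
    i = 0
    n = len(samples)
    while i < n:
        if samples[i] == 0:                      # potential start bit
            mid = i + oversample // 2            # sample mid-bit for robustness
            if mid < n and samples[mid] == 0:    # confirmed start bit
                byte = 0
                for bit in range(8):             # 8N1, LSB first
                    idx = mid + (bit + 1) * oversample
                    if idx >= n:
                        return None
                    byte |= samples[idx] << bit
                if byte == target:
                    return mid + 9 * oversample  # a point inside the stop bit
                i = mid + 10 * oversample        # skip past this frame
                continue
        i += 1
    return None

def frame(byte, oversample=16):
    """Encode one byte as start(0) + 8 data bits LSB-first + stop(1)."""
    bits = [0] + [(byte >> b) & 1 for b in range(8)] + [1]
    return [v for bit in bits for v in [bit] * oversample]

line = [1] * 32 + frame(0x55) + [1] * 32
print(uart_byte_trigger(line, 0x55))  # fires inside the stop bit of the 0x55 frame
```

The point of doing this in PL against acquisition RAM is exactly as described: the match logic never touches PS-side bandwidth.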
« Last Edit: November 20, 2020, 12:40:18 am by asmi »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #117 on: November 20, 2020, 04:05:39 am »
Again, it's all dependent on the specific (as yet undisclosed/unclear) architecture. But (some, not all) scopes dynamically align the channels based on the [digital] trigger, interpolating the trigger position. That requires first determining the fractional trigger position (not a trivial calculation), and then using that fractional position (at some point in the architecture - it could be at sample storage or at plotting) to increase the time resolution of the digital trigger and reduce trigger jitter. This is quite significant in the architecture and can't easily be bolted on later.

So far you've both just said there is a fixed filter/delay, which is a worry when you plan to have acquisition rates close to the input bandwidth.

Equally, doing sinc interpolation is a significant processing cost (a time/area trade-off), and just one of the waveform-rate barriers in the larger plotting system. Although each part is achievable alone, the challenge is balancing all those demands within the limited resources - hence the suggestion that software-driven rendering is probably a good balance for the project as described.

I consider time-shifting data by fractional samples very trivial. And sinc(t) is not that expensive if you know how.

Data *must* be acquired at the fastest sample rate and downsampled in the FPGA. If this is not done, the data will be aliased - a depressingly common issue at all price tiers.

Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.

I have been over the DSP pipeline so many times I've lost count. The only thing I am sure of is that I will think of another improvement on the next pass.

Reg
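Since the post above calls both steps routine but doesn't show them, here is one possible construction (my own sketch, not rhb's pipeline): linear interpolation to estimate the fractional trigger position, then a short Hann-windowed sinc kernel to time-shift the record by that fraction.

```python
import math

def trigger_fraction(s0, s1, level):
    """Fraction (0..1) of the way from s0 to s1 where the level is crossed,
    by linear interpolation between the two straddling samples."""
    return (level - s0) / (s1 - s0)

def fractional_delay(samples, frac, taps=8):
    """Shift 'samples' by 'frac' of a sample using a Hann-windowed sinc kernel.
    A fabric implementation would be a short FIR with coefficients picked
    (or computed) per trigger event."""
    half = taps // 2
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k in range(-half, half):
            idx = n + k
            if 0 <= idx < len(samples):
                t = k - frac
                w = 0.5 + 0.5 * math.cos(math.pi * t / half)  # Hann window
                sinc = math.sin(math.pi * t) / (math.pi * t) if t else 1.0
                acc += samples[idx] * w * sinc
        out.append(acc)
    return out

print(trigger_fraction(90, 110, 100))  # 0.5: level crossed halfway between samples
```

With `frac = 0` the kernel collapses to an identity filter, which is a handy self-check; the cost per output sample is just `taps` multiply-accumulates, which is why a short windowed sinc is cheap in hardware.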
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #118 on: November 20, 2020, 06:31:41 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate; very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them, and you're both constantly talking around that point.
 
The following users thanked this post: rf-loop, tautech

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4104
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #119 on: November 20, 2020, 07:32:07 am »
Very interesting discussion and project.

Without going into details or a complex explanation:

Before starting anything, imho, it must be a clear, deliberate design decision that the whole digital trigger engine sits right after the ADC, always running at the ADC's full native sample rate, and able to do so repeatedly at the full designed capture speed - this is a really busy place. It needs hardware, and it needs not-so-complex but fast brute force to do it well. Rohde & Schwarz, for example, have worked small miracles here in their prime RTO models.

This "trigger engine" also needs to include the finest possible interpolation between raw ADC samples, in real time. Fine-adjusting the position for display is a simple, secondary matter. This architectural decision needs to be made, and kept, before much else; retrofitting it later is extremely difficult or even impossible in practice.

Such a trigger engine naturally needs many functions, starting from the simplest edge trigger and going up to very complex, intelligent "shape recognition" triggers - it could even be self-learning for anomalies in the signal. A clever, intelligent, full-speed trigger is the only road to The advanced oscilloscope.

The trigger is the key to intelligent and powerful glitch hunting - far more so than the over-advertised wfm/s figure, launched perhaps by Keysight because they found a way to advertise it, with everything pitched at rare "glitch hunting". If clever people have a clever scope and the job is hunting some rare glitch, they don't need a scope that captures more than once per second IF the scope simply waits for the glitch to occur, knowing what it is hunting (waiting) for. Intelligence, imho, beats brute force such as enormous repeated capture speed. Put the scope to work and leave the humans to do more important things than desperately watching the scope screen and waiting.

When the trigger engine finds a match it needs to trigger, and it still needs to do so fast - not dig the events out of acquisition memory afterwards. It needs to work in real time on the continuous ADC output stream, because that is the way to eliminate the blind-time trap in glitch hunting. Of course there needs to be some prior knowledge of what kind of anomaly we are hunting - and there are no free lunches if you want to do this well. Naturally some compromises are needed, but with more and more compromises it soon drops to el-cheapo scope level.

If the full trigger engine is not in this position and extremely well made, all the rest is useless playing - a walk along a road from problem to problem. The heart of a good oscilloscope is this trigger engine, and it is the first priority to design; it is what makes a scope good or poor. This battle first came onto the board tens of years ago, between Tek and HP. The things after it - acquisition memory, displaying and so on - are important, but if all of those are nice while the trigger engine is "easily made", the whole thing ends up in the garbage collection: a nice project that taught you a lot. It does not work to first make a scope that draws a nice image and only afterwards start thinking "oh, it needs this trigger... oh, it also needs that trigger..." No - it is the FIRST thing that needs to be designed, and deeply: a well-made, high-performance trigger engine (partially software and partially hardware) that can do everything that will later be needed.

But from what I have seen here so far about this project: amazing one-man work, really amazing.
I drive a LEC (low el. consumption) BEV car. Smoke exhaust pipes - go to museum. In Finland quite all electric power is made using nuclear, wind, solar and water.

Wises must compel the mad barbarians to stop their crimes against humanity. Where have the wises gone?
 
The following users thanked this post: tom66, Someone, nuno, 2N3055, jxjbsd, YetAnotherTechie

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16647
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #120 on: November 20, 2020, 08:06:13 am »
This "trigger engine" also needs to include interpolation between the raw ADC samples that is as fine as possible, in real time.

Not true. All you need to know is that one sample is below the trigger level and the next sample is above it (for simple rising edge trigger).

You can do all the fine interpolation much later when you go to display the trace on screen.

Which approach is better? That's harder to say.
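As a minimal sketch of the naive detector described above (my own illustration, not code from the project):

```python
# Naive rising-edge trigger over a stream of ADC samples: fire wherever
# one sample is below the trigger level and the next is at or above it.
def rising_edge_triggers(samples, level):
    """Return the indices i where samples[i] < level <= samples[i+1]."""
    return [i for i in range(len(samples) - 1)
            if samples[i] < level <= samples[i + 1]]

print(rising_edge_triggers([10, 60, 140, 200, 90, 30, 150], 128))  # -> [1, 5]
```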

 
« Last Edit: November 20, 2020, 08:11:36 am by Fungus »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #121 on: November 20, 2020, 08:10:21 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate, very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them and you're both constantly talking away from that point.

The trigger is running at the full sample rate (1GSa/s) on this prototype.   Every sample is capable of generating a trigger.

Realignment is not done for every trigger because as stated that can be done once the waveform is captured - based on the difference between the sample and the ideal trigger point.  For instance, if you want to trigger at 8'h7f but you actually got a trigger at 8'h84 then you know it is 5 counts off, so look up in your table for that given timebase for the pixel offset.

This operation only needs to be done at the waveform rate of the scope - e.g. 20k/100k times a second - and is part of the render engine, not the capture engine.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #122 on: November 20, 2020, 08:41:38 am »
All you need to know is that one sample is below the trigger level and the next sample is above it (for simple rising edge trigger).

Not generally guaranteed, e.g. for signals close to Nyquist. If two adjacent samples are e.g. 0.0 and 0.1, the analog ADC input signal can still rise up to 1.0 and come back down between the samples, while still not violating the sampling theorem. In this case your algorithm would completely miss an edge trigger at level 0.5, even though the original signal does cross that level.
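A quick numeric illustration of this (my own example values, not from the post): a full-scale sine at 0.98x Nyquist, phased so that the first few samples all stay below the trigger level even though the underlying signal swings to +/-1.0 between them.

```python
import math

fs, f, level = 1.0, 0.49, 0.5   # normalized sample rate, tone frequency, trigger level

def x(t):
    """The 'analog' input: a full-scale sine just below Nyquist (0.5 * fs)."""
    return math.sin(2 * math.pi * f * t)

samples = [x(n) for n in range(8)]        # what the ADC delivers (all < 0.5 here)
fine = [x(n / 100) for n in range(800)]   # dense view of the same signal

# No adjacent sample pair straddles the level, yet the signal clearly crosses it:
no_straddle = not any(a < level <= b for a, b in zip(samples, samples[1:]))
print(no_straddle, max(fine) > 0.99)  # -> True True
```

So a sample-pair comparator misses every crossing in this window, exactly as described.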
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #123 on: November 20, 2020, 08:52:15 am »
Certain operations must be done at acquisition sample rate.  Display is rather leisurely at 30-120 fps.
Now you're just linking concepts which have almost nothing to do with each other. Offline (CPU or otherwise) processing can be done at any rate, very few things in a digital oscilloscope have to be done at the full throughput of the ADCs, but triggering is one of them and you're both constantly talking away from that point.

The trigger is running at the full sample rate (1GSa/s) on this prototype.   Every sample is capable of generating a trigger.

Realignment is not done for every trigger because as stated that can be done once the waveform is captured - based on the difference between the sample and the ideal trigger point.  For instance, if you want to trigger at 8'h7f but you actually got a trigger at 8'h84 then you know it is 5 counts off, so look up in your table for that given timebase for the pixel offset.

This operation only needs to be done at the waveform rate of the scope - e.g. 20k/100k times a second - and is part of the render engine, not the capture engine.
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

And yes, the shift can be applied at render time, which complicates that processing (and how the acquisition filtering further changes alignment) while needing the offset value forwarded with the matching acquisition. Closely related to channel skew adjustment/trimming (which can destroy many of the naive assumptions of memory access patterns in a multichannel scope).
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #124 on: November 20, 2020, 09:08:31 am »
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

Yes, and every raw sample is piped into RAM and stored; nothing about the trigger data is thrown away.  1GSa/s sampling rate and 1GSa/s memory write rate.  The samples that generate the trigger are stored as well as pre- and post- trigger data.  No filtering, no downsampling at this point.

The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).  It will never be more than that fraction because then the next comparator would have generated the trigger instead.    When you are at the described mode where 1ns < 1pixel,  you can ignore this data because the waveform points aren't going to be plotted fractionally on the screen.  This is only needed for sinx/x modes.   You'd need LUTs for different timebases or channel configurations (1-4ch), but this is something that could be generated at 'production time' as part of the calibration process.
   
A more complex trigger engine could use several samples to inform a slope calculation.  Since the Zynq ARM has full visibility of the samples, it's possible to do a calculation like this before each waveform is sent to the interpolator or the rendering engine.  I don't think that would be terribly difficult to do either,  although it would be more complex than what I've suggested it might be less sensitive to noise on the trigger point.
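As a sketch of the sort of post-capture correction being discussed (my own simplification, using linear interpolation between the two samples around the trigger; a real engine might use more points, as noted above):

```python
# Once the waveform is in RAM, use the two samples straddling the
# hardware trigger to estimate the sub-sample crossing time, then shift
# the rendered trace by that fraction of a sample period.
def subsample_trigger_offset(s_before, s_after, level):
    """Fraction of a sample period (0..1) between the sample before the
    trigger and the actual level crossing, assuming a locally linear edge."""
    return (level - s_before) / (s_after - s_before)

# Target level 8'h7f, but the edge was sampled at 8'h76 then 8'h84:
frac = subsample_trigger_offset(0x76, 0x84, 0x7f)   # = 9/14, about 0.64
# At 1 GSa/s this is a ~0.64 ns shift to apply at render time.
```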

I would say that once you are operating at an input frequency well above the rating of the scope, you can't rely on the trigger being reliable, just as you can't rely on the amplitude being reliable.  My DS1000Z falls over at Fin > 130MHz, even though the amplitude is stable.
« Last Edit: November 20, 2020, 09:10:17 am by tom66 »
 
The following users thanked this post: Zucca

Offline Zucca

  • Supporter
  • ****
  • Posts: 4308
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #125 on: November 20, 2020, 09:34:03 am »
You guys are too smart for me, it's hard to follow.

Anyway some inputs from my side

1) Avoid anything which has Broadcom or Qualcomm chips = sporadic pain in the ass guaranteed.
2) As much RAM as possible
3) SATA port for proper SSD?

Great work!
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 
The following users thanked this post: egonotto

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #126 on: November 20, 2020, 10:03:30 am »
You keep talking about offset in counts, but trigger interpolation is in time. There isn't a lookup because the slope of the signal isn't known a-priori, interpolating the trigger point on the time axis needs at least 2 points (more is preferable) which is already an impractical size for a LUT. Yes it can be done offline (as mentioned by Fungus above), but it still needs access to the raw ADC samples that created the trigger, which are not always what is stored in the acquisition buffer.

Yes, and every raw sample is piped into RAM and stored; nothing about the trigger data is thrown away.  1GSa/s sampling rate and 1GSa/s memory write rate.  The samples that generate the trigger are stored as well as pre- and post- trigger data.  No filtering, no downsampling at this point.

The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).  It will never be more than that fraction because then the next comparator would have generated the trigger instead.    When you are at the described mode where 1ns < 1pixel,  you can ignore this data because the waveform points aren't going to be plotted fractionally on the screen.  This is only needed for sinx/x modes.   You'd need LUTs for different timebases or channel configurations (1-4ch), but this is something that could be generated at 'production time' as part of the calibration process.
I'm afraid this approach is too simplistic. The trigger comparator needs to have threshold levels to filter out noise, which also means you need to use multiple points (at least 4, but more is better) to determine the actual trigger point (= where the signal crossed the trigger level). A problem on many digital-trigger oscilloscopes (Siglent is a good example) is that the trigger point becomes a focal point in the centre and smears out the edges of a signal.

I think in the end it may turn out that doing the comparator part digitally and the positioning in software (based on separately stored samples around the trigger point) is necessary.
« Last Edit: November 20, 2020, 10:14:05 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #127 on: November 20, 2020, 10:09:27 am »
The principle is, if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches) you can calculate slope from the delta from the presumed trigger point (which is at t=0 - centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point that the oscilloscope triggered at by a fraction of the sample rate (0-1ns).
This is getting silly, you still can't provide a mathematical example of your proposed method. How can the slope be known a-priori? With an ideal AFE filter, any frequency (slope) less than the cutoff could be occurring around the trigger point.

Even with the trivial example of a perfect sine wave of constant frequency being sampled perfectly (and below Nyquist), shifting it with DC while keeping the trigger threshold static would present different slopes at the trigger point. Just the phasing of the points, when the signal frequency isn't rationally related to the sampling frequency, causes significant shifts and jitter as the waveform approaches the Nyquist rate.
 
The following users thanked this post: rf-loop, 2N3055

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #128 on: November 20, 2020, 12:11:15 pm »
I think there's a misunderstanding here as to how this would work; there's no need to do any 'a-priori' calculation, as *everything is done* after we have the *whole waveform captured and stored in RAM*.  We trigger 'roughly' and then correct the trigger point using the data we have around the trigger point.  That latter stage is software, performed well after the samples are gathered, just before any interpolation or rendering/plotting is performed.   I believe it could be done using one or at most two data points to calculate the slope of the signal at that point.  Let me go away and model it and see how wrong I am ... I've only done the calculations on paper so far, so I'm prepared to admit I could be wrong here.

The present application does, in fact, support noise filters on the trigger, but the high and low thresholds are calculated beforehand as centred around the ideal trigger point.  Hysteresis can be set to any value within reason (1..max_sample).   So on a rising edge we trigger on the high trigger level,  and only when the signal goes below the low trigger level do we generate a falling edge.  Therefore, we can work out the level based on the type of edge that we intended to trigger on;  again, we don't need to store anything other than the waveform data (and the trigger edge that we used, in case we have a trigger engine that alternates edge types.)   
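For illustration, the hysteresis scheme described above might look like this (my own sketch in Python, not the FPGA implementation):

```python
# Rising-edge trigger with hysteresis: fire when the signal crosses the
# *high* threshold, and only re-arm once it has dropped below the *low*
# threshold, so noise between the two levels cannot re-trigger.
def hysteresis_rising_triggers(samples, level, hysteresis):
    high, low = level + hysteresis // 2, level - hysteresis // 2
    armed = True
    hits = []
    for i, s in enumerate(samples):
        if armed and s >= high:
            hits.append(i)
            armed = False          # wait for the signal to fall again
        elif not armed and s < low:
            armed = True           # re-arm only below the low threshold
    return hits

# Noise wiggling between the thresholds (120, 180) fires once, not repeatedly:
print(hysteresis_rising_triggers([0, 200, 120, 180, 50, 210], 128, 40))  # -> [1, 5]
```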

There isn't currently any bandwidth filtering on the trigger samples (i.e. LF/HF/AC);  I'm not certain of the best way to implement that yet.  They may have an effect on the jitter of the scope trigger, but the bandwidth of those filters tends to be quite low (~50kHz or so) which means trigger jitter of 1ns (uncorrected) should be insignificant in those cases.  I would like to see a scope that had adjustable filtering for the trigger signal (set the -3dB point for the trigger), I've been discussing this with Reg for some time and we have a few ideas on how to go about it even on realtime data.

Don't let perfect be the enemy of good.   If this gets realistic trigger jitter ~100ps or less for a 1ns ADC clock, then I'm prepared to accept it as a perfectly sufficient way to correct when interpolation is performed; the jitter would then be less than 1 pixel visible to the user.

The waveforms I have captured so far show a visibly jitter-free capture down to 50ns/div without any trigger jitter correction, on a variety of complex waveforms as well as simple sine waves.    ~100ps or less jitter would require the jitter correction to work down to roughly 10x interpolation (5ns/div).  And, the good news is, this is scalable with the sampling rate: if the ADC is faster, then you can still do up to 10x interpolation without too many headaches.  I'm not convinced there is a great deal of benefit in going beyond 10x interpolation (at which point your scope is *really* lying to you about the signal, rather than just misleading you), but if so, it may need more thought, as ADC noise could start influencing the trigger slope detection, which may require an adjusted algorithm.
« Last Edit: November 20, 2020, 12:13:50 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #129 on: November 20, 2020, 01:42:24 pm »
A few things to look out for:
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point
- it should be possible to trigger on very slow edges as well. Think tens of micro-Volts per second

 
The following users thanked this post: egonotto

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #130 on: November 20, 2020, 01:49:35 pm »
I would say that once you are operating at an input frequency >> than the rating of the scope, you can't rely on the trigger being reliable, just as you can't rely on the amplitude being reliable.  My DS1000Z falls over on Fin > 130MHz, even though the amplitude is stable.
:o
2-3x rated BW for stable triggering is not an unrealistic expectation IME.

I'd certainly be setting my sights higher with a new design.
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline tmbinc

  • Frequent Contributor
  • **
  • Posts: 250
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #131 on: November 20, 2020, 02:47:22 pm »
tom66, thank you, this is super impressive work!

A few years ago I've worked a bit on the Siglent SDS1x0xX-E (https://github.com/360nosc0pe/fpga) reverse-engineer/hack. We've got it to a level where we can control the frontends (coupling, attenuation, BW), capture the ADC data, and push that to memory on the PL. On the PS, we had Linux and some test code to pull data out of RAM and display it; eventually the goal was to render into accumulation buffers in blockram (which is what Siglent does, hence the crappy resolution to make it fit), but didn't get that far - we never got further than basically driving the hardware correctly, but that part worked well.

Without going too much into the topic of creating new hardware vs. hacking existing hardware, I think the design shares a lot of the same choices so for an open source oscilloscope, I would be very interested in cooperating and/or potentially porting your code to this platform.

Also, nice work on the CSI-2 interface! How does your CSI-2 Phy look on the FPGA side? Do you need to implement LP support or only high-speed? This is a very elegant, cheap and fast solution for capturing a lot of data into an RPi. (I've so far always used an FT2232H in FIFO mode, but it adds significant cost, and especially on a RPi3 the USB alone eats a full CPU core due to the bad USB controller design.) I assume receiving data over CSI-2 doesn't take up a lot of CPU resources on the RPi if you can DMA large blocks.
 
The following users thanked this post: tom66

Offline tv84

  • Super Contributor
  • ***
  • Posts: 3221
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #132 on: November 20, 2020, 03:08:39 pm »
Getting interesting...  :popcorn:
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #133 on: November 20, 2020, 04:15:50 pm »
With 100MHz bandwidth and 1GHz sampling, it is customary to have 5ns/div and no visible triggering jitter. A Picoscope with those specs has 3ps RMS trigger jitter specified. That should be the target.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #134 on: November 20, 2020, 05:39:46 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.     I suppose it would be plausible to record 16 words of data either side the trigger if it so happens that the trigger is outside of the acquisition window,  but I'm not sure if this additional complexity would be worth it for a fairly unusual use case.  I will consider it, though.

- it should be possible to trigger on very slow edges as well. Think tens of micro-Volts per second

That shouldn't be an issue.  This trigger correction only starts to have an effect when the rise time is <50ns or so.  Outside of that window the naive assumption that first triggered word = trigger point is more than adequate.  The current hardware supports DC to >100MHz triggering, although the AC coupled front end obviously limits the lower end.

2-3x rated BW for stable triggering is not an unrealistic expectation IME.

I'd certainly be setting my sights higher with a new design.

Remember, this prototype runs at 1GSa/s with a rated 100MHz bandwidth.  In multiplexed mode, it has a Nyquist bandwidth of just 125MHz (4 channels enabled).  That's essentially the same as a Rigol DS1104Z or Siglent SDS1104X-E.  If the ADC is faster and you have more data, then the trigger could reliably go beyond the rated B/W of the scope,  but the B/W of a scope is an upper bound.  Most signals will have a fundamental far below the rated bandwidth.  You wouldn't look at a 50MHz square wave on a 100MHz oscilloscope and expect perfect reconstruction.  Looking at a 300MHz sine wave on a 100MHz scope and complaining that the trigger is a bit jittery would be silly, in my opinion.

I'd be curious how the competition performs here.  I may get my Zynq board to output a 200MHz clock to see how well my Rigol can trigger on it with a rated B/W limit of 100MHz.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #135 on: November 20, 2020, 05:40:51 pm »
With 100MHz bandwidth and 1GHz sampling, it is customary to have 5ns/div and no visible triggering jitter. A Picoscope with those specs has 3ps RMS trigger jitter specified. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)
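For reference, the arithmetic behind the ~31ps figure (the 1920-pixel canvas and 12 divisions are the stated assumptions):

```python
# 5 ns/div across 12 divisions, rendered into 1920 pixels of width:
ns_per_div, divisions, pixels = 5e-9, 12, 1920
print(ns_per_div * divisions / pixels)  # -> 3.125e-11, i.e. 31.25 ps per pixel
```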
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #136 on: November 20, 2020, 05:54:55 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.
Well, I can not think of a DSO which limits the pre-trigger range to the length of the acquisition record  ;) For example: My GW Instek allows me to set the pre-trigger point far outside the acquisition record. It is pretty much a requirement for being able to do jitter measurements so I'm rather surprised there are DSOs out there which have limited pre-trigger abilities.
« Last Edit: November 20, 2020, 06:00:26 pm by nctnico »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #137 on: November 20, 2020, 06:06:54 pm »
- the trigger point may be outside the acquisition data so you can't use the acquired data to calculate the trigger point

The trigger point will always be within acquisition data.  That is a limitation of this approach if you want to correct the trigger point using data available.  You'll notice that on a Rigol DS1000Z the pre-trigger is limited to roughly the waveform length (so the trigger point is at the far right hand side of the display.)  Now, that seems to be a self-imposed limit (they might be using BlockRAM for the pre-trigger or have some software limitation) and no such limit will exist in this design, but there is still the requirement to have the trigger within the data set, as that is the transition point from pre- to post-trigger state.  Post-trigger could be set to only a couple of samples - the current engine supports a minimum of 2 words or 16 samples for either the pre- or post-trigger buffers.   But, you will have to have some data around the trigger point to be able to correct it.

Even the Agilent DSOX2012A I have has a limit of -250us pre-delay in 1GSa/s sampling mode (2ch active) ... coincidentally (or not) exactly 500kpt of data?  The limit only changes when the timebase requires the ADC sample rate to drop.  The pre-trigger window stops exactly at the moment of the trigger plus a few samples.

In all cases you should have data from around the trigger ... I can't think of a DSO that does not have such a limitation.
Well, I can not think of a DSO which limits the pre-trigger range to the length of the acquisition record. My GW Instek allows me to set the pre-trigger point far outside the acquisition record. It is pretty much a requirement for being able to do jitter measurements.

Both the Rigol DS1074Z and the Agilent DSOX2012A I have do this.

The Rigol limits it to the current memory setting (on Auto, it would be 600 pts at 50ns/div).
The Agilent limits it to the total memory of the scope (~500kpts/channel).

See video from my 1000Z:


This is a necessary property of a scope with pre-trigger: since you don't know when the trigger will occur, reaching further back in time before the trigger requires more memory.

Nothing I am suggesting here is unusual ... it seems pretty much every DSO manufacturer has come across similar limitations.

Now, where there is a difference is post-trigger.  That can be done without memory, so I expect the manufacturers are saving some segment of the trigger samples to do the trigger de-jitter.  In which case, I retract a bit of what I said before about this being an edge case; that was wrong, it is a normal use case and it will need to be supported.
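To illustrate why pre-trigger depth is bounded by memory (a toy model with made-up parameters, not the project's capture engine): samples stream into a fixed-size ring buffer, and when the trigger fires only the newest `pretrig` samples are still available, because anything older has been overwritten.

```python
from collections import deque

def capture(stream, trigger_index, pretrig, posttrig):
    """Keep at most `pretrig` samples of history plus `posttrig` samples
    after the trigger; older pre-trigger samples are lost."""
    ring = deque(maxlen=pretrig)        # pre-trigger ring buffer
    post = []
    for i, s in enumerate(stream):
        if i < trigger_index:
            ring.append(s)              # pre-trigger: only the newest survive
        else:
            post.append(s)
            if len(post) == posttrig:
                break
    return list(ring) + post

# Trigger at sample 10, but only 4 samples of history fit in the buffer:
print(capture(range(100), 10, 4, 3))   # -> [6, 7, 8, 9, 10, 11, 12]
```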
 
The following users thanked this post: egonotto

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #138 on: November 20, 2020, 06:07:28 pm »
With 100MHz bandwidth and 1GHz sampling, it is customary to have 5ns/div and no visible triggering jitter. A Picoscope with those specs has 3ps RMS trigger jitter specified. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)

Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #139 on: November 20, 2020, 06:27:22 pm »
tom66, thank you, this is super impressive work!

A few years ago I've worked a bit on the Siglent SDS1x0xX-E (https://github.com/360nosc0pe/fpga) reverse-engineer/hack. We've got it to a level where we can control the frontends (coupling, attenuation, BW), capture the ADC data, and push that to memory on the PL. On the PS, we had Linux and some test code to pull data out of RAM and display it; eventually the goal was to render into accumulation buffers in blockram (which is what Siglent does, hence the crappy resolution to make it fit), but didn't get that far - we never got further than basically driving the hardware correctly, but that part worked well.

Without going too much into the topic of creating new hardware vs. hacking existing hardware, I think the design shares a lot of the same choices so for an open source oscilloscope, I would be very interested in cooperating and/or potentially porting your code to this platform.

Also, nice work on the CSI-2 interface! How does your CSI-2 Phy look on the FPGA side? Do you need to implement LP support or only high-speed? This is a very elegant, cheap and fast solution to capture lot of data into a RPI. (I've so far always used an FT2232H in FIFO mode, but it adds significant cost and especially on a RPi3, the USB alone eats a full CPU core due to the bad USB controller design.) I assume receiving data on CSI-2 doesn't take up a lot of CPU resources on the RPi if you can DMA large blocks.

Interesting project! I am impressed someone managed to do that. 

There may be some 'scope' for collaboration, so let's keep talking and see if we can help each other out.  Not sure how much would be reusable, but maybe some would be.

Regarding CSI-2: I was able to write a PLL register on an authentic Pi camera to get the clock down to 12MHz. The image goes bad (too dark, because shutter times etc. are wrong) but you can then switch the camera into a test-pattern mode. Using this, you can reverse-engineer the protocol with as little as a Rigol DS1074Z. I built a board to allow me to do this - it sits between a Pi camera and a Pi and allows me to 'snoop' on the bus between the two (see attached).

The CSI-2 PHY on the FPGA side is an implementation of Xilinx XAPP894 using the passive circuit they suggest, with custom Verilog driving a pair of OSERDESE2 blocks and a bloody complex FSM to manage the whole process of generating packets and data streams.  I prototyped this on a smaller PCB in the first run and spent a few months reverse-engineering the protocol using what documentation I could find.  It is something I really need to re-engineer at some point.  It was initially designed with a BlockRAM interface, i.e. data would be copied into BRAM and output from there.  That was sufficient for testing, but eventually I ended up bolting on an AXI-Stream interface.  So you set up a transfer of X lines of video data, each of 2048 bytes, and the AXI DMA manages the rest.  To simplify things, the two lanes terminate at the same moment (i.e. odd data lengths are fundamentally unsupported), but I want to add the capability (as CSI-2 supports) for odd line lengths and jumbo packets at some point.

Annoyingly with a Pi it is 'all or nothing'... if you don't get it all right it doesn't work at all.

One consequence of this design choice is that all packets have to be multiples of 2048 bytes; if they are not, they are padded with null bytes.  So it's not useful for small packets - those are sent over the SPI bus right now.  But the protocol is fairly robust: I can reliably transfer 180MB/s from Zynq RAM to the Pi for hours on end with zero bit errors.
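As a sketch of the padding rule just described (every transfer null-padded up to a multiple of the 2048-byte line size), something like the following would do; the helper name is hypothetical, not from the project code:

```python
def pad_to_line_multiple(payload: bytes, line_bytes: int = 2048) -> bytes:
    """Null-pad a payload so its length is a multiple of the 2048-byte
    CSI-2 line size used by this design (illustrative helper only)."""
    remainder = len(payload) % line_bytes
    if remainder:
        payload += b"\x00" * (line_bytes - remainder)
    return payload
```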

I don't implement the true LP protocol as the Pi camera doesn't use it so I don't support e.g. lane turnaround or low speed communication over that.  I do of course implement the start-of-transmission and end-of-transmission signals, and the small packet header format for SoF/EoF and the larger packet format.  Presently the CRC is set to all zeroes ... the Pi doesn't seem to use this and it makes the logic easier.

I also implement start and end sequences on the clock lane, putting the clock lane into LP mode when not transmitting.  The specification doesn't require this, but it improves reliability: if the Pi failed to sync onto the first SoT packet, it would otherwise never see any data for the duration of operation.  It also saves power (about 0.1W).

One interesting way to determine whether a device is actually utilising the checksum is to deliberately degrade the link.  In my case I added some 10pF to the D1+/D1- pair and got plenty of bit corruption, but all lines still appeared in the data and the frame was otherwise intact.  That told me the Pi ignores the checksum (or sets an ignorable error flag/increments some counter), which meant I could avoid implementing that part of the specification.

You are correct that on the Pi side this is all DMA driven so the data essentially arrives in memory at a given point and you can read it from there.  You need to be careful of a few things:
- The Pi and the transmitter both need to know how big the packet is (so if you send 2045 lines, set the receiver to 2045 lines), otherwise you get an odd effect where the first few lines are offset with garbage
- The Pi needs to be 'ready' to receive before the FPGA starts otherwise the CSI core gets into an error state

At present only the process that uses MMAL can access the data at a given pointer, which creates a few headaches.  That would be good to solve.  If you want to share the data between processes, it requires a memcpy :( because the MMAL buffer is private to a given process.  There is a Linux-kernel solution to this that a friend was looking into for me, but I need to revive that effort.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #140 on: November 20, 2020, 06:38:43 pm »
Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?

Right - OK, I didn't consider ETS.  I'm not planning on implementing it, I don't see a major benefit from it.  It may be possible to do it at lower wfm/s but at higher rates it would require the PLL to hop frequency too often. 

But, even if you have ETS, at the end of the day, when you have a sample to plot, say, at 50ps in time... it is going to land on exactly one pixel.  Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes. 

If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps jitter conveys no extra information.
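This per-pixel arithmetic is easy to check; note that the ~31ps figure corresponds to the 5ns/div case (12 divisions, 1920-pixel canvas) quoted earlier in the thread:

```python
def time_per_pixel_ps(ns_per_div: float, divs: int = 12, pixels: int = 1920) -> float:
    """Time represented by one horizontal pixel, in picoseconds."""
    return ns_per_div * divs * 1000.0 / pixels

# 5 ns/div over 12 divisions on a 1920-pixel canvas:
# 60 ns / 1920 px = 31.25 ps per pixel
print(time_per_pixel_ps(5.0))  # 31.25
```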

This applies to sinx/x too, as a sinx/x interpolator works like a regular FIR filter with most of its inputs set to zero (a 10x interpolator has 9 zero samples for every 1 sample at your input value), so you can only shift by interpolated-sample intervals.
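To make the zero-stuffing description concrete, here is a minimal pure-Python sketch of such an interpolator: insert factor-1 zeros between real samples, then run an ordinary FIR convolution with a truncated sinc kernel. A real implementation would window the kernel and use a polyphase structure; this is illustrative only, with hypothetical function names:

```python
import math

def sinc(x: float) -> float:
    """Normalised sinc: sin(pi*x) / (pi*x)."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def interpolate(samples, factor=10, taps_per_side=4):
    """sinx/x interpolation as a plain FIR over a zero-stuffed input:
    factor-1 zeros are inserted after every real sample, then the stream
    is convolved with a truncated sinc whose zeros land on the original
    sample positions (so real samples pass through unchanged)."""
    # zero-stuff the input: 9 zeros per real sample for a 10x interpolator
    stuffed = []
    for s in samples:
        stuffed.append(float(s))
        stuffed.extend([0.0] * (factor - 1))
    # truncated (unwindowed) sinc kernel
    half = factor * taps_per_side
    kernel = [sinc(k / factor) for k in range(-half, half + 1)]
    # direct-form, 'same'-length convolution
    out = []
    for i in range(len(stuffed)):
        acc = 0.0
        for j, h in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(stuffed):
                acc += h * stuffed[k]
        out.append(acc)
    return out
```

Note the consequence stated above: the interpolated trace can only be shifted in steps of one interpolated-sample interval (1/10th of the original sample period here).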

« Last Edit: November 20, 2020, 06:42:06 pm by tom66 »
 
The following users thanked this post: 2N3055

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #141 on: November 20, 2020, 07:22:02 pm »
With 100MHz bandwidth and 1GHz sampling, it is customary to have 5ns/div and no visible triggering jitter. A Picoscope with those specs has a specified 3ps RMS trigger jitter. That should be the target.

5ns/div implies (assuming a 1920-wide canvas and 12 divisions, is that fair?) about 31ps per 'virtual' sample.  How could you determine if the trigger jitter was any better than 31ps in that case?  AFAIK Picoscope doesn't plot sub-pixels (same as most scopes.)

Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?
At 100ps/div you'll definitely see a 31ps difference in delay. But then again having a 3ps RMS trigger jitter is pretty impressive. That is >US$20k oscilloscope territory. I'm not sure whether extremely low trigger jitter specs are something to aim for right now. At some point noise of the system is going to contribute a lot to the trigger jitter and it may need external circuitry to produce a clean, low jitter trigger.

Regarding pre/post trigger. It may be that I got those the wrong way around (semantics) but the point is that it should be possible to move the trigger point way to the left and have data AFTER the trigger after a very long delay.
« Last Edit: November 20, 2020, 07:23:37 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #142 on: November 20, 2020, 07:37:42 pm »
Yes, post-trigger will need to be implemented.  I will get the present implementation working with just post-trigger in memory, but will consider how to enable this to work for long post-trigger delays.  The same principle should be usable for both cases; I just need to keep a local record of samples around the trigger point if they are outside of the memory depth.

I have some DIY to catch up on at the weekend, so I may not get much time to look at this specifically, but I will still give it some "brain time".
 

Offline dave j

  • Regular Contributor
  • *
  • Posts: 128
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #143 on: November 20, 2020, 08:14:24 pm »
If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps jitter conveys no extra information.
Just because you can only plot using pixels doesn't mean you don't need to store waveform points to a higher resolution. Consider the attached image. The white lines are at five times higher pitch than the orange ones. If you were only storing points at the lower pitch the orange lines would appear identical. Not a problem for horizontal traces but for nearly but not quite vertical ones, such as fast edges, you could clearly see a difference.
I'm not David L Jones. Apparently I actually do have to point this out.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #144 on: November 20, 2020, 08:41:22 pm »
If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps jitter conveys no extra information.
Just because you can only plot using pixels doesn't mean you don't need to store waveform points to a higher resolution. Consider the attached image. The white lines are at five times higher pitch than the orange ones. If you were only storing points at the lower pitch the orange lines would appear identical. Not a problem for horizontal traces but for nearly but not quite vertical ones, such as fast edges, you could clearly see a difference.

Right - but here's the thing - the data is there.  Nothing is being lost -- it's just not being reconstructed, if that makes sense.

This relates to how the data is reconstructed into a real signal.  At any given zoom level there is little benefit in going beyond the resolution of your display device (you cannot get more pixels than are actually on the panel).  So there is no point in showing <31ps of jitter, for instance, if the minimum display resolution is 31ps, because nothing will ever make that usefully visible to the user.

If you zoom in one step then, yes, you do want to make that visible at that stage.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #145 on: November 20, 2020, 09:32:41 pm »
Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes.
The reconstruction (plotting) filter in the MegaZoom IV is matched to the expected bandwidth of the front end, so it may be difficult to see with the slower models.  But on faster models the plotting is most certainly not hard-aligned to the trigger, and can be seen to move with at least 1px of precision at 2ns/div (64px, 31ps).  Note that scope uses an analog trigger, so there is additional jitter from that hardware which isn't eliminated as it would be with a digital trigger.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #146 on: November 21, 2020, 01:39:30 am »
The principle is: if the input filter has the correct response (it needs to roll off before Nyquist so that you avoid the described Nyquist headaches), you can calculate the slope from the delta between the presumed trigger point (which is at t=0, the centre of the waveform) and the actual trigger point at t=? ... the actual trigger point will be offset from the point the oscilloscope triggered at by a fraction of the sample period (0-1ns).
This is getting silly, you still can't provide a mathematical example of your proposed method. How can the slope be known a-priori? With an ideal AFE filter, any frequency (slope) less than the cutoff could be occurring around the trigger point.

Even with the trivial example of a perfect sine wave of constant frequency being sampled perfectly (and below Nyquist) shifting it with DC while keeping the trigger threshold static would present different slopes at the trigger point. Just the phasing of the points when the frequency isn't rational with the sampling frequency causes significant shifts and jitter as the waveform approaches the Nyquist rate.

Old military electronics tech trick:    Measure the harmonics of a square wave on the spectrum analyzer to determine the slew rate.

All of this is a basic application of the Fourier transform and causality.

We have given much thought to triggering and I think we can do better than anyone else.  Not implemented yet, but nothing difficult to do.  I spent quite a bit of time on the subject. The limitation is free time to devote to the project.

If anyone wants to commit their time to working on the task I shall be pleased to advise.  It's actually quite easy if you know how.

Have Fun!
Reg
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #147 on: November 21, 2020, 01:53:46 am »
It would be better to just explain the math in detail so people know what they are getting into instead of pulling up smoke screens. Open source means full disclosure  ;D
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, egonotto

Offline snoopy

  • Frequent Contributor
  • **
  • Posts: 767
  • Country: au
    • Analog Precision
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #148 on: November 21, 2020, 06:51:05 am »
Pico 3406D supports up to 20GS/s in ETS mode, so needs triggering that can cope with that.
I'm afraid I don't understand what you mean by "doesn't plot sub-pixels (same as most scopes.)"?

Right - OK, I didn't consider ETS.  I'm not planning on implementing it, I don't see a major benefit from it.  It may be possible to do it at lower wfm/s but at higher rates it would require the PLL to hop frequency too often. 

But, even if you have ETS, at the end of the day, when you have a sample to plot, say, at 50ps in time... it is going to land on exactly one pixel.  Fractional plotting does not appear to be implemented by any mainstream OEM, I have tried Tek 3000 series, Siglent 5000X, Agilent/Keysight 2000X and 3000X,  and various Rigol scopes. 

If you have a 5ns/div timebase (12 divs, so a 60ns span) and 1920 pixels to plot your waveform points on, then each pixel represents about 31ps of time.  You cannot represent finer than this: you do not have the pixels to do so.  So there is no benefit to achieving better than pixel-perfect representation; in this case, anything better than 31ps jitter conveys no extra information.

This applies to sinx/x too, as a sinx/x interpolator works like a regular FIR filter with most of its inputs set to zero (a 10x interpolator has 9 zero samples for every 1 sample at your input value), so you can only shift by interpolated-sample intervals.

Tek TDS7XX, TDS7XXX, TDS5XXX all offer ETS. The TDS7XXX, and TDS5XXX also offer real time sinx/x interpolation probably because it has much more computational power compared to the earlier TDS7XX scopes. The ETS works extremely well on these scopes. I don't think any other vendor does it as well as Tek does. The downside to ETS is that it requires a repetitive waveform :(

cheers
« Last Edit: November 21, 2020, 06:53:39 am by snoopy »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #149 on: November 21, 2020, 10:24:51 am »
The trade off with ETS is that your waveform rate has to fall because you need to hop the PLL frequency often.

The ADF4351 I'm using takes about 80us to lock in "Fast Lock Mode" which is intended for fast channel changes, not including the time required to write the registers on the device over SPI.  In the most optimistic case, that sets your acquisition rate at 12,500 wfm/s.   Faster devices do exist but they would still end up being the ultimate limit in the system. 
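The 12,500 wfm/s figure is just the reciprocal of the lock time, ignoring the SPI register-write overhead mentioned:

```python
# Upper bound on ETS acquisition rate if every capture needs one
# ADF4351 re-lock (~80 us in fast-lock mode, SPI writes excluded):
lock_time_s = 80e-6
max_wfm_per_s = 1.0 / lock_time_s
print(round(max_wfm_per_s))  # 12500
```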

ETS is making up for poor sinx/x interpolation; you can do everything ETS does, and arguably more accurately, with a good interpolator (assuming your input is correctly bandlimited for the normal ADC sampling rate).

Working on the Python sampling model now.
« Last Edit: November 21, 2020, 10:26:38 am by tom66 »
 

Offline snoopy

  • Frequent Contributor
  • **
  • Posts: 767
  • Country: au
    • Analog Precision
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #150 on: November 21, 2020, 11:04:07 pm »
The trade off with ETS is that your waveform rate has to fall because you need to hop the PLL frequency often.

The ADF4351 I'm using takes about 80us to lock in "Fast Lock Mode" which is intended for fast channel changes, not including the time required to write the registers on the device over SPI.  In the most optimistic case, that sets your acquisition rate at 12,500 wfm/s.   Faster devices do exist but they would still end up being the ultimate limit in the system. 

ETS is making up for poor sinx/x interpolation; you can do everything ETS does, and arguably more accurately, with a good interpolator (assuming your input is correctly bandlimited for the normal ADC sampling rate).

Working on the Python sampling model now.

Yes, ETS requires many bites of the cherry to reconstruct the waveform, so the waveform update rate suffers and the incoming waveform needs to be stable during that time.  However, am I right in saying that with ETS you don't get any phase shift from an interpolation filter, which is never perfect?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #151 on: November 21, 2020, 11:30:03 pm »
ETS is making up for poor sinx/x interpolation; you can do everything ETS does, and arguably more accurately, with a good interpolator (assuming your input is correctly bandlimited for the normal ADC sampling rate).
Actually the whole point of ETS is to go (far) beyond the Nyquist limit of the ADC. One could even envision adding a sampling head which together with the 14bit version of the ADC could result in a very unique device. But implementing ETS and sampling in itself isn't that interesting. The real challenge is to be able to trigger accurately on a signal which has a very high frequency (several GHz). Several of the older high frequency DSOs (Tektronix TDS820 and Agilent 54845a for example) can't trigger on high frequency signals.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #152 on: November 22, 2020, 08:52:55 am »
That's true, at that point you'd essentially be implementing something similar to a sampling scope, to achieve ~500MHz repetitive bandwidth.

I think it's a different project, but there's no practical reason an ETS-capable variant (perhaps a software change, with hardware lacking B/W filters) couldn't be developed.

Not the focus for the first version but one of the goals for this project is upgradeability and customisability. 
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #153 on: November 23, 2020, 09:07:32 pm »
I modelled the triggering prototype using a Python script and Numpy/matplotlib.  See attached for the script, for anyone interested. 

Overall I'm quite impressed - triggering jitter was relatively low, but I need to tweak the search range and coefficients somewhat to get similar performance across all trigger levels.  I think I should move towards a sinc interpolator for the trigger predictor, but this simple linear predictor (using the error at the trigger point and the local slope based on 4 samples) gets to ~360ps jitter for a 1ns sample period with 1.5LSB of simulated ADC noise.

The biggest problem seems to be that my predictor is much less accurate at certain trigger levels - it seems to be some kind of quantisation effect.  I will have to continue tweaking to see if I can improve this.

This implementation could be performed with less than 16 bytes of memory per trigger point (5 samples + 1 timestamp), and so should be practical for long post-trigger delays where samples are normally outside of acquisition memory.  The slope approximation should be possible with an 8-bit LUT.
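For readers without the attachment, a rough sketch of the kind of linear predictor described (level error at the trigger sample divided by a local slope from the 4 surrounding samples) might look like the following. The function name and the least-squares slope estimate are my assumptions for illustration, not the actual script:

```python
def subsample_trigger_offset(samples, i, level):
    """Estimate the fractional offset (in sample periods, relative to
    index i, the first sample at/above `level`) at which the waveform
    crossed the trigger level.  Uses a least-squares slope over the four
    samples s[i-1]..s[i+2], then divides the level error by that slope.
    Illustrative sketch only."""
    s = samples
    # least-squares slope for 4 equally spaced points at unit spacing
    slope = (3.0 * (s[i + 2] - s[i - 1]) + (s[i + 1] - s[i])) / 10.0
    if slope == 0.0:
        return 0.0  # flat region: no meaningful sub-sample estimate
    return (level - s[i]) / slope

# On an ideal ramp the estimate is exact: the 2.5 crossing lies half a
# sample period before sample index 3.
print(subsample_trigger_offset([0, 1, 2, 3, 4, 5], 3, 2.5))  # -0.5
```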
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4104
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #154 on: November 24, 2020, 06:03:51 am »
Think as if you were a scope manufacturer and needed to write the datasheet.

IMHO, the jitter you show does not look good at all. Without further knowledge and analysis, 300ps or 600ps numbers give a poor first impression: horrible.

Here are some pick-ups from various scope datasheets; note also the sampling interval, and adjust these for 1ns-interval sampling.

These are just random examples. Some are good, some are "acceptable".
All of these are 8-bit ADC scopes. At least these numbers, without much further thought, look rather different and better.

Some 1GSa/s 0.5k$ scope
Trigger Jitter: < 100 ps


Some 1k$ 2Ch and 1.5k$ 4Ch, 2GSa/s scope
Trigger Jitter: CH1 - CH4: <10 ps RMS, 6 divisions pk-pk, 2 ns edge
(EXT trig: <200 ps RMS)  My note: this EXT trig is the simplest, old-fashioned analog comparator trigger pathway.


Some 5GSa/s scope
Trigger Jitter: <9 ps RMS (typical) for ≥300MHz sine and ≥6 divisions peak-to-peak amplitude for vertical gain settings from 2.5mV/div to 10V/div.
Trigger Jitter: <5 ps RMS (typical) for ≥500MHz sine and ≥6 divisions peak-to-peak amplitude for vertical gain settings from 2.5mV/div to 10V/div.

Some expensive 5GSa/s scope
Trigger jitter: full-scale sine wave of frequency set to -3 dB bandwidth, <1 ps (RMS) (meas.)

Some expensive 10GSa/s scope
Trigger and Interpolator Jitter: ≤ 3.5 ps RMS (typical)


Low trigger jitter is extremely important in scopes. How can you measure, for example, a signal's time jitter if the scope's own trigger jitter is horrible?
Good scopes may also have measurement functions that can measure and display the jitter distribution over a more or less long time. That is a waste of time if the scope's own jitter is bad.

As I said in my previous message, the whole trigger engine is one of the most important parts of a good scope. Anyone can draw nice-looking images on a screen, but not everyone can build a high-performance trigger engine.

The topic name here includes "High performance". Why?

Now you say "Overall quite impressed - triggering jitter was relatively low..." and show some kind of simulation of trigger jitter with horrible-looking numbers.

As you go forward with this trigger engine and the things related to it, you can do some training with this: imagine you are a manufacturer and you need to reach a trigger-engine performance where you can write "Trigger jitter < 20ps RMS" in the datasheet.

Whatever nice things and nice-looking waveform drawing a scope has, even high-performance this and that, if it does not have a high-performance trigger engine it is just a more or less nice-looking thing to decorate the lab. The same goes for analog front-end quality and sampling quality. Everyone knows: garbage in, garbage out...

Still, it looks like a nice project... but it says "High performance"... so try to keep it that way. At least for the trigger, which is perhaps the most difficult part of an oscilloscope, when aiming for the "High performance" < "High End" < "State of the art" class you need to be much better than normal.
I drive a LEC (low el. consumption) BEV car. Smoke exhaust pipes - go to museum. In Finland quite all electric power is made using nuclear, wind, solar and water.

Wises must compel the mad barbarians to stop their crimes against humanity. Where have the wises gone?
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #155 on: November 24, 2020, 08:23:47 am »
The trigger prototype here is intended to demonstrate the concept of a trigger based on data around the digital trigger point - it is not necessarily the final implementation.  In the best cases the trigger jitter is <20ps RMS, but there is presently an unresolved dependency on the trigger level.

You are not wrong that 300ps would not be ideal for a 'real scope'.  The initial goal of the prototype is to replicate the performance of a 1GSa/s oscilloscope at a ~$500 price point, so 100ps or less is a fair goal; "High Performance" in this regard refers to the state of the art for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10MSa/s, not 1GSa/s.

If I am to aim for something around the 2.5GSa/s oscilloscope benchmark, then I need to make the jitter better, of course.

31ps is the level at which the jitter becomes indistinguishable on the display surface, assuming a 1080p display so there is little point (at a minimum timebase of 5ns/div) in achieving anything better, for a 1GSa/s oscilloscope.  If a 2.5GSa/s oscilloscope has a 2ns/div or 1ns/div setting, then the requirements drop to around 5-10ps.
« Last Edit: November 24, 2020, 08:27:30 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #156 on: November 24, 2020, 08:51:09 am »
The trigger prototype here is intended to demonstrate the concept of a trigger based on data around the digital trigger point - it is not necessarily the final implementation.  In the best cases the trigger jitter is <20ps RMS, but there is presently an unresolved dependency on the trigger level.

You are not wrong that 300ps would not be ideal for a 'real scope'.  The initial goal of the prototype is to replicate the performance of a 1GSa/s oscilloscope at a ~$500 price point, so 100ps or less is a fair goal; "High Performance" in this regard refers to the state of the art for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10MSa/s, not 1GSa/s.
I agree. AFAIK the R&S RTM3000 doesn't even specify trigger jitter, and judging from how fat the trace gets around the trigger point, it isn't very good. But then again this oscilloscope isn't made for jitter analysis.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2732
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #157 on: November 24, 2020, 07:07:03 pm »
"High Performance" in this regard refers to the state of the art for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10MSa/s, not 1GSa/s. 
Oh, so that explains some things. I also had a question in my mind of what exactly is "high performance" about a 1GSa/s scope. Initially I thought it was ETS with very high analog bandwidth, but now it appears that it simply means "less crap than what's already out there in the open-source sphere".

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #158 on: November 24, 2020, 08:36:04 pm »
Perhaps you should read the opening post?    :-//
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #159 on: November 25, 2020, 12:26:11 am »
"High Performance" in this regard refers to the state of the art for existing open-source oscilloscope projects, many of which are based around PC oscilloscope platforms or sample at 10MSa/s, not 1GSa/s. 
Oh, so that explains some things. I also had a question in my mind of what exactly is "high performance" about a 1GSa/s scope. Initially I thought it was ETS with very high analog bandwidth, but now it appears that it simply means "less crap than what's already out there in the open-source sphere".

This started as an open source version of a product that Micsig now has on the market at a price point a bit higher than Tom's initial goal.  We certainly won't try to undercut the Chinese.   My aborted "Scope Wars" thread was an attempt at documenting the results of my market research of what we viewed as competitive product:  Rigol, Instek, Siglent.  At the time Micsig was not on the radar.

Once I got involved it morphed into a "beat the crap out of HPAK, Tek & R&S" project for me. And I *think* I have almost sold Tom on that.

The current goal is under active discussion.  A lot has changed in the last 18 months.

The canonical response to "I want "high performance" is, "How much money would you spend?"

Have Fun!
Reg
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #160 on: November 25, 2020, 12:55:22 am »
The thought of using PCI Express to aggregate more channels over separate boards has crossed my mind too, but you'll need separate FPGAs on each board and ways to make triggers across channels happen. You quickly end up needing to time-stamp triggers and correlate during post-processing because the acquisition isn't fully synchronous.
PCI for data dump only. there would be a dedicated ribbon cable carrying the 'qualifier signals' for trigger. kinda like what they do with graphics cards.  the realtime stuff does not go over pci. it is fpga to fpga comms. you would not need too many signals. could use wired-or  wired-and principle. i'm ok with one fpga per baords. would be smaller. you could do a 2 channel acquisition board. then you can do 2 4 6 8 input machines.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed, induced or compensated by my employer(s).
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #161 on: November 25, 2020, 09:01:53 am »
In the present implementation the data is stored in a 68-bit-wide FIFO: 64 bits are ADC samples, 1 bit is a trigger indicator and 3 bits are the trigger index.

I only store the 64 bits of data in actual RAM presently, though.  There is only one trigger pointer for any given sample so that goes into a state register which is latched into the acquisition linked list on an interrupt.  I want to replace the acquisition control on the CPU with an FPGA control engine, as the interrupts and slow AXI-slave interface limit the total waveform rate.  I could get a very minimal blind time if the FPGA is doing everything (if I'm very clever - and I like to delude myself into thinking I am - it might be zero lost samples between individual waveforms with frequent triggers.)
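To make that layout concrete, here is a rough sketch of packing and unpacking such a 68-bit word in Python. The field ordering and helper names are illustrative only, not taken from the actual scopy-fpga sources:

```python
# Hypothetical sketch of a 68-bit FIFO word: eight 8-bit ADC samples
# (64 bits), a 1-bit trigger flag, and a 3-bit index of the triggering
# sample within the word. Bit ordering here is an assumption.
def pack_fifo_word(samples, trigger, trig_index):
    """Pack eight 8-bit samples plus trigger metadata into a 68-bit word."""
    assert len(samples) == 8 and 0 <= trig_index < 8
    word = 0
    for s in samples:
        word = (word << 8) | (s & 0xFF)        # 64 bits of sample data
    word = (word << 1) | (1 if trigger else 0)  # 1-bit trigger flag
    word = (word << 3) | trig_index             # 3-bit trigger index
    return word

def unpack_fifo_word(word):
    """Recover the samples and trigger metadata from a packed word."""
    trig_index = word & 0x7
    trigger = bool((word >> 3) & 0x1)
    data = word >> 4
    samples = [(data >> (8 * (7 - i))) & 0xFF for i in range(8)]
    return samples, trigger, trig_index
```

The round trip preserves all fields, and the packed value fits in 68 bits as described.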
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #162 on: November 28, 2020, 10:20:34 pm »
Hi all,

I made the decision to release all the FPGA designs, software (including Python application) and hardware designs under the MIT licence now. 

scopy-fpga contains the application code, FPGA design, IP repositories and STM32F0 firmware for the system. 
https://github.com/tom66/scopy-fpga

scopeapp is the Python application that runs on the Raspberry Pi that provides the UI.  It contains the rendering engine and rawcam capture libraries.
https://github.com/tom66/scopeapp

scopy-hardware contains hardware designs (schematics, gerbers, STEP file) for the design in CircuitMaker (interested parties can get an invite to the project on CM too, just PM me.)
https://github.com/tom66/scopy_hardware

What I want to do now is build a community of interested individuals and see where we can take this project as I think from the interest here it clearly 'has legs'.   The existing hardware platform is quite capable but I would like to do more and want to flesh out the modular capability and investigate higher performance tiers.  There is obviously debate over where this project can go and I think there are many interested parties who would use it and would be interested in contributing.  There is also a commercial aspect to be considered.

I will release a survey/Google Forms tomorrow, to gather some thoughts and then see where to go from there.

And if it goes nowhere, that's fine.  It's been a fun project to work on,  and maybe what I've developed so far can help others.
 
The following users thanked this post: egonotto, DEV001

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #163 on: November 28, 2020, 10:26:16 pm »
As I wrote before: I want to spend some time on creating a 1 MΩ analog frontend that is compatible with the HMCAD1520 / HMCAD1511 ADCs. This will need some number crunching first.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #164 on: November 29, 2020, 06:49:24 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi. 
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #165 on: November 29, 2020, 06:50:17 pm »
Questionnaire for those interested in the project.  

I'd appreciate any responses to understand what features are a priority and what I should focus on.

https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/formResponse

All responses will be anonymised and a summary of the results will be posted here (when sufficient data exists.)
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #166 on: November 29, 2020, 07:09:02 pm »
HMCAD1520

Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #167 on: November 29, 2020, 09:52:54 pm »
HMCAD1520

Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.

Other customers?  Perhaps?
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #168 on: November 30, 2020, 08:09:37 am »
Analog really doesn't seem too interested in selling these. Those lead times ...

In an ADC market filled with boutique rip off prices these always stood out a bit too much.

Yes, it is an odd part, but I've heard from a few people familiar with the ADC market that, if you know who you are talking to, you can get inexpensive Chinese parts with surprisingly decent performance that easily beat Western equivalents in performance per buck.  A great deal of that has been driven by the budget oscilloscope and test equipment market, as well as cheaper RF SDR and amateur radio kit.  Digi-Key and the like only tend to stock mainstream parts that are worth carrying.

Fundamentally there's not much that's too specialised about ADC design now - these designs are decades old and we have audio cards with 24-bit ADCs running at 192kHz ... this is sort of like the opposite end of the performance spectrum - it's a process problem, not a design problem.

The HMCAD1520 is available on Digi-Key, they have decent stock (~299 parts) and a 14 week lead time for more, which seems OK to me.     I had no issue buying the HMCAD1511 when building the first prototypes, though I only bought two.

I'd imagine ADI only keep these parts and don't develop additional variants because they have existing customers that are happy with them from when they bought Hittite (the part actually comes from Arctic Silicon's "Blizzard" family of ADCs.  They are/were a Norwegian firm that Hittite acquired before ADI acquired them.)  But, it would be nice to see more lower cost parts.

My plan is to figure out a multiplexing arrangement where two ADC chips could be used to sample at 2.5GSa/s.  I have already managed to get a HMCAD1511 stable at >1.2GSa/s.  That would enable a realistic 2.5GSa/s oscilloscope (400ps sample period) with say 350MHz per channel B/W in 1 channel mode.  I also suspect that the '1520 features might only be lasered out (or they may not be disabled at all!) on the '1511,  as the two ADCs seem to use very similar cores/structures - though I am yet to confirm this. 
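As a quick sanity check on those numbers, here is a small sketch of the time-interleaving arithmetic. The clock-phase scheme shown is hypothetical, not the actual (still to-be-designed) multiplexing arrangement:

```python
# Illustrative arithmetic for time-interleaving N ADC chips. The evenly
# spaced clock-phase scheme is an assumption for the example only.
def interleave(n_adcs, per_adc_rate_hz):
    """Combined rate, sample period and nominal clock phases for N ADCs."""
    combined_hz = n_adcs * per_adc_rate_hz
    sample_period_ps = 1e12 / combined_hz
    # Each ADC would sample on an evenly spaced clock phase.
    phases_deg = [360.0 * i / n_adcs for i in range(n_adcs)]
    return combined_hz, sample_period_ps, phases_deg

rate, period_ps, phases = interleave(2, 1.25e9)
# Two chips at 1.25 GSa/s give 2.5 GSa/s combined, i.e. a 400 ps sample
# period, with the second chip clocked 180 degrees out of phase.
```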
« Last Edit: November 30, 2020, 08:11:33 am by tom66 »
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #169 on: November 30, 2020, 07:06:38 pm »
From HMCAD1520 datasheet:
Quote
High Speed Modes (12-bit / 8-bit)
Quad Channel Mode: Fsmax = 160 / 250 MSPS
Dual Channel Mode: Fsmax = 320 / 500 MSPS
Single Channel Mode: Fsmax = 640 / 1000 MSPS

I'm wondering what's up with the 640 MSPS?
The AC specifications of the HMCAD1520 are only given up to 640 MSPS, not for 1000.
And the Max. Conversion Rate is specified as 640 as well (1 ch).
When does the 1000 apply? Does it only apply in HMCAD1511 compatibility mode (8-bit)?
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #170 on: November 30, 2020, 07:12:11 pm »
I'm wondering whats up with the 640 MSPS?
The AC specifications of the HMCAD1520 are only given up to 640 MSPS, but not for 1000.
And Max. Conversion Rate is specifed as 640 as well (1 ch).
When do the 1000 apply? Do they only apply in HMCAD1511 compatibility mode (8-bit)?

Yes, it's 1GSa/s in 8-bit mode, 640MSa/s in 12-bit and 160MSa/s in 14-bit mode.
IMO 14-bit mode is a bit useless, but it's probably a consequence of the internal 14-bit core (which the HMCAD1511 shares).
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #171 on: November 30, 2020, 09:26:06 pm »
I guess you mean 105 MSa/s in precision mode, right? (160 is obviously for 12-bit high speed @ 4 ch.)

I don't think that precision mode is really useless.
The main point of precision mode is IMO not the 14 bits, but the following:

Quote
the high speed modes all utilize interleaving to achieve high sampling speed. Quad channel mode interleaves 2 ADC branches, dual channel mode interleaves 4 ADC branches, while  single  channel  mode  interleave all 8 ADC branches. In precision mode interleaving is not required and each ADC channel uses one ADC branch only.

This eliminates interleaving spurs, leading to a significantly better SFDR and SINAD.

The cost is a maximum sampling rate of 105 MSa/s - but with all 4 channels enabled.
So with 4 channels enabled, the precision-mode sampling rate is only a factor of ~1.5 lower than the 160 MSa/s of the 12-bit, 2-fold-interleaved high-speed mode. I find this trade-off not too bad.
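Working that trade-off out numerically, with the figures from the datasheet excerpts quoted above:

```python
# Figures (in Hz) from the HMCAD1520 datasheet excerpts quoted above,
# comparing 12-bit high-speed quad-channel mode to precision mode.
highspeed_12bit_4ch = 160e6   # 2-fold interleaved, per channel
precision_4ch = 105e6         # no interleaving, all 4 channels active
ratio = highspeed_12bit_4ch / precision_4ch
# ratio is roughly 1.52 - the "factor ~1.5" referred to above,
# bought in exchange for eliminating interleaving spurs.
```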
« Last Edit: November 30, 2020, 09:29:23 pm by gf »
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #172 on: November 30, 2020, 09:36:02 pm »
I guess you mean 105 MSa/s in precision mode, do you? (160 is obviously for 12-bit high speed @4ch)

That's what I get for not double checking the datasheet and quoting from memory.

Quote
the high speed modes all utilize interleaving to achieve high sampling speed. Quad channel mode interleaves 2 ADC branches, dual channel mode interleaves 4 ADC branches, while  single  channel  mode  interleave all 8 ADC branches. In precision mode interleaving is not required and each ADC channel uses one ADC branch only.

This eliminates interleaving spurs, leading to a significantly better SFDR and SINAD.

The cost is a maximum sampling rate of 105 MSa/s - but with all 4 channels enabled.
So with 4 channels enabled, the precision mode sampling rate is only by a factor ~1.5 lower than the 160 MSa/s for 12-bit 2-fold interleaved high speed mode. I find this trade-off not too bad.

OK, that's actually a really good point and one I didn't consider. Thanks! 
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #173 on: December 05, 2020, 11:43:42 am »
Thanks for all the comments so far and for those who have filled out the survey.  For anyone who has missed it, please submit your response here:

https://docs.google.com/forms/d/e/1FAIpQLSdm2SbFhX6OJlB834qb0O49cqowHnKiu7BEsXmT3peX4otOIw/viewform

All responses are appreciated - I am looking to make an announcement in the new year regarding the direction of this project.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #174 on: December 05, 2020, 12:18:50 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised where, using the FPGA only as the very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will freeze a lot of functionality and lock many people out of being able to help improve this design.
« Last Edit: December 05, 2020, 12:22:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #175 on: December 05, 2020, 04:33:43 pm »
All the FPGA should be doing is digital phosphor accumulation.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #176 on: December 05, 2020, 04:38:52 pm »
All the FPGA should be doing is digital phosphor accumulation.
No, not at this point in the project. This can be done in software just fine.

If you look at Siglent's history you'll notice they have rewritten their oscilloscope firmware at least 3 times from scratch before getting to where they are now. Creating oscilloscope firmware is hard and it is super easy to paint yourself into a corner. The right approach is to get the basic framework set up first (going through several iterations for sure) and then optimise. IMHO the value of this project is going to be in the flexibility to make changes / add new features. If people want crazy high update rates they can buy an existing scope and be done with it.

For example: if the open source platform lets you add a Python or C/C++ based protocol decoder in a couple of hours then that is a killer feature. Especially if the development environment already runs on the oscilloscope, so no software installation for cross compiling or whatever is needed. If OTOH you'd need to get a Vivado license first and spend a couple of days on understanding the FPGA code then nobody will want to do this.

A good example is how the Tektronix logic analyser software can be extended by decoders: https://xdevs.com/guide/tla_spi/
« Last Edit: December 05, 2020, 04:43:28 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno, JPortici

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #177 on: December 05, 2020, 04:39:45 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised where, using the FPGA only as the very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will freeze a lot of functionality and lock many people out of being able to help improve this design.

The problem with software rendering is that you can't do as much in software as you can with dedicated hardware blocks. The present rendering engine achieves ~23k wfm/s and is about as optimised as you can get on a Raspberry Pi ARM processor, taking maximum advantage of the cache design and hardware hacks.  And that is without vector rendering, which currently roughly halves performance.

An FPGA rendering engine should easily be able to achieve over 200k wfm/s and while raw waveforms rendered per second is a case of diminishing returns (there probably is not much benefit with the 1 million waves/sec scopes from Keysight - feel free to disagree with me here) there is still some advantage to achieving e.g. 100k wfm/s which is where many US$900 - 1500 oscilloscopes seem to be benchmarking.

This also frees the ARM on the Pi for more useful things - while 100k wfm/s might theoretically be possible if all four ARM cores were busy, would this be a good thing? The UI would become sluggish, and features like serial decode, which in all likelihood would also depend on the ARM processor, would suffer in performance.

As for maintainability, that shouldn't be as much of a concern. Sure, it is true that the raw waveform engine may not be maintained as much (it is a 'get it right and ship' thing in my mind), but the rest of the UI and application will be in userspace, including cursors, graticule, that sort of thing.  In fact, it is likely that all the FPGA renderer will do is pass out a rendered image of the waveform for a given channel which the Pi or other applications processor can plot at any desired location.  Essentially, as Marco states, the FPGA is doing the digital phosphor part which is the thing that needs to be fast.  The applications software will always have access to the waveform data too.
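To illustrate the digital-phosphor part in isolation, here is a minimal Python sketch of hit-count accumulation. The buffer geometry and names are made up for the example and are not from ArmWave or the FPGA design:

```python
# Minimal digital-phosphor sketch: every captured waveform increments
# per-pixel hit counts in an accumulation buffer, and the display then
# maps counts to intensity. Grid size and mapping are illustrative.
WIDTH, HEIGHT = 8, 256            # tiny grid: 8 X columns, 256 Y codes

def accumulate(buf, waveform):
    """Increment the hit count at (x, sample value) for one waveform."""
    for x, y in enumerate(waveform):
        buf[y][x] += 1

buf = [[0] * WIDTH for _ in range(HEIGHT)]
for _ in range(1000):             # 1000 identical mid-scale waveforms
    accumulate(buf, [128] * WIDTH)

# Map accumulated counts to an 8-bit display intensity with saturation;
# the >> 2 scaling factor here is arbitrary for the example.
intensity = min(255, buf[128][0] >> 2)
```

The FPGA would do exactly this accumulation step at line rate; the applications processor only needs the finished buffer.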
« Last Edit: December 05, 2020, 04:41:38 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #178 on: December 05, 2020, 04:44:57 pm »
Trust me, nobody cares about waveforms per second! It is not a good idea to pursue a crazy high number just for the sake of achieving it. There are enough ready-made products out there for sale for the waveforms/s aficionado. IIRC the Lecroy Wavepro 7k series tops out at a couple of thousand without any analysis functions enabled.

You have to define the target audience. What if someone has a great idea on how to do color grading differently, but that part is 'fixed' inside the FPGA with no way to change it? Also, with rendering fixed inside the FPGA you basically end up with math traces for anything else, and you can't easily build a waveform processing pipeline (like GStreamer does, for example).

I'm 100% sure that the software and GPU approach offers the best flexibility and is the way of the future (also for oscilloscope manufacturers in general). A high end version can have a PCI Express slot which can accept a high end video card to do the display processing. The waveforms/s rate goes up immediately and doesn't take any extra development effort. Again, look at the people upgrading their Lecroy Wavepro 7k series with high end video cards.
« Last Edit: December 05, 2020, 04:55:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Zucca

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #179 on: December 05, 2020, 04:52:58 pm »
No, not at this point in the project.
For a minimum functional prototype to get some hype going that makes sense; high-capture-rate digital phosphor and event detection are high end features. Budgeting some room/memory for them in the FPGA costs very little time though.
« Last Edit: December 05, 2020, 04:56:02 pm by Marco »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #180 on: December 05, 2020, 04:57:27 pm »
No, not at this point in the project.
For a minimum functional prototype to get some hype going that makes sense, high capture rate digital phosphor is a high end feature. Budgeting some room/memory for it in the FPGA costs very little time though.
But it is just one trick you don't really need, and it seriously hampers the rest of the functionality. Look at how limited the Keysight oscilloscopes are; basically one-trick ponies. If you use a trigger then the chance of capturing a specific event is 100% and you don't need to stare at the screen without blinking. At this moment time is better spent on extending the trigger system so it can trigger on specific features and sequences of a signal.
« Last Edit: December 05, 2020, 05:14:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #181 on: December 05, 2020, 05:44:59 pm »
Digital phosphor is more for getting a general idea of how the circuit is behaving; using it to detect by eye whether a signal goes beyond bounds seems kinda silly. High capture rates are also valuable for fault detection and also benefit from being implemented in the FPGA. The two features are orthogonal ... but for a minimum prototype the FPGA implementation of both could be delayed, even if the latter has higher priority.
« Last Edit: December 05, 2020, 05:48:34 pm by Marco »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #182 on: December 05, 2020, 06:23:23 pm »
The point is that you can do 'digital phosphor' just fine in software; doing it in the FPGA right now just hampers the progress of the project and doesn't add much in terms of usefulness. Look at high end signal analysis oscilloscopes; none of them have high waveform update rates. It is just that Keysight has been hyping this as a useful feature on their lower end gear, while it isn't. Also realise that the highest waveform update rate happens at one very specific time/div setting only. A high update rate has never helped me solve a problem. Deep memory and versatile trigger methods are much more useful.
« Last Edit: December 05, 2020, 06:26:59 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #183 on: December 05, 2020, 06:38:57 pm »
Well, there's always the option for both.  The present Python application architecture has support for different rendering engines - ArmWave is just the only one presently implemented but FPGAWave would also be an option.  In that case, the user would have an option to select their preference, and the Zynq SoC would select the required data stream and mode for the CSI transfer engine.

Personally, one of the benefits I find from high waveform render rates is that jitter and ringing are more clearly understandable - I can see how frequent an event is.

Also - the peak wfm/s rate is one measure of performance, but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 you need a minimum of 256 * 60 = 15,360 wfm/s, but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer for digital phosphor to avoid too much stairstepping, which implies a correspondingly higher capture/render rate - more so at higher zoom levels, where there is much more than 1 wave point per displayed X column.
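That arithmetic can be stated as a tiny helper (refresh rate and buffer depths as discussed; the function name is just for illustration):

```python
# To resolve N distinct intensity levels on a display refreshing at
# refresh_hz, the capture/render rate must be at least N * refresh_hz
# waveforms per second (one count per level per frame).
def min_wfm_rate(levels, refresh_hz=60):
    """Minimum waveforms/second to populate `levels` intensity levels."""
    return levels * refresh_hz

assert min_wfm_rate(256) == 15360   # the ~15.3k wfm/s figure

# A 12-bit accumulation buffer against an 8-bit display leaves headroom
# for gamma correction: 2**(12 - 8) = 16 counts per displayed level.
headroom = 2 ** (12 - 8)
```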
« Last Edit: December 05, 2020, 06:41:39 pm by tom66 »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #184 on: December 05, 2020, 06:49:17 pm »
When you're initiating the capture of a waveform based on stuff happening hundreds/thousands of samples after a simple/protocol trigger, I'm not sure calling it flexible triggering does that justice.

It's high capture rate pass/fail testing. A feature which can really still wait for a minimum viable prototype, just stick to simple triggers for the moment.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #185 on: December 05, 2020, 07:30:53 pm »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most, and a portion of those bits is lost to gamma and intensity correction. Secondly, you can't see very dark colors, so the intensity has to start somewhere halfway. So at the hardware level you are limited to about 100 levels. And then there is the limit of what the human eye can distinguish. If you have 32 or maybe 64 different levels you have more than enough to draw a meaningful picture. However, intensity grading just mimics analog oscilloscope behaviour; it doesn't add much in terms of usefulness. Color grading or reverse intensity (see my RTM3000 review) are far more useful for looking at a signal than 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative; with just 8 levels there will be a clear binning effect of how often a trace hits a spot.
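One possible way to realise such coarse binning, as a hedged sketch - the logarithmic mapping and thresholds here are illustrative, not a proposal from the project:

```python
import math

# Map per-pixel hit counts onto a small number of intensity bins on a
# logarithmic scale, so each bin conveys "how often the trace hits this
# spot" and rare events still get a visible level. Illustrative only.
def bin_count(hits, max_hits, n_bins=8):
    """Map a hit count to one of n_bins levels (0 = never hit)."""
    if hits <= 0:
        return 0
    frac = math.log1p(hits) / math.log1p(max_hits)
    # Clamp to [1, n_bins - 1] so any hit at all is at least level 1.
    return max(1, min(n_bins - 1, round(frac * (n_bins - 1))))
```

With 8 bins the binning effect is deliberate: adjacent levels correspond to clearly different hit frequencies rather than imperceptible shades.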
« Last Edit: December 05, 2020, 08:17:24 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, JamesLynton, JPortici

Offline JamesLynton

  • Contributor
  • Posts: 35
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #186 on: December 05, 2020, 08:53:41 pm »
Very sensible on the intensity binning idea, I like that idea *a lot* :)
It may also help to have an adjustable +/- exponential tracking curve to assign the binning transition spread on the fly, for when you are trying to tease out 'data' whose repetition rate frequently isn't quite statistically linear.

Also, awesome project. After being rather disappointed by the UI, features and performance of all the commercial PC-based dongle scopes I've seen so far, this immediately looks really nice.
« Last Edit: December 05, 2020, 08:56:24 pm by JamesLynton »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #187 on: December 06, 2020, 09:20:42 am »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all a TFT panel can use 8 bits at most however a portion those bits are lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC, which is supplied with gamma reference levels. The DACs for each pixel column interpolate between these references (linearly, but it's a close approximation).  The resulting effect is that all 256 codes have a useful and distinct output and the response is linear.   This is the slight absurdity of VGA feeding digital LCD panels: the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.
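A simplified model of that column-DAC interpolation, with made-up reference voltages purely for illustration:

```python
# Simplified model of a panel column DAC: a handful of gamma reference
# voltages are supplied externally and the DAC interpolates linearly
# between them for intermediate codes. Reference values are made up.
def gamma_dac(code, refs):
    """Piecewise-linear interpolation of an 8-bit code between gamma taps."""
    n_seg = len(refs) - 1              # number of linear segments
    seg_len = 256 / n_seg              # codes per segment
    seg = min(int(code / seg_len), n_seg - 1)
    t = (code - seg * seg_len) / seg_len
    return refs[seg] + t * (refs[seg + 1] - refs[seg])

refs = [0.0, 1.0, 2.5, 5.0]            # 4 taps -> 3 linear segments
mid = gamma_dac(128, refs)             # halfway through the middle segment
```

With enough taps the piecewise-linear curve closely tracks the true gamma curve, which is why every one of the 256 codes still lands on a distinct, useful output level.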

Secondly you can't see very dark colors so the intensity has to start somewhere half way. So at the hardware level you are limited to 100 levels. And then there is the limit of what the human eye can distinguish.
If you have 32 or maybe 64 different levels you have more than enough to draw a meaningfull picture. However, intensity grading is just mimicing analog oscilloscope behaviour; it doesn't add much in terms of usefullness. Color grading or reverse intensity (see my RTM3000 review) are far more usefull to look at a signal compared to 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative in terms of providing meaningfull information; with just 8 levels there will be a clear binning effect of how often a trace hits a spot.

Many people would say the human eye can distinguish at least 10 bits of intensity resolution, possibly more.  Obviously not all that useful on an 8-bit panel, but it is a bit of a fallacy to say the human eye is the limit here.  It is true that totally dark colours are not as useful, but this is what the intensity control on most oscilloscopes adjusts - the minimum displayed brightness.  It is still probably fair to say at least 200 of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values then saturate.  But you need intensity bins deep enough to store this data to make use of that function.

I would agree that colour grading is really useful and perhaps more useful than regular intensity grading though it depends on the application.  Really what we're looking at here is having enough resolution in the internal buffers to then use this data, either with simple intensity grading or with arbitrary colour grading. The present ArmWave renderer supports regular intensity grading, inverted, and rainbow/palette modes.

Edit: fixed typo
« Last Edit: December 06, 2020, 10:46:21 am by tom66 »
 
The following users thanked this post: rf-loop

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #188 on: December 06, 2020, 10:58:42 am »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most, and a portion of those bits are lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC, which is supplied with gamma reference levels. The DACs for each pixel column interpolate (linearly, but it's a close approximation) between these reference levels.  The resulting effect is that all 256 codes have a useful and distinct output and the output is linear.   This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.
Well, I'm doing a lot with TFT panels in all shapes and sizes but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller which does gamma correction. But what goes into the panel is still 8 bit.

And there is also a difference between being able to see different shades and how many different shades you can actually interpret. Sometimes less is more. If you look at the Agilent 54835A, for example, you'll see that the color grading uses binning: every color is assigned a specific bin which says how many waveforms have been captured inside that bin. IMHO you have to be very careful not to hunt for eye candy (or worse: analog scope emulation, which hides part of the signal by definition) but think about ways to show a signal on screen that provide meaningful information about the signal.
« Last Edit: December 06, 2020, 11:48:44 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #189 on: December 06, 2020, 02:28:50 pm »
Many inexpensive panels generate the gamma voltages internally in the source drivers to save the cost of external references, but this absolutely is a thing:

https://www.ti.com/lit/ds/symlink/buf12800.pdf  as an example.  When I was a student I made fair bank replacing AS15-F gamma reference ICs on T-con boards for LCD televisions. They would commonly fail, causing a badly distorted or inverted image.

The voltages steer the output codes of the DAC.  The panel input data is indeed 8-bit and the DAC has only 256 valid output codes, but the output is nonlinear.  An additional signal from the T-con flips the output from the 7.5V - 15V range to 7.5V - 0V for pixel inversion (maintaining zero net bias). This is common amongst most LCD panels, although there are some older/cheaper panels that use 6-bit DACs with looser gamma correction and dithering.

You could do an experiment:  put a 256-level gradient on a display of your choice; provided it is wide enough, you should be able to see distinct stair-stepped bands.  If the gradient has nonlinear steps, then the gamma correction is done before the DACs.  If it has linear bands, then no gamma correction is applied to the digital output.
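If anyone wants to try this experiment, here's a minimal sketch (names and dimensions are arbitrary) that writes a 256-step horizontal grey ramp as a binary PGM file, which most image viewers will open:

```python
# Generate a 256-step horizontal grey gradient as a binary PGM file.
# Shown full-screen, non-uniform band widths/steps in perceived
# brightness suggest gamma correction applied ahead of the panel DACs.

WIDTH, HEIGHT = 1024, 128  # 4 px per grey level

def gradient_rows(width, height):
    # One row of 0..255 values, repeated for every scanline.
    row = bytes((x * 256) // width for x in range(width))
    return [row] * height

def write_pgm(path, width, height, rows):
    # P5 = binary portable graymap, maxval 255.
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        for row in rows:
            f.write(row)

rows = gradient_rows(WIDTH, HEIGHT)
write_pgm("gradient.pgm", WIDTH, HEIGHT, rows)
```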
« Last Edit: December 06, 2020, 02:33:16 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #190 on: December 06, 2020, 03:11:53 pm »
It is still probably fair to say at least 200 codes of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values now saturate.
The problem with this approach is that you basically are displaying something which is not quantifiable. When testing oscilloscopes people often use AM modulated signals to create a pretty picture. But that picture doesn't say anything about the signal. OTOH if you use fixed level binning then the number of visible levels actually says something about the AM modulation depth.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #191 on: December 06, 2020, 04:05:47 pm »
Quote
Well, I'm doing a lot with TFT panels in all shapes and sizes but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller which does gamma correction. But what goes into the panel is still 8 bit.

If a panel takes an input signal with a color depth of only 8 bits per channel, then the signal must be gamma-encoded (i.e. not linear), otherwise a single quantization step would be clearly visible in dark regions and one could not display smooth gradients. Human vision is not linear. Uniform luminance spacing is not perceptually uniform either: human vision can distinguish smaller luminance steps in dark regions than in bright regions.

Regarding discernible shades of gray: human vision can adapt to several decades of luminance (e.g. bright outdoor sunlight vs. indoor candlelight), but at a particular adaptation state it cannot distinguish more than about 100 gray levels (with perceptually uniform spacing from black to white). If I wanted to be able to distinguish adjacent bins clearly, I'd not use more than 32 bins.

Quote
This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

The aim is that the display outputs linear luminance. So the LCD column driver needs to undo the gamma encoding of the input signal, and additionally compensate for any non-linearity of the LC cell's voltage-to-transmittance transfer function.

Instead of using a non-linear DAC, this could also be done with a LUT in the digital domain. Then the DAC could be linear, but it would need more bits (and most of its levels would be unused).
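To illustrate the LUT idea (pure gamma 2.2 and a 12-bit linear DAC assumed here; real panels use measured curves):

```python
# Undo gamma encoding digitally: map each 8-bit gamma-encoded code
# to a code for a hypothetical 12-bit *linear* DAC. Most of the 4096
# linear codes are never used, and the used codes cluster at the
# dark end where the eye is most sensitive.

GAMMA = 2.2
DAC_BITS = 12
DAC_MAX = (1 << DAC_BITS) - 1

def build_gamma_lut(gamma=GAMMA, dac_max=DAC_MAX):
    # 8-bit gamma-encoded input -> linear-light DAC code.
    return [round(((code / 255.0) ** gamma) * dac_max)
            for code in range(256)]

lut = build_gamma_lut()
used = len(set(lut))  # at most 256 of the 4096 codes ever appear
```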
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #192 on: December 06, 2020, 05:50:45 pm »
FWIW, in grad school I created 256-step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780.  The steps were not visible.

There is a lot of folklore about the sensitivity of the human eye which may be readily disproved by simple experiment.  While the eye is very sensitive to color, that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #193 on: December 06, 2020, 06:52:22 pm »
FWIW In grad school I created  256 step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780.  The steps were not visible.

There is a lot of folk lore about the sensitivity of the human eye which may be readily disproved by simple experiment.  While the eye is very sensitive to color,  that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg

You are very correct on this. That is why all kinds of colour-grading displays were invented.

Nico is right: if you're displaying pixel hit frequency/distribution and encoding it in pixel intensity, there has to be compression of all values, from a minimum that is clearly visible (but obviously dimmed) up to full pixel intensity. So there is an obvious 'nothing', a clearly visible level meant for a single hit, and maximum brightness for pixels that get lit up all the time.  You cannot go from 0, and it probably has to be nonlinear. What people are used to is simply the response characteristic of phosphor: it compresses on the high side - once it gets bright enough it won't get any brighter, the dot will just start to bloom.

I also agree with Nico about colour grading. I cannot comprehend why more manufacturers don't use reverse grading (to highlight rare events rather than frequent ones - you want to see the outliers).

Regards,
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #194 on: December 06, 2020, 09:20:16 pm »
Reverse grading seemed obvious to me.  Hence the present code supports it, although it's not exposed in the UI.

The rendering engine presently has a 16-bit accumulator, as 8-bit was insufficient without saturating arithmetic. In reality I think something like a 12-bit buffer would be sufficient.    The resulting 16-bit values are taken through a palette lookup to produce the final pixel value. So inverting the palette is pretty simple - just flip the table (you only want to exclude the zeroth entry so you don't write pixels everywhere.)
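The accumulate-then-palette scheme can be sketched like this (illustrative names only, not the actual ArmWave API):

```python
# Hit counts accumulate into a 16-bit buffer with saturation (the
# reason 8 bits failed: without saturation the counts wrap and the
# brightest pixels suddenly go dark). Rendering then maps counts
# through a palette LUT; flipping the table, except entry 0, gives
# reverse grading for free.

ACC_MAX = 0xFFFF

def accumulate(acc, hits):
    # Saturating per-pixel add of new hit counts.
    return [min(a + h, ACC_MAX) for a, h in zip(acc, hits)]

def render(acc, palette):
    # Palette lookup; count 0 maps to entry 0 ('no hits').
    top = len(palette) - 1
    return [palette[min(a, top)] for a in acc]

def reverse_palette(palette):
    # Flip the table but keep entry 0 so empty pixels stay empty.
    return [palette[0]] + list(reversed(palette[1:]))
```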

It really depends on what you want to achieve from intensity grading.  I think there's a mix of uses:

- Some users just want more detail than just 'hit' or 'not hit', and to see the approximate intensity of a pixel indicating the energy in that area (I suspect this is the primary category of user.)  These users expect their DSO to behave roughly the same as every other DSO, although obviously there are opportunities to improve this behaviour.

- Some users are doing things like eye-diagram or jitter analysis, where setting a threshold such that you can say '<10% of events hit this bin' could be useful.  In this case I suspect these users benefit from either reverse intensity grading or rainbow/custom palette grading.

- Others just expect a DSO to behave like an analog scope, especially in XY mode.  I suspect this is a relatively small category of user, and this user drives the inclusion of 'variable persistence' modes in most modern oscilloscopes.

« Last Edit: December 06, 2020, 09:25:07 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #195 on: December 06, 2020, 09:29:59 pm »
One other thing:-

The present prototype, when in 'Auto' memory depth (which is currently the only memory depth exposed to the user and otherwise behaves similarly to the Rigol DS1000Z 'Auto' function), uses all available RAM as a history buffer. With 256MB of RAM and at 50ns/div (~23k wfm/s, 610 pts), this gives approximately 17 seconds of history buffer that is recorded in real time.  In my mind, this is far more useful than any infinite or variable persistence feature, and as far as I can tell, only Siglent expose this in normal use - which led to Dave complaining about it, as it was turned on by default.    As far as I can see, there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, the instrument will have less record time, but it should always have the amount of memory available that the user requests.

Most people know this function as segmented memory.  The only difference is that this is a continuously active segmented memory function, which adapts to the current settings to make the most of the memory available.  It avoids the headache of pressing the 'STOP' button and missing the trigger by a few milliseconds.

This is one time the user might want to turn down the waveform rate: e.g. reducing the update rate to 1k wfm/s would increase the recorded time to over 6 minutes.  Giving the user that trade-off is valuable (it is pretty much always found on scopes with segmented memory).  Depending on the future platform choice, I expect a later version of the scope to support at least 1GB of RAM, which would give around 900 Mpts of usable waveform memory.  So at 23k wfm/s, the instrument could record ~1 minute of waveform history, and the user could select any one of those timestamped frames or analyse any single capture.
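The figures above follow from simple arithmetic. A back-of-envelope sketch (one byte per sample assumed, per-segment overhead ignored, which is why it lands a little above the quoted ~17 s):

```python
# History-buffer duration = (memory / record length) / waveform rate.

def history_seconds(mem_bytes, pts_per_wfm, wfm_per_s):
    segments = mem_bytes // pts_per_wfm   # how many records fit
    return segments / wfm_per_s           # seconds of real-time history

t = history_seconds(256 * 1024 * 1024, 610, 23_000)    # ~19 s raw
slow = history_seconds(256 * 1024 * 1024, 610, 1_000)  # ~7 min at 1 kwfm/s
```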
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #196 on: December 06, 2020, 09:47:36 pm »
One other thing:-

The present prototype when in 'Auto' memory depth (which is currently the only memory depth exposed to the user and otherwise behaves similarly to the Rigol DS1000Z 'Auto' function) uses all available RAM as a history buffer. With 256MB of RAM and at 50ns/div (~23k wfm/s, 610 pts), this gives approximately 17 seconds of history buffer that is recorded in real time.  In my mind, this is far more useful than any infinite or variable persistence feature, and as far as I can tell, only Siglent expose this in normal use - which led to Dave complaining about it as it was turned on by default.    As far as I can see, there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, then the instrument will have less record time, but should always have the amount of memory available that the user requests.
There are a few remarks to be made here:

1) Siglent and Lecroy scopes only capture enough data to fill the screen, regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope; it simply doesn't suit all use cases.

2) Having a history buffer running in the background is standard on Yokogawa and R&S oscilloscopes as well. The memory left over after the user's memory depth selection (which can be set to auto, meaning use just enough memory to fill the screen) is used as a history buffer.

3) Segmented recording is close to history mode, but the user selects a specific record length and number of records instead of the oscilloscope doing this automatically. The distinction is between the oscilloscope determining something automatically versus the user being very specific in order to tailor the oscilloscope configuration to a particular measurement. Having a history buffer with 100k segments while the user is only interested in 5 is counterproductive.

4) Variable and infinite persistence are required on a DSO. I regularly use infinite persistence for tests which take hours to weeks. I just want to see the extents of where a signal goes (and that doesn't need crazy high update rates).

Another nice feature to have is detailed mask testing. Again, it seems oscilloscope makers aim for high update speeds, but in doing so they throw the baby out with the bathwater. To give an example: I have a product which outputs a low and a high frequency signal for several seconds. A 10Mpts oscilloscope can sample this signal with enough detail, however it turns out that mask testing seems to use peak detect and decimates the data to a couple of hundred points. It would be nice to be able to compare traces with a length of 10Mpts (or more). It doesn't matter if it is slow; it will always be faster and more accurate than checking a signal visually.
« Last Edit: December 06, 2020, 10:46:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #197 on: December 06, 2020, 11:16:00 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours, I don't know if you have a problem explaining it or misunderstand how it works (which, honestly, I think you don't).

I think the best way to explain it is that LeCroy is sample-rate defined: the sample buffer length is calculated in time (not samples) and matches the displayed timebase, with a defined maximum.
That means it will keep the sample rate and retrigger rate as high as possible at all times, until it reaches the maximum memory allowed, and only then will it start dropping the sample rate.

That is a very good strategy for a general purpose scope because it maximises the retrigger rate and captures only the data needed for the time span we are interested in. It is simple to think about from the operator's standpoint: I have 120ns of data; it was taken at 5GS/s, so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over sample buffer size and sample rate...
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #198 on: December 06, 2020, 11:29:30 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep geting back to this, and every time I read this definition of yours, I don't know if you have problem explaining it or have misunderstanding how it works (which I, honestly think you don't).
Let's keep it at me not being able to explain it.  8) I know perfectly well how it works and why it is bad in which situation. It is based on my own hands-on experience; I have owned a Siglent oscilloscope in the past and also own a Lecroy oscilloscope (I don't think there is any DSO brand left from which I have not used/owned a DSO myself; yes, including Picoscope).
« Last Edit: December 06, 2020, 11:32:43 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #199 on: December 06, 2020, 11:34:28 pm »
1) Siglent and Lecroy scopes only capture enough data to fill the screen regardless the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep geting back to this, and every time I read this definition of yours, I don't know if you have problem explaining it or have misunderstanding how it works (which I, honestly think you don't).

I think best way to explain this is to try call it that LeCroy is sample rate defined, sample buffer length is calculated in time (not samples) and it is same as displayed time base, with defined maximum.
That means it will keep sample rate and retrigger rate as high as possible at all times, until it reaches max memory allowed, and only then it will start dropping sample rate.

That is very good strategy for general purpose scope because it maximises retrigger rate, and captures only data needed for time span we are interested in. It is simple to think about from operators standpoint: I have 120ns of data. It was taken at 5GS/s so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over sample buffer size and sample rate...
Maybe, just maybe, he will one day understand why these different strategies are used - but maybe not, as wfm/s has never been of high concern for him... no guesses as to why.  ::)

Three choices: an ASIC, an ADC architecture allowing for large captures, or one with optimised wfm/s... pick your poison and understand its limitations.
« Last Edit: December 07, 2020, 12:12:40 am by tautech »
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #200 on: December 07, 2020, 12:02:11 am »
I never said high trigger rates (waveforms/s) are never necessary. Sometimes they are, and in that case I simply select a shorter memory length to speed up the acquisition process. However, aiming for insanely high waveforms/s quickly lands you in an area of diminishing returns. The oscilloscope manufacturers tend to claim that a high waveform update rate makes it more likely to catch glitches, but in the end they never get to 100% due to blind time (which can be avoided, BTW, at the cost of ending up with a weirdly drawn signal). However, measuring is about 100% certainty, so if you want to capture a glitch with 100% certainty during a given interval, the only way out is deep memory (+analysis) or triggering (combined with infinite persistence and/or saving a screendump).
« Last Edit: December 07, 2020, 12:25:33 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #201 on: December 07, 2020, 12:29:50 pm »
There's no reason you can't take the Rigol approach (likely the same on other instruments as well) and give the user a choice of 'Auto' vs a memory depth selection.

In 'Auto' the scope always optimises for waveform rate, which IMO is a good default (I think this is what most people expect unless they are using the scope for a special application.)

If you select say 120k points but are on a short timebase, then the update rate drops appropriately and the available capture exceeds the visible window.  In fact, all the timebase control does in this instance is inform the oscilloscope which 'auto' mode it should use and how many points it should apply.  In essence, there is no actual difference between a capture of 120k points at say 10us/div and one at 50ns/div; they both capture the same data.   It is just a matter of how it is displayed to the user, and the timebase control is more of a horizontal zoom control.

In all modes, if there is free waveform RAM, use that RAM to store a history buffer - 120k points gets you 2000 waveforms, for instance.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 4308
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #202 on: December 07, 2020, 12:51:57 pm »
Trust me, nobody cares about waveforms per second!

+1
If I want high waveforms per second, I do not go looking for a device in the Open Source jungle.
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #203 on: December 07, 2020, 02:58:24 pm »
I agree that 1 Mwfm/s is not needed, but in normal interactive mode it should be enough for a fluid display. From what was said previously, that is already OK: 20-25 kwfm/s is more than enough for interactive work.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #204 on: December 07, 2020, 05:30:05 pm »
Do note the present performance is dot mode only.  Rigol achieve 50 kwfm/s in dot mode on the DS1000Z series; the headline 25 kwfm/s figure is given in vector mode (a refreshing change that they don't quote the absolute fastest, most unrealistic figure!)

I expect vector mode will be a bit slower; it depends on how many vectors need to be drawn.  I have an optimal algorithm in mind, but it's limited to 2 pixels/cycle due to the ARM ALU size.  Maybe with NEON I can do more (a 64-bit add with 4 or 8 terms, using 16- or 8-bit saturating arithmetic) but it would require carefully hand-coded assembly.  That's if I decide to further optimise ArmWave, which, as I've indicated here, I'm not certain is the best route yet.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #205 on: December 07, 2020, 07:06:38 pm »
if you want to capture a glitch with 100% percent certainty during a given interval the only way out is deep memory (+analysis) or triggering (combined with infinite persistence and/or saving a screendump).
It's much easier to compare a capture against bounds relative to a reference signal on the fly than to do digital persistence on the fly. Linear memory access vs. de facto random access.
 

Offline tv84

  • Super Contributor
  • ***
  • Posts: 3221
  • Country: pt
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #206 on: December 07, 2020, 07:24:22 pm »
It's much easier to compare a capture against bounds relative to a reference signal on the fly than doing digital persistence on the fly.

Can you elaborate on what you mean by "digital persistence on the fly"?
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #207 on: December 07, 2020, 08:25:01 pm »
I feel persistence will be quite easy to implement.  For infinite persistence, pixels are only updated if the new value is greater than the previous one - this can be done at the final framebuffer stage, so there are relatively few pixel values to compare.  For variable persistence, a moving-average filter could be used, although that would have a non-linear decay function (not sure if this is a problem.) Alternatively, N buffers (~1024x256x16) would need to be stored and summed together, although this would get computationally very expensive for longer persistence periods.
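The two schemes above can be sketched in a few lines (per-pixel lists stand in for the framebuffer; the exponential decay is the 'moving average' case with its non-linear fade):

```python
# Infinite persistence: keep the brightest value ever seen per pixel.
# Variable persistence: fade old contents each refresh, then overlay
# the new frame. decay=0.9 is an arbitrary illustrative constant.

def infinite_persist(screen, new_frame):
    return [max(s, n) for s, n in zip(screen, new_frame)]

def variable_persist(screen, new_frame, decay=0.9):
    return [max(int(s * decay), n) for s, n in zip(screen, new_frame)]
```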

It seems that Tek use an interesting approach for variable persistence on their newer scopes: they apply a random noise function to the previous buffer, which models the approximate desired persistence.  The disadvantage of this method is that the trace constantly looks noisy.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #208 on: December 08, 2020, 12:07:42 am »
Can you elaborate on what you mean by "digital persistence on the fly"?
Trying to update the bucket counts for persistence at the full sample rate is pretty much impossible; determining whether a sample is within a given bound of a reference signal is fairly trivial.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #209 on: December 08, 2020, 08:44:11 pm »
Trying to updating the bucket counts for persistence at full sample rate is pretty much impossible, determining if it's within a given bound of a reference signal fairly trivial.

What do you mean by this?
Testing every sample against a reference signal is still fairly expensive.

Mask testing after the waveform is captured is relatively easy and could be done in the rendering engine.  The mask could be defined by some % of the signal, e.g. the band covering 99% of all samples, which could be gathered after say ~30 seconds of persistence data is collected.
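One way this could work (a sketch under stated assumptions: per-column hit counts from the persistence buffer, equal trimming of both tails, no interpolation):

```python
# Derive a pass/fail mask from persistence data: per horizontal pixel,
# find the vertical band covering e.g. 99% of hits, then test later
# waveforms against that band with one (lo, hi) compare per sample.

def mask_from_hits(columns, coverage=0.99):
    # columns: per-x list of hit counts indexed by vertical bin.
    mask = []
    for hits in columns:
        total = sum(hits)
        lo, hi = 0, len(hits) - 1
        trimmed = 0
        budget = total * (1.0 - coverage)  # hits we may discard
        while lo < hi and trimmed + min(hits[lo], hits[hi]) <= budget:
            if hits[lo] <= hits[hi]:
                trimmed += hits[lo]; lo += 1
            else:
                trimmed += hits[hi]; hi -= 1
        mask.append((lo, hi))
    return mask

def mask_test(waveform, mask):
    # True if every sample lies inside its column's band.
    return all(lo <= y <= hi for y, (lo, hi) in zip(waveform, mask))
```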
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #210 on: December 09, 2020, 12:53:41 am »
Testing every sample against a reference signal is still fairly expensive.
Instead of just linearly storing a byte per sample, you also need to retrieve two bytes for the upper and lower bounds and do two comparisons. It's fairly expensive, but not unreasonably expensive the way digital phosphor gets.
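The compare described here stays in three linear streams, which is the point of the memory-access argument (illustrative sketch; a real implementation would do this vectorised or in the FPGA):

```python
# Per-sample bounds check against precomputed lower/upper envelopes.
# Three sequential reads and two compares per sample - no scattered
# writes into a 2-D accumulation buffer as digital phosphor needs.

def within_bounds(samples, lower, upper):
    # Return the index of the first violation, or None if all pass.
    for i, (s, lo, hi) in enumerate(zip(samples, lower, upper)):
        if not (lo <= s <= hi):
            return i
    return None
```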
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #211 on: December 09, 2020, 07:20:10 pm »
What would the upper and lower bounds be here?  Surely the lower bound is always going to be zero?  You could store the peak min/max value for each horizontal pixel in the post-processing stage, but I'm not sure what value that would have.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #212 on: December 10, 2020, 04:06:02 am »
They are part of the mask for a reference signal for pass/fail testing. The mask will be computed based on an area around the current sample, so you can't really determine it on the fly just from the reference signal; you need the two values per sample to compare against.
« Last Edit: December 10, 2020, 04:09:09 am by Marco »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #213 on: December 11, 2020, 02:05:16 am »
I have proposed computing statistics, and even histograms, so as to be able to trigger on a "trace outside x.x sigma bound" condition. This is not a "start of sweep" trigger, but a data event trigger.  I've given careful thought to the resource requirements and it seems quite tractable to me for an Ultrascale implementation.

It's important to distinguish between things which must be done in real time and which simply need to appear to be done in real time.  Most of what a DSO does does not need to be done in hard real time.  A screen refresh delay is of no concern.  Trigger point alignment, AFE correction, anti-alias filtering, downsampling and a few other things must be done in hard real time, but once the data are in the format needed for the selected data view, the time constraints become quite relaxed.

I have the view that a DSO should do everything it is possible to do with the resources available.

My primary concern now is the AFE input filter.  It should be a high order Bessel-Thomson filter to provide accurate waveform shape.  I've got every reference I can find, but unfortunately the maximally flat phase response gets skimpy treatment, and I've still not figured out how to analyze and design one from first principles.  I can do a design by hand or with software, but I can't write the derivation on a whiteboard.  More work required.
I'd very much like to see threads discussing how to time synchronize waveforms, implement advanced triggers, do signal processing operations e.g. FFT, etc.

I keep reading a lot of "you can't do this", "you have to do that", but precious little, "this is how you implement that". It would be nice to have more of the latter and less of the former.

Have Fun!
Reg
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #214 on: December 11, 2020, 08:47:16 am »
Indeed.  That was the key realisation for this project, that most of the work can be done 'after the fact',  once  you have captured the data.  Provided you have a sufficiently large buffer and data rate between your capture engine and display engine you can do quite a lot with non-realtime processors.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #215 on: December 11, 2020, 08:59:30 am »
I'm working on an AFE filter. Right now I've arrived at a 5th order Bessel with -3dB at 200MHz. Assuming a samplerate of 500Ms/s it could be a bit steeper (higher order) but then the parts get to unrealistic values. But there will be a 1st order roll-off as well so the -3dB point might need some further tweaking. I think other oscilloscopes use steeper filters at the cost of introducing more phase shift.

I've also recalculated the attenuator part of the schematic I posted earlier. It seems quite useful and ticks all the boxes (including having a constant capacitance towards the probe); better than I remembered.
« Last Edit: December 11, 2020, 09:04:49 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #216 on: December 13, 2020, 05:07:09 pm »
I'm working on an AFE filter. Right now I've arrived at a 5th order Bessel with -3dB at 200MHz. Assuming a samplerate of 500Ms/s it could be a bit steeper (higher order) but then the parts get to unrealistic values. But there will be a 1st order roll-off as well so the -3dB point might need some further tweaking. I think other oscilloscopes use steeper filters at the cost of introducing more phase shift.

I've also recalculated the attenuator part of the schematic I posted earlier. It seems quite useful and ticks all the boxes (including having a constant capacitance towards the probe); better than I remembered.

The -3 dB point needs to be around 125 MHz to produce a good step response.  At 80% of Nyquist the edge rings badly.  Also there is no way for a 5th order Bessel to prevent significant aliasing.  With a 50% of Nyquist corner, a 5th order filter will only be about -30 dB at Nyquist whereas you need -42 dB for an 8 bit ADC.

An 80% corner,  5th order filter will be about -7.5 dB at Nyquist with the consequence that FFT displays will be hopelessly borked in certain cases.
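For anyone who wants to spot-check such stop-band figures rather than argue, they are a few lines of scipy away. A sketch (not a substitute for proper filter tables): a 5th-order analog Bessel prototype with its -3 dB corner normalised to 1 rad/s, read off one octave up, which corresponds to a corner at 50% of Nyquist evaluated at Nyquist.

```python
# Spot-check: 5th-order Bessel, -3 dB corner at w = 1, response read at w = 2
# (i.e. corner = 50% of Nyquist, evaluated at Nyquist).
import numpy as np
from scipy.signal import bessel, freqs

b, a = bessel(5, 1.0, btype='low', analog=True, norm='mag')  # -3 dB at w = 1
w, h = freqs(b, a, worN=[1.0, 2.0])
att = -20 * np.log10(np.abs(h))
print(f"at the corner:       {att[0]:.2f} dB")  # ~3 dB by construction
print(f"one octave above it: {att[1]:.2f} dB")
```

Note that `norm='mag'` is what makes Wn the -3 dB point; scipy's default `norm='phase'` puts the corner elsewhere, which is a common source of disagreement between tools and tables.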

Reg
 
The following users thanked this post: 2N3055

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #217 on: December 13, 2020, 08:45:37 pm »
First see how it behaves and go from there. As already stated: the Bessel filter won't be the only part limiting the frequency response. Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.
« Last Edit: December 13, 2020, 08:48:32 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #218 on: December 13, 2020, 09:01:45 pm »
[snip]
 Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.

WTF?  This is so basic I'm speechless!

Edit: To make clear, an 8 bit ADC can digitize a <7 bit signal range.  Hence the -42 dB stated previously.  This is 80 year old mathematics.  If you want to argue with that, I'll just wander off.
« Last Edit: December 13, 2020, 09:07:17 pm by rhb »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #219 on: December 13, 2020, 09:08:52 pm »
[snip]
 Analog filters also wrap around in the digital domain so you don't need to get to -48dB at Nyquist.

WTF?  This is so basic I'm speechless!
Just think about it and look at it from a practical point of view. The frequency response continues to roll off, signals consist of harmonics, and at 200MHz you are already over the limit of what can be measured with a standard high-impedance probe.  The probe itself will already cause significant high-frequency attenuation.

There is a ton of information available on this forum about anti-aliasing filters and DSOs. But since this thread is about an open source design you are free to fit whatever filter you like. I will go for what is the standard approach (which is to have a bandwidth of fs/2.5) for now.

In a nutshell:
From an error perspective: 1% is more than 2 bits (2 bits = 12dB). So if the attenuation is 3dB at 0.4fs, 48 - 12 = 36dB at Nyquist (0.5 fs) and 48dB at 0.6 fs, then the amplitude error due to aliasing is less than 1%. Another issue to factor in is that in order to show the shape of a waveform you will at the very least want to see the first two components (the fundamental and the 1st harmonic), and preferably at least three harmonics. For an aliasing error to occur, a harmonic would need to lie between 0.5 fs and 0.6 fs (and be closer to 0.5 fs to have the biggest impact). Remember that an oscilloscope is neither a precision instrument nor a data acquisition device, and at the -3dB point the amplitude error is already nearly 30%!

In the end it is all about compromises; getting the highest bandwidth with the least horrible step response. And there is always the option to include two filters; one with the best step response and one with the highest bandwidth.
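The dB bookkeeping in that budget is easy to tabulate; a trivial sketch of how much an alias suppressed by A dB contributes, relative to full amplitude:

```python
# An alias attenuated by A dB comes back at 10**(-A/20) relative amplitude.
def alias_amplitude(att_db: float) -> float:
    return 10 ** (-att_db / 20)

for att in (36, 42, 48):
    print(f"{att} dB down -> {100 * alias_amplitude(att):.2f}%")
# 36 dB -> 1.58%, 42 dB -> 0.79%, 48 dB -> 0.40%
```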
« Last Edit: December 13, 2020, 10:40:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #220 on: December 14, 2020, 05:24:52 pm »
This does raise the question of how to use the 12-bit 640MSa/s mode.  I had considered limiting that to 500MSa/s as it fits in an even multiple of 16-bit samples with 4 bits unused.   But that makes the 4ch Nyquist 62.5MHz, and if you have a 12-bit ADC with ~10.5 ENOB after AFE noise, you need the filter to roll off to -63dB with, say, a -3dB bandwidth of 40-50MHz.  Even enabling the full 640MSa/s is still only 80MHz Nyquist, so the practical upper B/W limit is still ca. 50MHz.

I don't think that is practical so 12-bit mode will always have some risk of aliasing if used on 4ch mode.  Switching filters (other than a simple 20MHz Varicap filter) seems impractical and in any case a single pole filter driven by a varicap is unlikely to roll off quickly enough to be useful for 12-bit mode.

So what do you do?
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #221 on: December 14, 2020, 07:42:06 pm »
My primary concern now is the AFE input filter.  It should be a high order Bessel-Thomson filter to provide accurate waveform shape.  I've got every reference I can find, but unfortunately, the maximally flat phase gets skimpy treatment and I've still not figured out how to analyze and design one from first principles.
I know nothing about it beyond what Google told me and a pile of college material I forgot after the exam, but I found the Incomplete Gaussian Filter from this paper rather elegant. It looks easy to implement and doesn't try to reflect a ton of energy back into your buffer.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #222 on: December 14, 2020, 07:57:35 pm »

Just think about it and look at it from a practical point of view. The frequency response continues to roll off, signals consist of harmonics, and at 200MHz you are already over the limit of what can be measured with a standard high-impedance probe.  The probe itself will already cause significant high-frequency attenuation.

There is a ton of information available on this forum about anti-aliasing filters and DSOs. But since this thread is about an open source design you are free to fit whatever filter you like. I will go for what is the standard approach (which is to have a bandwidth of fs/2.5) for now.

In a nutshell:
From an error perspective: 1% is more than 2 bits (2 bits = 12dB). So if the attenuation is 3dB at 0.4fs, 48 - 12 = 36dB at Nyquist (0.5 fs) and 48dB at 0.6 fs, then the amplitude error due to aliasing is less than 1%. Another issue to factor in is that in order to show the shape of a waveform you will at the very least want to see the first two components (the fundamental and the 1st harmonic), and preferably at least three harmonics. For an aliasing error to occur, a harmonic would need to lie between 0.5 fs and 0.6 fs (and be closer to 0.5 fs to have the biggest impact). Remember that an oscilloscope is neither a precision instrument nor a data acquisition device, and at the -3dB point the amplitude error is already nearly 30%!

In the end it is all about compromises; getting the highest bandwidth with the least horrible step response. And there is always the option to include two filters; one with the best step response and one with the highest bandwidth.

Here is the log magnitude spectrum of Bessel filters from order 1 to 10, taken from:


Handbook of Filter Synthesis
Anatol I. Zverev
Wiley 1967 & 2005

As can be seen from the attached figure, with Fc = 200 MHz,  at 250 MHz  there is less than -6 dB of attenuation even for a 10th order Bessel filter.  At 1 GHz, a 5th order Bessel is down about -50 dB.

With that I shall leave you to figure out what you got wrong.

Have Fun!
Reg
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #223 on: December 14, 2020, 09:47:53 pm »
@rhb: There must be something wrong with the tables you are using. The filter generator & simulator tool I'm using shows a decent attenuation at Nyquist. Unfortunately I'm on the road and don't have access to it, so that will have to do for now.

This does raise the question of how to use the 12-bit 640MSa/s mode.  I had considered limiting that to 500MSa/s as it fits in an even multiple of 16-bit samples with 4 bits unused.   But that makes the 4ch Nyquist 62.5MHz, and if you have a 12-bit ADC with ~10.5 ENOB after AFE noise, you need the filter to roll off to -63dB with, say, a -3dB bandwidth of 40-50MHz.  Even enabling the full 640MSa/s is still only 80MHz Nyquist, so the practical upper B/W limit is still ca. 50MHz.

I don't think that is practical so 12-bit mode will always have some risk of aliasing if used on 4ch mode.  Switching filters (other than a simple 20MHz Varicap filter) seems impractical and in any case a single pole filter driven by a varicap is unlikely to roll off quickly enough to be useful for 12-bit mode.

So what do you do?
I have been thinking about this a bit. I think a good approach would be to have several filter banks and use a mux to switch between them to select a different roll-off for a different bit-width and maximum samplerate. A higher order filter using passive components is not very difficult and not very expensive to implement.
« Last Edit: December 14, 2020, 09:53:42 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #224 on: December 14, 2020, 10:35:16 pm »
I have been thinking about this a bit. I think a good approach would be to have several filter banks and use a mux to switch between them to select a different roll-off for a different bit-width and maximum samplerate. A higher order filter using passive components is not very difficult and not very expensive to implement.

It is quite expensive to do for 100MHz+ and across 4 channels, though.  Remember, these things also need to be tested during production with a sweep generator, and possibly need manual adjustment.  There are a few manual adjustment varicap points on even cheap oscilloscopes, which appears to be for matching input capacitance.  I'd really like to avoid doing that with the filters, and I don't think there's the grunt to do real time DSP on the ADC samples and correct AFE response there (unless it was very limited in response, and ~all the DSP blocks were used.)

I have tested my Rigol DS1000Z, and it aliases in 4 channel mode.  I think it's just something that's rather difficult to avoid, and the operator just needs to be careful not to exceed the parameters of their instrument.  If the measurement is really crucial, then an external analog filter could be used.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #225 on: December 14, 2020, 10:38:49 pm »
I have been thinking about this a bit. I think a good approach would be to have several filter banks and use a mux to switch between them to select a different roll-off for a different bit-width and maximum samplerate. A higher order filter using passive components is not very difficult and not very expensive to implement.
It is quite expensive to do for 100MHz+ and across 4 channels, though.  Remember, these things also need to be tested during production with a sweep generator, and possibly need manual adjustment.  There are a few manual adjustment varicap points on even cheap oscilloscopes, which appears to be for matching input capacitance.  I'd really like to avoid doing that with the filters, and I don't think there's the grunt to do real time DSP on the ADC samples and correct AFE response there (unless it was very limited in response, and ~all the DSP blocks were used.)

I have tested my Rigol DS1000Z, and it aliases in 4 channel mode.  I think it's just something that's rather difficult to avoid, and the operator just needs to be careful not to exceed the parameters of their instrument.  If the measurement is really crucial, then an external analog filter could be used.
I think the production can be greatly automated (and the user is also able to do a full recalibration from the instrument itself). I have found and ordered some neat parts for self-calibration of the attenuators, but I need to try them first to see if they deliver before whetting too many appetites.

The filters don't need adjustment; 5% L / C parts will do just fine and 2% parts aren't extremely expensive. Typically you'll see some variation in bandwidth from oscilloscopes so no worries there.
« Last Edit: December 14, 2020, 10:43:41 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #226 on: December 14, 2020, 10:45:16 pm »
I'm more concerned about inter-channel variation.  If you apply a signal near rolloff point on all 4 channels of an oscilloscope, how much would you expect amplitude to vary?  What about the effects on rise time?  I would expect matching to be better than +/-1dB, and rise times to be within +/-10% of each other,  but I admit this is not something I have tested. 

It would be useful to do some Monte-Carlo simulations on any filters you consider to see how influential certain parts would be. 
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #227 on: December 14, 2020, 11:01:01 pm »
I'm more concerned about inter-channel variation.  If you apply a signal near rolloff point on all 4 channels of an oscilloscope, how much would you expect amplitude to vary?  What about the effects on rise time?  I would expect matching to be better than +/-1dB, and rise times to be within +/-10% of each other,  but I admit this is not something I have tested. 

It would be useful to do some Monte-Carlo simulations on any filters you consider to see how influential certain parts would be.
Certainly  :)
But since the filter doesn't need different component placement, the first step I want to take is to have a board design to do some testing on. Due to the parasitic capacitances the board itself is as much a component as the rest. I expect it will take a few board spins before getting the board right. If each step also converges towards a final design then this approach kills two birds with one stone.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #228 on: December 14, 2020, 11:03:52 pm »
I'm more concerned about inter-channel variation.  If you apply a signal near rolloff point on all 4 channels of an oscilloscope, how much would you expect amplitude to vary?  What about the effects on rise time?  I would expect matching to be better than +/-1dB, and rise times to be within +/-10% of each other,  but I admit this is not something I have tested. 

It would be useful to do some Monte-Carlo simulations on any filters you consider to see how influential certain parts would be.

Would it be prohibitively costly to sweep each AFE during production with a VNA, and to store the measured response as calibration data, so that corrections can be calculated and applied in the digital domain?
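A sketch of what such a stored-calibration correction could look like in the digital domain. Everything here is illustrative: the 1-pole 200 MHz model stands in for real VNA sweep data, and `eps` is an arbitrary regularisation floor chosen so the inverse doesn't boost noise where the AFE response is small.

```python
import numpy as np

fs, ntaps = 1e9, 64
f = np.fft.rfftfreq(ntaps, d=1 / fs)
# Hypothetical measured AFE response; a real unit would use its VNA sweep.
H = 1 / (1 + 1j * f / 200e6)
eps = 1e-3                                   # regularisation floor (arbitrary)
H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)  # regularised inverse
h_corr = np.fft.irfft(H_inv)                 # correction FIR (would still need
                                             # a delay + window to be causal)
flatness = np.abs(H * H_inv)                 # combined response on the bin grid
print(flatness[:4])                          # ~1.0 at low frequency
```

The regularised inverse keeps the combined response near unity wherever the AFE has usable gain, and deliberately gives up (rolls toward zero) where |H| falls below the noise floor set by `eps`.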
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #229 on: December 14, 2020, 11:18:31 pm »
If you do Chebyshev for the capture ... how well can you correct the group delay with a high order digital filter?
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #230 on: December 14, 2020, 11:30:13 pm »
@nctnico

Here's the same information from another author who also provides Matlab scripts, though I've not downloaded them yet.


Elsie calculates the same result:


In any case you can do a first order trapezoid approximation as the convolution of a pair of boxcar(f) functions: one boxcar whose width depends upon the length of the ramp, and another that corresponds to the length of the flat section of the trapezoid.  That's bar-napkin math, and the result is sufficiently accurate to make the problem of the aliased sidelobes obvious.

I showed the minimum phase impulse response for a 50% and 80% of Nyquist corners using a trapezoidal approximation a long time ago.  That extra 75 MHz of BW going from 50% to 80% comes at the expense of a lot of ringing in the time domain.

A high order Bessel-Thomson filter has an impulse response which asymptotically approaches  a delayed Gaussian spike.  That's the best possible band limited representation of a Dirac function.  That is the best we can do.

I have a 100 ps pulse generator from Leo Bodnar.  What do you think that is supposed to look like on a DSO sampling at 1 ns?  I want a symmetric Gaussian spike.  That corresponds to the least waveform error for the BW available.  If you want to argue it should be different, I don't care to discuss it.  Do whatever you like.

Edit:  The server borked the figure file naming.
« Last Edit: December 14, 2020, 11:34:24 pm by rhb »
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #231 on: December 15, 2020, 12:08:56 am »
If you do Chebyshev for the capture ... how well can you correct the group delay with a high order digital filter?

Sure, not everything can be compensated. E.g. a multiplication by 0 cannot be undone.
But do you think that Chebyshev is a realistic assumption for an AFE's frequency response?
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #232 on: December 15, 2020, 12:15:25 am »
But dou you think that Chebyshev is a realistic assumption for an AFE's frequency response?
It can be implemented at RF frequencies, and I don't think it rings so badly on a step response that you'd need to sacrifice too much dynamic range.

So if the quantization and other noise doesn't make high order digital group delay compensation unstable, it's an option. That's the big if.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #233 on: December 15, 2020, 06:25:06 pm »
The biggest issue I see with digital filters is whether we run them in real time or on post-captured data.

The DSP48E1 blocks in the Artix-7 fabric are useful up to around 300MHz in the standard speed grade. Assume maybe 250MHz to be safe.  Each block does a MAC operation with a 48-bit result, so with a symmetric filter that gives us one tap per DSP block.  We'd only need 12 or 14-bit result data,  IIRC the blocks have configurable rounding logic so that will be fine, we can probably truncate results with little harm.

To process 1GSa/s raw data you would need to run four parallel streams of DSP blocks in ~40 tap filter chains, which would create a lot of logic complexity and I don't exactly know how that would work when you switch to multichannel modes, there's a data dependency headache to be resolved.

Whereas if it was done post-processed, assuming 1232 points (100ns/div setting) at 50k waves/sec, that's only 61.6 MSa/s to process.  That is comfortable to do after capture by reading back from the RAM and writing into another buffer before rendering (once the 32-bit interface is in place so we have sufficient bandwidth to do this, or using an FPGA-side MIG with a small external RAM; the exact architecture needs to be worked out.)

In raw bandwidth terms there's enough DSP to do ~200 tap filters in a 7020 at around 250kwaves/sec, or e.g. 400 taps at 125kwaves/sec or even 2000 taps at 25kwaves/sec...,  although memory bandwidth might start being an issue if samples are stored in an unfavourable order so that needs to be considered.  How much correction could you do with a 200 tap filter on post-captured data?  How much benefit would there be in going up to something like a 7030 with 400 DSP slices and a faster Kintex fabric;  or an UltraScale?
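The rate arithmetic above, written out (the 220 figure is the 7020's DSP48E1 slice count; the ~250 MHz clock and one MAC per slice per cycle are the assumptions from the post):

```python
# Post-processing load at 100 ns/div.
points_per_wave = 1232
waves_per_s = 50_000
rate = points_per_wave * waves_per_s
print(f"{rate / 1e6:.1f} MSa/s to filter after capture")   # 61.6 MSa/s

# DSP budget: 220 DSP48E1 slices in a Zynq-7020, one MAC/cycle at ~250 MHz.
macs_per_s = 220 * 250e6
taps = 200
waves_supported = macs_per_s / taps / points_per_wave
print(f"~{waves_supported / 1e3:.0f} kwaves/s at {taps} taps")
```

This lands in the low-200k waves/s region, consistent with the ~250 kwaves/s quoted; exploiting the DSP48E1 pre-adder for symmetric taps would roughly double it.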
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #234 on: December 15, 2020, 07:55:31 pm »
If the ADC always runs at full speed, and the currently selected sampling rate is lower, then at least the decimation filtering needs to be done in real time, before storing the samples.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #235 on: December 15, 2020, 09:00:57 pm »
That's true, but what does a decimation filter need to look like?  Would an IIR type response be acceptable (sum pairs of samples and shift) or does it need to be a true FIR-type filter?  The latter is considerably more expensive to implement, but I fear it's the only way to achieve the required decimation response.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #236 on: December 15, 2020, 09:41:52 pm »
In raw bandwidth terms there's enough DSP to do ~200 tap filters

In theory you could do FFT overlap-add; it doesn't have to be a straight FIR.
« Last Edit: December 15, 2020, 09:51:16 pm by Marco »
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #237 on: December 15, 2020, 10:40:35 pm »
Quote
but what does a decimation filter need to look like?

A common approach seems to be a CIC decimator, which is computationally cheap (it doesn't even require multipliers), followed by a compensation filter running at the lower rate to correct its (non-flat) passband to the desired shape. The latter could basically be done in post-processing too, if DSP resources don't suffice to do it in real time.
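For reference, the CIC structure is only a few lines: N integrators at the input rate, decimate by R, N combs at the output rate, equivalent to N cascaded length-R boxcars with a DC gain of R**N. A minimal numpy sketch (real hardware would use wrapping fixed-point accumulators rather than cumsum):

```python
import numpy as np

def cic_decimate(x, R, N):
    """N-stage CIC decimator, decimation ratio R, differential delay 1."""
    y = np.asarray(x, dtype=np.int64)   # wide accumulator, as in hardware
    for _ in range(N):                  # integrator stages (input rate)
        y = np.cumsum(y)
    y = y[R - 1::R]                     # decimate by R
    for _ in range(N):                  # comb stages (output rate)
        y = np.diff(y, prepend=0)
    return y                            # DC gain R**N, shifted out downstream

print(cic_decimate(np.ones(32, dtype=int), R=4, N=2))   # settles to 4**2 = 16
```

With N=1 this degenerates to summing each block of R samples, i.e. exactly the boxcar decimation discussed above.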

One point with running a FIR filter in post-processing is that you lose N-1 samples of each captured buffer (where N is the number of taps), since the captured buffers no longer form a gapless stream of samples.

In theory you could do FFT overlap add, doesn't have to be straight FIR.

Starting at how many taps is it computationally cheaper than brute-force FIR?
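For a concrete feel: scipy's `oaconvolve` implements exactly this FFT overlap-add and matches direct convolution to rounding error. The crossover against brute-force FIR is typically somewhere in the tens of taps, but treat that as a rule of thumb that depends on FFT size and hardware, not a measured number.

```python
import numpy as np
from scipy.signal import oaconvolve

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)        # a captured record
h = rng.standard_normal(201)         # a 201-tap FIR
y_fft = oaconvolve(x, h)             # FFT overlap-add
y_dir = np.convolve(x, h)            # brute-force FIR
print(np.max(np.abs(y_fft - y_dir))) # agreement to float rounding
```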
« Last Edit: December 15, 2020, 10:42:44 pm by gf »
 
The following users thanked this post: tom66

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #238 on: December 15, 2020, 10:42:57 pm »
@nctnico

Here's the same information from another author who also provides Matlab scripts, though I've not downloaded them yet.
I just had the chance to review the results. It turns out I made a mistake indeed. For 1Gs/s and a 200MHz bandwidth a Bessel filter (together with several other 1st order roll-offs within the AFE) might do it. For 500Ms/s and a 200MHz bandwidth a much steeper filter is needed.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #239 on: December 15, 2020, 10:58:59 pm »
The ADC should always run full speed. The case of an ADC with different sample rates at different bit depths requires a filter for each speed.  This results in the simplest and easiest architecture to implement.  Everything that can be done in SW is.

A moving average (boxcar in time) filter is two adds and N+1 storage locations.  An arbitrary filter profile can be constructed by placing multiple such filters with different values of N in sequence.  In the frequency domain each filter has a sinc(f) profile.  By varying the lengths of the filters you can place the zeros of one profile on a peak of another and systematically suppress the stop band.  And in fact sinc(f)**N for N>2 has quite good stop-band performance without doing that.
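The zero-placement trick reads like this in miniature: two moving averages of co-prime lengths put their sinc(f) nulls at different frequencies, so each sits on the other's stop-band peaks. Pure numpy, evaluating the DTFT directly:

```python
import numpy as np

def dtft_mag(h, f):
    """|H| at normalised frequency f (cycles/sample)."""
    n = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * f * n)))

# length-4 and length-5 boxcars in cascade (unity DC gain)
h = np.convolve(np.ones(4) / 4, np.ones(5) / 5)
for f in (1 / 5, 1 / 4, 2 / 5, 2 / 4):
    print(f"f = {f:.2f} cycles/sample: |H| = {dtft_mag(h, f):.1e}")  # nulls
```

The length-4 stage nulls f = 1/4 and 1/2, the length-5 stage nulls f = 1/5 and 2/5, so the cascade has nulls at all four.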

If you  are downsampling, you can loop on a subset of the filter poles using a single DSP block.  As a consequence, the resource requirements are quite low.

While  the filtering operation is performed, it's very simple to put in the phase shift term without requiring significant  additional operations. 

Here is a canonical set of references:

Best DSP intro I have seen and the one I grab if I want to check a formula.

An Introduction to Digital Signal Processing
John H. Karl
Academic Press 1989

This was my grad school  Linear Systems text.  There is a pictorial dictionary of Fourier transforms in the back which once mastered will allow you to do meticulously correct  analysis of complex DSP problems on a cocktail napkin.  The math text I use most.

The Fourier Transform and Its Applications
Ronald N. Bracewell
McGraw-Hill 1978 2nd ed (there's also a 3rd)

The best summary of Nyquist-Shannon-Wiener ever.  I've relied on it since the 2nd edition.  The successive editions added worked examples of ever more arcane problems such as the sign bit. If you don't understand this you don't understand DSP, even if that's all you do all day.

Random Data
Bendat & Piersol
Wiley 4th ed

Practical issues such as word length  and digital filters from an EE viewpoint.

Theory and Application of Digital Signal Processing
Rabiner & Gold
Prentice-Hall 1975

The source of the 2nd set of Bessel function plots and arguably the best written of all my analog filter design texts.

Design and Analysis of Analog Filters
Larry D. Paarmann
Springer 2001

[edit:  Best intro to the state of the art

A Wavelet Tour of Signal Processing
Mallat
Academic Press 3rd ed

edit]



@nctnico  200 MHz corner and 500 MHz Nyquist will give very fine results.  A 5th order filter will be ~ -32 dB at Nyquist and if one picks up another -10 dB elsewhere in the AFE the 8 bit case is done.  Though, personally I'd go to a 7th order just to be sure I had the roll off.  Steep slopes cause heavy ringing, so they are not a good idea.
« Last Edit: December 15, 2020, 11:05:16 pm by rhb »
 
The following users thanked this post: dcarr, Pitrsek

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #240 on: December 15, 2020, 11:14:45 pm »
If the ADC always runs at full speed, and the currently selected sampling rate is lower, then at least the decimation filtering needs to be done in real time, before storing the samples.
No DSO uses a decimation filter in case of a lower sampling rate because this conflicts with the specified bandwidth. The only decimation 'allowable' in real time is peak-detect. Unless of course there is a specific signal filtering feature the user enables.

For post-processing, a quick peak-detect-ish decimation is used to compress the data into a display record. This can lead to odd behaviour for specific signals like sweeps, though, which isn't easy to solve. Compressing the data in real time is a more time-sensitive operation than sampling, because any delay will result in sluggish operation.

This is for the GW Instek GDS-2204E but similar intermodulation effects can be made to appear on other (much more expensive) DSOs as well:


@nctnico  200 MHz corner and 500 MHz Nyquist will give very fine results.  A 5th order filter will be ~ -32 dB at Nyquist and if one picks up another -10 dB elsewhere in the AFE the 8 bit case is done.  Though, personally I'd go to a 7th order just to be sure I had the roll off.  Steep slopes cause heavy ringing, so they are not a good idea.
I have to see about the exact impedances to be used in the circuit, but so far anything over 5th order results in impractical part values. And as tom66 noted, a Monte-Carlo simulation needs to be done to verify sensitivity to component variation. Other roll-offs in the AFE will likely be 1st order anyway.
« Last Edit: December 15, 2020, 11:20:22 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #241 on: December 16, 2020, 12:53:28 am »
Quote
The only decimation 'allowable' in realtime is peak-detect

Don't forget ERES. This is basically a decimation filter - whatever filter response they may have realized.
[ I guess in most cases just a boxcar averaging of the samples in the decimation interval, i.e. the lowered fs/2 still falling on the main lobe of the sinc frequency response, not at a zero. ]
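A quick numerical sketch of that boxcar idea (Python; the decimation factor and test signal are purely illustrative, not a claim about any particular scope's firmware):

```python
import numpy as np

# Hedged sketch of ERES-style boxcar decimation: average each block of N
# consecutive samples and keep the result at higher precision.
def eres_decimate(samples: np.ndarray, n: int) -> np.ndarray:
    usable = len(samples) - len(samples) % n    # drop any partial tail block
    return samples[:usable].reshape(-1, n).mean(axis=1)

rng = np.random.default_rng(0)
x = 100.0 + rng.normal(0.0, 1.0, 1 << 16)       # noisy DC level
y = eres_decimate(x, 4)

# Averaging N=4 samples reduces uncorrelated noise by sqrt(4) = 2,
# i.e. roughly one extra bit of resolution per factor of 4.
print(round(x.std() / y.std(), 1))  # ≈ 2.0
```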
 
The following users thanked this post: nctnico

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #242 on: December 16, 2020, 08:30:33 am »
The ADC must run at full speed to function as a digital trigger source.

But, in some cases, less data must be stored,  although the trigger is always going to work on realtime data.  The CIC filter looks like an interesting, inexpensive way to downsample.  It's certainly better than the alternative of just throwing away (N-1)/N samples.

My experience is that the Rigol DS1000Z does not downsample correctly - the scope aliases very easily - whereas the Agilent DSOX2000A does not.
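For reference, a minimal software model of a CIC decimator - a hedged sketch only; the stage count and decimation factor are placeholders, not a proposed FPGA configuration:

```python
import numpy as np

# Hedged sketch of an N-stage CIC decimator: integrators run at the input
# rate, then the stream is decimated by r, then combs (differencers) run
# at the output rate. The DC gain of r**stages is normalised out.
def cic_decimate(x: np.ndarray, r: int, stages: int = 3) -> np.ndarray:
    y = x.astype(np.int64)
    for _ in range(stages):              # cascaded integrators
        y = np.cumsum(y)
    y = y[r - 1::r]                      # keep every r-th sample
    for _ in range(stages):              # cascaded combs
        y = np.diff(y, prepend=0)
    return y / float(r ** stages)        # remove DC gain r**stages

out = cic_decimate(np.ones(400), 8)
print(out[-1])  # → 1.0 (unity DC gain once past the start-up transient)
```

The attraction in an FPGA is that this structure needs only adders and delay registers, no multipliers.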
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #243 on: December 16, 2020, 09:25:36 am »
The ADC must run at full speed to function as a digital trigger source.

But, in some cases, less data must be stored,  although the trigger is always going to work on realtime data.  The CIC filter looks like an interesting, inexpensive way to downsample.  It's certainly better than alternative of just throwing away (N-1)/N samples.

My experience is the Rigol DS1000Z does not downsample correctly - the scope will alias very easily - but the Agilent DSOX2000A does not.
Do not filter! Throwing away samples is the only correct way (when in sample mode); otherwise you'll be distorting the signal due to phase delays introduced by filtering. The DSOX2000A likely does some kind of peak-detect because the display part works with decimated data while other DSOs do not.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4104
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #244 on: December 16, 2020, 09:36:37 am »
The ADC must run at full speed to function as a digital trigger source.

But, in some cases, less data must be stored,  although the trigger is always going to work on realtime data.  The CIC filter looks like an interesting, inexpensive way to downsample.  It's certainly better than alternative of just throwing away (N-1)/N samples.

My experience is the Rigol DS1000Z does not downsample correctly - the scope will alias very easily - but the Agilent DSOX2000A does not.
Do not filter! Throwing away samples is the only correct way (when in sample mode); otherwise you'll be distorting the signal due to phase delays introduced by filtering.

 :-+
I drive a LEC (low el. consumption) BEV car. Smoke exhaust pipes - go to museum. In Finland quite all electric power is made using nuclear, wind, solar and water.

Wises must compel the mad barbarians to stop their crimes against humanity. Where have the wises gone?
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #245 on: December 16, 2020, 10:10:16 am »
The ADC must run at full speed to function as a digital trigger source.

But, in some cases, less data must be stored,  although the trigger is always going to work on realtime data.  The CIC filter looks like an interesting, inexpensive way to downsample.  It's certainly better than alternative of just throwing away (N-1)/N samples.

My experience is the Rigol DS1000Z does not downsample correctly - the scope will alias very easily - but the Agilent DSOX2000A does not.
Do not filter! Throwing away samples is the only correct way (when in sample mode); otherwise you'll be distorting the signal due to phase delays introduced by filtering. The DSOX2000A likely does some kind of peak-detect because the display part works with decimated data while other DSOs do not.

Agree!

The DSOX/MSOX3000T is also very resilient to aliasing; the screen representation keeps to the signal envelope far into sampling rates that should alias badly.  I also think they use peak detect internally for the display at all times, and decimate for the waveform buffer in accordance with the sample rate.
I tried enabling/disabling peak detect and saw no difference, so I presume that's it.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #246 on: December 16, 2020, 03:19:18 pm »
otherwise you'll be distorting the signal due to phase delays introduced by filtering.
You get exactly the delay you want with digital filtering. If you really want a 100th-order Gaussian-response filter you can get it in digital; analogue, not so much. The subsampled signal will never be the original signal; all you get to choose is what type of distortion you want ... no distortion is not an option.

Outside of subsampling, I'm told Tek has response correction on by default. The analogue filter introduces group delay based distortion, digital can correct it.
« Last Edit: December 16, 2020, 03:38:11 pm by Marco »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #247 on: December 16, 2020, 05:52:41 pm »
Well, throwing away samples is certainly easier than filtering them.  But, I don't understand why you'd go to so much trouble to build a good antialias filter for the AFE side if you just risk aliasing when your sample rate drops off?  Switching to an automatic 'peak detect' mode is an option,  although it would double the memory required as you need to store a min and max for each sample.

I see ERES or equivalent as being comparably easy to achieve.  Bin N samples (where N is a power of two) into an accumulator, take the top bits (probably 16, so it functions with 14-bit mode up to 4x ERES) and then save into RAM. 
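A rough software model of that accumulate-and-truncate step (Python; the widths, names and input values are my illustrative assumptions, not the actual Verilog):

```python
import numpy as np

# Hedged sketch: sum N = 2**k consecutive 8-bit samples into a wide
# accumulator, then keep a result that fits in 16 bits.
def eres_accumulate(samples_8bit: np.ndarray, k: int) -> np.ndarray:
    n = 1 << k                                    # power-of-two bin size
    usable = len(samples_8bit) - len(samples_8bit) % n
    acc = samples_8bit[:usable].astype(np.uint32).reshape(-1, n).sum(axis=1)
    # Summing 2**k 8-bit values gives (8+k) significant bits; shift so the
    # result occupies at most 16 bits (no shift needed while k <= 8).
    shift = max(0, (8 + k) - 16)
    return (acc >> shift).astype(np.uint16)

x = np.full(64, 200, dtype=np.uint8)   # constant input for a sanity check
print(eres_accumulate(x, 3)[0])        # → 1600 (= 200 * 8, 8-sample bins)
```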

I need to rewrite the acquisition Verilog so it can handle all 4 channel configurations (well, 3 modes, as only 1/2/4 are truly supported and 3-channel mode is treated as a subset of 4-channel mode.)  Then it needs to be able to discard samples at a binary division rate (keeping every 2nd, every 4th, every 8th, etc.)  and re-order these into RAM correctly.  The 'real trick' here is then getting that data lined up with the digital trigger.  And then ideally include the capability for MSO support.  Needs a good amount of thought to make that work.
« Last Edit: December 16, 2020, 06:04:57 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #248 on: December 17, 2020, 09:27:37 pm »
Well, throwing away samples is certainly easier than filtering them.  But, I don't understand why you'd go to so much trouble to build a good antialias filter for the AFE side if you just risk aliasing when your sample rate drops off?
Aliasing is not a bad thing per se but the user needs to be aware of it. Without filtering you'll still be able to measure the RMS and peak-peak values of a signal which has a fundamental at a higher frequency than Nyquist for the lowered sample rate. IOW: you will still be able to make out the amplitude. In case of narrow pulses you'll miss some pulses. With automatic filtering you will suddenly see no signal or a completely different signal on your screen after changing the time/div, which will confuse the user.
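That amplitude-preservation point can be checked numerically - a hedged sketch with arbitrary rates and tone frequency:

```python
import numpy as np

# Dropping samples aliases a tone above the new Nyquist to a different
# frequency, but its RMS amplitude is still measurable.
fs = 1_000_000_000                          # 1 GSa/s "ADC" rate
t = np.arange(1 << 16) / fs
x = np.sin(2 * np.pi * 3_000_000 * t)       # 3 MHz tone

decim = x[::256]                            # ~3.9 MSa/s: tone is now above Nyquist
rms_full = np.sqrt(np.mean(x ** 2))
rms_decim = np.sqrt(np.mean(decim ** 2))
print(round(rms_full, 2), round(rms_decim, 2))  # → 0.71 0.71
```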
« Last Edit: December 17, 2020, 09:35:33 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #249 on: December 18, 2020, 08:18:15 am »
Makes sense - but, in that case, why not omit the input filter altogether and allow the user to cautiously use their instrument up to Nyquist?  All filters risk eliminating signals that you intend to look at - part of operating a scope is understanding approximately what you expect to appear on the screen before you even probe it.
 
The following users thanked this post: nuno

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #250 on: December 18, 2020, 09:20:32 am »
A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that all captured data is sampled properly and doesn't contain any mathematical nonsense.

The first point is well served by decimating to screen with peak detect.
The second is well served by a large buffer that ensures the highest sampling rate most of the time, and by downsampling with filtering to ensure there are no aliasing artefacts in data sampled at a lower rate.
In that case there must be an obvious warning that at this timebase you're working with limited bandwidth. And also a way to disable filtering and fall back to simple decimation by sample discarding, because sometimes that raw data is what people expect.

There is no simple, single solution for all. 

For instance, RMS measurements should be performed on full-speed sample data, to take into account all high-energy content.
Risetime needs the fastest edge info it can get. Etc.
Current scopes make all kinds of compromises to cater to their optimization targets...

 

Offline JohnG

  • Frequent Contributor
  • **
  • Posts: 570
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #251 on: December 18, 2020, 02:34:21 pm »
Makes sense - but, in that case, why not omit the input filter altogether and allow the user to cautiously use their instrument up to Nyquist?  All filters risk eliminating signals that you intend to look at - part of operating a scope is understanding approximately what you expect to appear on the screen before you even probe it.

Because most of the time, for general purpose, you will want the antialias filter in place. The ability to bypass it might be nice, though.

Cheers,
John
"Reality is that which, when you quit believing in it, doesn't go away." Philip K. Dick (RIP).
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #252 on: December 18, 2020, 05:31:10 pm »
Simply rendering all samples with intensity shading is much better than decimation for showing the shape of a modulated signal.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #253 on: December 18, 2020, 05:31:53 pm »
That makes sense.  So, as I understand it, the modes that need to be supported are:

Normal - Applies a downsampling filter* when sampling rate is under the normal ADC rate, otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples when sampling rate is under the normal ADC rate,  otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform;  otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples.  Halves available memory.

*Exact design of this filter to be worked out (quite possibly CIC given the simplicity?)
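A software sketch of the peak-detect mode from the list above (illustrative block size; the min/max pairs are what double the storage per stored point):

```python
import numpy as np

# Hedged sketch of peak-detect decimation: each block of n input samples
# is stored as a (min, max) pair, so narrow glitches survive decimation.
def peak_detect(samples: np.ndarray, n: int) -> np.ndarray:
    usable = len(samples) - len(samples) % n
    blocks = samples[:usable].reshape(-1, n)
    return np.stack([blocks.min(axis=1), blocks.max(axis=1)], axis=1)

x = np.zeros(1024)
x[500] = 1.0                            # one-sample glitch
naive = x[::64]                         # plain decimation misses it
pairs = peak_detect(x, 64)
print(naive.max(), pairs[:, 1].max())   # → 0.0 1.0
```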

Some consideration needs to be given to supporting 12-bit/14-bit modes, but they would require external filters unless aliasing is permitted in these modes.

Note that the downsampling would be needed once the timebase exceeds a total timespan of ~240ms, or about 20ms/div, on the current prototype with ~240Mpts memory available.   With 4x the memory, downsampling is still needed once beyond 50ms/div.  Hard to get around the tremendous amount of memory that just sampling at 1GSa/s requires.
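The arithmetic behind those figures, as a sanity check:

```python
# At a fixed ADC rate, the longest full-rate capture is depth / rate.
rate = 1_000_000_000                      # 1 GSa/s
for depth_mpts in (240, 960):             # current prototype, then 4x memory
    span_ms = depth_mpts * 1_000_000 * 1000 / rate
    print(depth_mpts, "Mpt:", round(span_ms), "ms total,",
          round(span_ms / 12), "ms/div on a 12-division screen")
# 960 Mpt gives 80 ms/div, and the nearest standard 1-2-5 timebase step
# below that is 50 ms/div - hence "beyond 50 ms/div" above.
```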

In all modes, certain auto measurements can work without acquiring to memory and therefore can work at the full sample rate.  These are:
- The frequency counter
- Vmax, Vmin, Vp-p
- Vrms
- Vavg

though bounding by cycles (e.g. Vrms over a cycle of a wave) does require memory acquisition and therefore would be affected by sampling modes.

Have I missed anything?
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #254 on: December 18, 2020, 06:40:30 pm »
Normal - Applies a downsampling filter* when sampling rate is under the normal ADC rate, otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples when sampling rate is under the normal ADC rate,  otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform;  otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples.  Halves available memory.

I don't see a principal difference between "Normal" and "ERES". Both decimate with prior filtering. Variables are the kind and order of the filter, and number of bits (>= # ADC bits) per sample being stored (which has an impact on memory consumption).

I would consider "Average" not as a separate mode, but rather as an optional step in the acquisition pipeline, which can be combined with either Normal, Decimate or ERES (it does not make sense in conjunction with peak detect, of course). Since averaging increases the dynamic range as well, one may also consider storing the data with more bits per sample than delivered by the previous stage in the acquisition pipeline.

EDIT:

Quote
Note that the downsampling would be needed once the timebase exceeds a total timespan of ~240ms, or about 20ms/div, on the current prototype with ~240Mpts memory available.   With 4x the memory, downsampling is still needed once beyond 50ms/div.  Hard to get around the tremendous amount of memory that just sampling at 1GSa/s requires.

There needs to be some default, but IMO the user should still be able to control the trade-offs between acquisition mode, sampling rate (of the stored samples), record size, and number of records that can be stored (within the feasible limits).
« Last Edit: December 18, 2020, 07:13:47 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #255 on: December 18, 2020, 07:22:21 pm »
When it comes to filtering it may be better to do this as a (first) post-processing step before any other operation. From my experience it is useful to be able to adjust the filtering on existing acquisition data (GW Instek does this). Care must be taken though to avoid start-up issues and use real data. The R&S RTM3004 for example filters decimated data and doesn't take initial filter initialisation into account, leading to weird behaviour and thus limiting the usefulness of filtering.

Averaging is another interesting case. One of the problems is that ideally you'd save the averaged data so you can scroll left/right and zoom in/out. On some oscilloscopes (again the R&S RTM3004) the averaged trace disappears if you move the trace. I second the suggestion to be able to combine acquisition modes, but at some point you'll be creating a new trace in CPU memory and using the acquisition data only to update the trace in CPU memory.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #256 on: December 18, 2020, 08:14:12 pm »
Where it comes to filtering it may be better to do this as a (first) post processing step before any other operation. From my experience it is useful to be able to adjust the filtering on existing acquisition data (GW Instek does this).

For pre-decimation filtering this would imply that the data would need to be stored at the full sampling rate, so that the first processing steps can filter and decimate it. This likely defeats the purpose (lower memory usage) for which a lower sampling rate than the maximum was selected.

For any other kind of filtering which does not need to be done prior to decimation, I'm basically with you.

Quote
Care must be taken though to avoid start-up issues and use real data. The R&S RTM3004 for example filters decimated data and doesn't take initial filter initialisation into account leading to weird behaviour and thus limiting the usefullness of filtering.

That's a general issue when filtering is done in post-processing, on the stored data, where only a set of records but no continuous stream of data is available. But where should the initial filter state come from? Do you want to ask the user to enter initial values for all state variables of the filter? Like: "Please enter the values of the 199 samples preceding the captured buffer" (for a 200-tap FIR filter). Another alternative could be to simply discard the samples falling into the fade-in/fade-out interval of the filter, thereby reducing the record size, of course.
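The discard-the-edges alternative is easy to model - a hedged sketch using a 'valid'-mode convolution (the boxcar coefficients and record length are arbitrary choices):

```python
import numpy as np

# 'valid'-mode convolution emits only outputs whose full FIR window lay
# inside the record, so a 200-tap filter shortens it by taps - 1 samples.
taps = 200
fir = np.full(taps, 1.0 / taps)      # illustrative boxcar FIR
record = np.random.default_rng(1).normal(size=100_000)

filtered = np.convolve(record, fir, mode="valid")
print(len(record) - len(filtered))   # → 199 samples lost at the edges
```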

EDIT:

Quote
Averaging is another interesting case. One of the problems is that ideally you'd save the averaged data so you can scroll left/right zoom in / out. On some oscilloscopes (again R&S RTM3004) the averaged trace dissapears if you move the trace. I second the suggestion to be able to combine acquisitions modes but at some point you'll be creating a new trace in CPU memory and using the acquisition data only to update the trace in CPU memory.

The question is at which waveform rate the averaged data are supposed to be recorded.

(1) At full trigger rate (storing a moving average)?
(2) At 1/N of the trigger rate (storing only a single averaged buffer after acquiring N triggers).

In case (1) the acquisition engine could equally well store just the triggered waveforms, and the processing engine could average them.

In case (2) the averaging would need to be done in a pipeline stage of the acquisition engine.
This mode saves memory, but at the cost of a lower waveform rate.
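Both cases can be sketched in a few lines (synthetic waveforms; the incremental update of case (1) ends at the same result as the block average of case (2)):

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 500))
acqs = [clean + rng.normal(0, 0.5, 500) for _ in range(64)]

block_avg = np.mean(acqs, axis=0)      # case (2): one buffer per 64 triggers

running = np.zeros(500)                # case (1): updated on every trigger
for i, w in enumerate(acqs, start=1):
    running += (w - running) / i       # incremental (moving) mean

print(np.allclose(running, block_avg))  # → True
```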
« Last Edit: December 18, 2020, 08:38:27 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #257 on: December 18, 2020, 08:26:09 pm »
Skipping 1000 samples at the beginning and another 1000 at the end of a 100Mpts long record is something nobody will notice.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #258 on: December 18, 2020, 08:50:35 pm »
Skipping 1000 samples at the beginning and another 1000 at the end of a 100Mpts long record is something nobody will notice.

Agreed, no problem for sufficiently long records, provided that the buffer management does not impose incompatible record length constraints (like e.g. all record lengths must be a power of two, or all records must have the same fixed size, ...).
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #259 on: December 18, 2020, 09:05:20 pm »
Normal - Applies a downsampling filter* when sampling rate is under the normal ADC rate, otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples when sampling rate is under the normal ADC rate,  otherwise identical to

I would actually tend to use the name "Normal" for the non-filtered mode.
[ Sure, names signify nothing - it's just my personal preference. I wonder how others think about it. ]
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #260 on: December 18, 2020, 09:57:56 pm »
Digital filtering would be post-acquisition using a customisable FIR filter.   There would necessarily be a dead zone at the start and end of each trace depending on the number of taps in the filter.  So in my example from before, with 200 taps and a 1232-point waveform (presently what is used at 100ns/div), about half of the waveform is lost to the filter tap count.  I can't see any feasible way to avoid this - the waveforms are not correlated in time and no data is available outside of their window.  The window of data could be increased, but you may as well just go up a timebase if you wanted to do that.  But I think it's fair to say that at short timebases you don't generally need long filters,  and therefore this will be much less of an issue in the real world.

You are right that averaging may well be considered a subclass of post-acquisition filtering, so it doesn't make much sense to have it as an acquisition mode.  Although it would be possible to do it during acquisition, it would probably complicate the acquisition engine compared to just reading values out into a FIFO and summing them in a filter pipeline (although there would be a lot of reading and writing, so I need to think about how to make this as optimal as possible.)

Also a good point on normal vs ERES, although I think the subtle difference is normal stores 8-bit samples whereas ERES would halve the available sample depth by storing 16-bit samples.  The penalty being primarily memory, although there may also be a render speed penalty.

Also, the buffer management supports arbitrary buffer lengths; the only restrictions are that pre- and post-trigger lengths are a multiple of 8 samples and that the overall buffer starts on a 64-byte boundary (size is not constrained by this) to fit into cache lines.  The records must all be the same size for now, although this is just a programming simplification; there's no strict need for it.
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #261 on: December 18, 2020, 10:11:25 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #262 on: December 18, 2020, 10:31:24 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are Lecroy-isms which are more geared towards signal analysis. On other oscilloscopes averaged / high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels); the con is that a math trace usually has a different color which does not resemble the original trace, and you have a trace on the screen which may not be relevant at all.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #263 on: December 18, 2020, 10:33:32 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are Lecroy-isms which are more geared towards signal analysis. On other oscilloscopes averaged / high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels) the con is that a math trace usually has a different color which does not resemble the original trace and you have a trace on the screen which may not be relevant at all.
None of which is an issue if you can assign any color to a trace.
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #264 on: December 18, 2020, 10:43:17 pm »
Averaging and ERES could both be done before or after the fact,  but implementing both seems a bit silly and logic-expensive. 
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #265 on: December 18, 2020, 10:59:31 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are Lecroy-isms which are more geared towards signal analysis. On other oscilloscopes averaged / high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels) the con is that a math trace usually has a different color which does not resemble the original trace and you have a trace on the screen which may not be relevant at all.
None of which is an issue if you can assign any color to a trace.
But it will clutter the screen and make operation less intuitive. I own a Lecroy Wavepro 7300A myself but I can't say it is a nice oscilloscope as a daily driver. Setting up averaging / hires takes a lot of button pushes and going through menus while on other scopes it is a simple selection in the acquisition menu. How it is implemented under the hood is a different story (both are likely implemented as math traces) but the complexity is hidden from the user.

Averaging and ERES could both be done before or after the fact,  but implementing both seems a bit silly and logic-expensive. 
It can make sense in some cases. Remember ERES / high-res is filtering in the frequency domain while averaging works in the time domain. But it is true that very few oscilloscopes allow using both at the same time. Besides the Lecroy (using stacked math traces), the R&S RTM3004 is the only one I know of that supports enabling both high-res and averaging at the same time.
« Last Edit: December 18, 2020, 11:03:59 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #266 on: December 18, 2020, 11:11:00 pm »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post- acquisition, in other words you could choose when the filter was applied.  I think there's little good reason to support both options, the decision has to be made to support one or the other.

Enabling trace averaging and ERES at the same time sounds plausible enough although I'd question what the user was attempting to achieve with such a move - averaging itself implements a form of ERES just with the time correlation of a trigger...  There may be use cases but I can't think of many...
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #267 on: December 19, 2020, 03:46:46 am »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post- acquisition, in other words you could choose when the filter was applied.  I think there's little good reason to support both options, the decision has to be made to support one or the other.
..getting off into the "religious wars" of scopes at that point, there are very strong reasons to do the filtering before storing the result:
It's fast, so you can collect more data, and it stores fewer samples per time period, increasing the possible memory depth.
Equally there are good reasons to do this as post-processing:
You can look through the individual (or higher sample rate) data that would have been discarded if done online.

This same tradeoff is present for persistence rendering, eye-diagrams, measurements, etc.

Somewhere in the middle is dumping all the raw data to a circular history buffer, while also computing in hardware the data for display. Which in a resource constrained system/FPGA takes away from some other characteristic of the system.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #268 on: December 19, 2020, 11:07:14 am »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post- acquisition, in other words you could choose when the filter was applied.  I think there's little good reason to support both options, the decision has to be made to support one or the other.
The way I see it, time would be better spent on a more versatile trigger engine (that needs to be inside the FPGA) and on getting more processing power & higher memory bandwidth between the CPU and the acquisition system right now, if you want to write code. That will allow post-processing to be done in software quickly, probably with better performance and in less development time compared to what the FPGA can achieve. The biggest advantage of doing post-processing is that you can alter the settings and the result will change on-the-fly (this won't be possible for all operations, like averaging, but it will be for most). From my experience with oscilloscopes, post-processing leads to the highest flexibility for the operator.

Before doing anything on signal processing there should be a clear architecture on paper which answers all questions about how to deal with the signals, the operations on them (measurement, math, decoding, etc.) and how the result is displayed (rendering). It is extremely easy to take a wrong turn and get stuck (Siglent has done that). Oscilloscope firmware is one of the most complex pieces of software to create. Starting from a flexible architecture, even if it may not offer the highest performance, is mandatory IMHO.
« Last Edit: December 19, 2020, 11:39:54 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #269 on: December 19, 2020, 01:28:36 pm »
Yes, there's definitely something to be said for that.

I have been thinking about the current acquisition engine.  It isn't fit for the task because it naively stores samples interleaved in memory, directly as received from the ADC.  This makes reading memory out a real pain, because you have to discard unused samples or find some way to 'deinterlace' them while reading.  A better route would be to record each channel in its own buffer.  In 1ch mode there would be only one buffer and all data would be stored in it.  In 2ch mode the buffers would be strided by the waveform pitch (ch1 stored first, then ch2); the 3/4-channel modes behave similarly.
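As a rough sketch of the planar layout described above (the function names and the fixed-pitch assumption are mine, not taken from any actual RTL), the address of any sample becomes a simple linear computation:

```python
# Hypothetical planar buffer addressing: each channel gets its own plane,
# planes are stored back to back, and every waveform has the same pitch.
def plane_base(ch, pitch, n_waveforms):
    """Start address of a channel's plane (ch is 0-based)."""
    return ch * pitch * n_waveforms

def sample_addr(ch, waveform, offset, pitch, n_waveforms):
    """Address of sample `offset` within `waveform` on channel `ch`."""
    return plane_base(ch, pitch, n_waveforms) + waveform * pitch + offset
```

With this layout a whole channel is one contiguous read, which is what makes a later readout/filter pipeline straightforward.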

I think supporting variable waveform lengths would be a pain; however, there may be a case for supporting dual-timebase operation, though the 'dual timebase' section might need to be stored in block RAM FIFOs and so be limited in memory depth.

The data can then be read out in order and processed by a filter pipeline configured by the CPU, with the result written back into RAM.  The filter pipeline would be capable of performing a few basic operations on each channel (summing or multiplying any pair of channels), which is possible because it can have two read pointers.  Supporting 3-4 channel operations would in principle also be possible, but considerably more complex.

The data will still be ordered incorrectly for the pre-trigger, so an address translator needs to start reading at the correct address, and the number of words to be read might not be a nice even multiple of 4 or 8, which poses some difficulties.

While the data written to RAM would be buffered by a number of FIFOs, a counter would keep track of the trigger position and state; this would be recorded into a separate area of memory and used to correct trigger positions.   Actually, the most difficult aspect of this is solving the FIFO headache: the FIFO needs a dynamically configurable input width from 64 bits down to 8 bits, the output width should be fixed at 64 bits, and a 4-bit control channel needs to be passed through at the same time with identical timing.  Changing this on the fly using the standard Xilinx IP is not possible (AFAIK), so I may have to roll my own FIFO.
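The width-conversion part of that FIFO can be modelled in software as a little-endian 'gearbox' that packs variable-width input words into fixed 64-bit output words (a behavioural sketch only; the real RTL would also have to carry the 4-bit control channel alongside with matching timing):

```python
def gearbox(words, in_bits, out_bits=64):
    """Pack `in_bits`-wide input words into fixed `out_bits`-wide output
    words, least-significant bits first.  Leftover partial words stay in
    the accumulator (a real FIFO would flush them at end of acquisition)."""
    acc = 0
    filled = 0
    out = []
    for w in words:
        acc |= (w & ((1 << in_bits) - 1)) << filled
        filled += in_bits
        while filled >= out_bits:
            out.append(acc & ((1 << out_bits) - 1))
            acc >>= out_bits
            filled -= out_bits
    return out
```

Changing `in_bits` between acquisitions is trivial here; doing it live in fabric is the part the stock IP doesn't support.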

And the acquisition channels need to be able to discard a variable sample count for decimation modes or enable a CIC/ERES filter ...

« Last Edit: December 19, 2020, 01:34:06 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #270 on: December 19, 2020, 03:26:26 pm »
You can separate out the FIFO concern if you create chunks of data, so the FIFO (and storage system) always works with a fixed size (say 256 to 1024 bytes). In my design I went a step further and used records which could contain various types of data (decimated, decoded, digital channels, different bit widths). These records (which could have different data rates!) were streamed into memory from several FIFOs. The upside is that the memory doesn't need to care which part belongs to which channel; the downside is that you have to read all the data even if you are only interested in one particular type (for example, to re-decode channel 1), and there is some overhead, but memory is cheap nowadays.
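A minimal software model of that record idea (the header layout below is my invention for illustration, not nctnico's actual format) could look like:

```python
import struct

# Hypothetical 8-byte record header: type, channel, bit width, payload length.
HDR = struct.Struct("<BBHI")

def make_record(rtype, channel, bits, payload):
    """Frame a payload with a self-describing header."""
    return HDR.pack(rtype, channel, bits, len(payload)) + payload

def parse_records(blob):
    """Walk a memory dump and recover every (type, channel, bits, payload)."""
    out, i = [], 0
    while i < len(blob):
        rtype, ch, bits, n = HDR.unpack_from(blob, i)
        i += HDR.size
        out.append((rtype, ch, bits, blob[i:i + n]))
        i += n
    return out
```

The downside mentioned above is visible here: `parse_records` must walk every record even when you only want one channel's data back.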
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #271 on: December 19, 2020, 06:40:19 pm »
The goal would be to have the data in linear planes, so all ch1 data for a given acquisition would be in order, followed by ch2, ch3 and so on.    I'm not too worried about where individual waveform groups are, but each channel should be in a separate plane.  That way, when data is read out, it is in order (besides the need to rotate for pre-triggers.)  In theory, I can then have another, say, 16-bit side channel for MSO functions, running on the ADC clock. (It would also be possible to do state analysis for the MSO function using this, although it might be difficult to line that up with the analog channels at that point.)
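The 'rotate for pre-triggers' step is just a circular-buffer unrotation; in software it is a one-liner (the real implementation would be an FPGA address translator, and the names here are mine):

```python
def time_order(plane, write_ptr):
    """Return a circular capture plane in time order, oldest sample first.
    `write_ptr` is the index where the next sample would have been written,
    so it also marks the oldest sample once the buffer has wrapped."""
    return plane[write_ptr:] + plane[:write_ptr]
```

The awkward part in hardware is that `write_ptr` generally isn't aligned to a nice 64-bit word boundary, so the readout address translator has to cope with unaligned starts.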

This is something I wanted to do a while ago, but the complexity put me off.  I'm now realising what a pain it is to deal with interlaced data when it comes to plotting it and processing it afterwards with filters and the like.

I think one of the biggest challenges to solve is memory arbitration.  Given that one 64-bit AXI bus has 1.6GB/s peak bandwidth, I'll need to arbitrate appropriately, possibly across two ports, to make this work well and avoid running out of bandwidth, as more time will be spent setting up smaller transactions.
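A back-of-envelope model of why small transactions hurt: the peak figure assumes every clock cycle moves data, but each burst also costs fixed setup/arbitration cycles. The overhead numbers below are illustrative assumptions, not measurements of any real Zynq port:

```python
def axi_peak_bw(bus_bits, clk_hz):
    """Peak bandwidth in bytes/s, e.g. 64 bits at 200 MHz = 1.6 GB/s."""
    return bus_bits // 8 * clk_hz

def effective_bw(peak, burst_beats, overhead_cycles):
    """Achieved bandwidth when every `burst_beats`-beat burst also costs
    `overhead_cycles` of setup (a simplified single-master model)."""
    return peak * burst_beats / (burst_beats + overhead_cycles)
```

With 4 cycles of overhead, 16-beat bursts reach 80% of peak while 4-beat bursts reach only 50%, which is why burst length and arbitration policy matter so much here.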
« Last Edit: December 19, 2020, 06:45:34 pm by tom66 »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #272 on: December 21, 2020, 08:18:49 pm »
A mux/demux (serial/parallel) conversion at any point in a DSP pipeline is cheap, and often both are done to optimize resource utilization in a gate-level design.

If you are downsampling, which is the usual case, you have multiple fabric cycles available for each output sample.  A 2x downsample allows two ops per output sample, 4x allows four ops, etc.

The writer only has to deal with the addressing once.  The reader has to do it every time, so for efficiency the data need to be in reader optimal order.

Have Fun!
Reg
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #273 on: December 22, 2020, 03:45:59 pm »
Agreed.  It's just a matter of getting that to work fast.  There are plenty of ways to do this slowly; it's much more difficult when you need to process over a billion samples per second.

 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 73
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #274 on: December 25, 2020, 08:36:36 pm »
Just in case this link is useful and you don't already know about it:
https://www.ti.com/tool/TIDA-00826
which is titled "50-Ohm 2-GHz Oscilloscope Front-end Reference Design". From the associated pdf the design definitely looks non-trivial.
 
The following users thanked this post: tom66, JohnG

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #275 on: December 30, 2020, 12:59:19 am »
A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that all the data it captures is sampled properly and doesn't contain any nonsense in a mathematical sense.

The first point is well served by decimating to the screen with peak detect.

[snip]

You clearly have never compared a good analog scope trace at different settings with a "peak detect" DSO using a *very*  short  (<10% of sample interval) pulse.

If you had, you would realize that peak detect is a *very* poor imitation of an analog scope. The Fourier transform does not conform to the displayed trace. Of course, you do need to understand what you are looking at in  both time and frequency by inspection.

"Peak detect" is a crude bodge to make up for improper downsampling by decimation without appropriate low pass filtering.

If you want to understand why things are this way, I'll be happy to pose the problems.  But I had to do them 35 years ago in Linear Systems and have no interest in repeating my school exercises.  I learned the lessons.

After a lot of consideration, I have concluded that a proper DSO should offer the user the option of either a Bessel-Thomson or a Butterworth LPF to suit the use case.  In *any* case, it *must* suppress aliases by -6 dB per bit at Nyquist.  Should you be so foolish as to not do the exercises in order to actually learn how it really works, you're on your own.
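The point about filtering before decimation can be demonstrated numerically: a tone above the output Nyquist survives naive decimation at full amplitude as an alias, while even a crude boxcar LPF knocks it down substantially. This is a toy illustration with made-up numbers, not a proposal for the scope's actual filter (a boxcar falls far short of the -6 dB/bit target; a proper Bessel or Butterworth design is needed for that):

```python
import math

fs = 1000.0            # input sample rate
f = 480.0              # tone well above the decimated Nyquist (50 Hz)
N = 10                 # decimation factor -> output rate 100 Sa/s
x = [math.sin(2 * math.pi * f * n / fs) for n in range(2000)]

def rms(v):
    return math.sqrt(sum(s * s for s in v) / len(v))

# Naive decimation: keep every Nth sample.  The 480 Hz tone folds
# down into the passband at full amplitude.
naive = x[::N]

# Boxcar (length-N moving average) before decimating: a crude LPF
# that strongly attenuates the tone before it can alias.
boxcar = [sum(x[n - N + 1:n + 1]) / N for n in range(N - 1, len(x), N)]

print(rms(naive), rms(boxcar))
```

The naive path keeps the full ~0.707 RMS of the alias; the filtered path drops it by roughly 25 dB, illustrating why the anti-alias filter must come before the sample-rate reduction.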

Have Fun!
Reg
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #276 on: December 30, 2020, 01:09:42 am »
A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that all the data it captures is sampled properly and doesn't contain any nonsense in a mathematical sense.

The first point is well served by decimating to the screen with peak detect.

[snip]

You clearly have never compared a good analog scope trace at different settings with a "peak detect" DSO using a *very*  short  (<10% of sample interval) pulse.

If you had, you would realize that peak detect is a *very* poor imitation of an analog scope. The Fourier transform does not conform to the displayed trace. Of course, you do need to understand what you are looking at in  both time and frequency by inspection.

"Peak detect" is a crude bodge to make up for improper downsampling by decimation without appropriate low pass filtering.

If you want to understand why things are this way, I'll be happy to pose the problems.  But I had to do them 35 years ago in Linear Systems and have no interest in repeating my school exercises.  I learned the lessons.

After a lot of consideration, I have concluded that a proper DSO should offer the user the option of either a Bessel-Thomson or a Butterworth LPF to suit the use case.  In *any* case, it *must* suppress aliases by -6 dB per bit at Nyquist.  Should you be so foolish as to not do the exercises in order to actually learn how it really works, you're on your own.

Have Fun!
Reg

Sometimes I wonder whether you have ever used a scope in your life...
Or learned how to read..
Read again what I wrote. I know that peak detect is mathematically incorrect for further analysis. But it is correct for the screen.

We have been here before, and I did tell you to try this simple experiment that is easy to reproduce:

Take a 100MHz carrier and AM modulate it with 100 Hz, and put scope on 2 ms/div you'll see this:


The scope is sampling at 100 MS/s, at half Nyquist.
And it is showing the same as an analog scope would...


« Last Edit: December 30, 2020, 01:11:26 am by 2N3055 »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #277 on: December 30, 2020, 01:45:37 am »
I can easily contrive cases where it doesn't matter.  I can also create cases where it does.  Your assertion is only correct if you ignore the Fourier transform of the screen image.

I don't wish to be rude, but this is basic DSP 101.

Buy one of Leo Bodnar's 100 ps pulse generators, feed it to an analog scope and to a DSO with peak detect, and we can discuss further.  Until then.

Have Fun!
Reg
« Last Edit: December 30, 2020, 01:50:59 am by rhb »
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #278 on: December 30, 2020, 03:34:58 am »
Reg, maybe you have forgotten the lesson rf-loop gave you here:
https://www.eevblog.com/forum/testgear/scope-wars/msg3121780/#msg3121780
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #279 on: December 30, 2020, 08:57:07 am »
I can easily contrive cases where it doesn't matter.  I can also create cases where it does.  Your assertion is only correct if you ignore the Fourier transform of the screen image.

I don't wish to be rude, but this is basic DSP 101.

Buy one of Leo Bodnar's 100 ps pulse generators, feed it to an analog scope and to a DSO with peak detect, and we can discuss further.  Until then.

Have Fun!
Reg

You are rude, because you don't read, yet you are still condescending and patronizing. I wrote:


A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that all the data it captures is sampled properly and doesn't contain any nonsense in a mathematical sense.

The first point is well served by decimating to the screen with peak detect.



So, to repeat, because we spoke of this before and you learned nothing the first time:

In order for a scope to show visually, on the screen, what people expect to see from a time-domain instrument, and to make it similar to a CRT scope, it has to deal with the data in a different manner than it would if it were sampling data for spectral analysis, or for mathematically correct DSP analysis of any sort.

To deal with these contradictory requirements, high-end mixed-domain scopes either have modes in which they reconfigure the data engines to work in a scope/RF SA mode, or they use a powerful FPGA/ASIC, continuously sample at full speed, and then create three data streams through three separate datapump/decimation/DSP blocks, serving the screen, the SA, and proper measurement and further raw-data analysis.

On lesser platforms, the priority was for scopes to function as scopes, so they have a screen/proper data buffer architecture, with FFT for spectral analysis bolted on top of the normal scope data as an afterthought. One inexpensive scope that takes the MDO approach is the GW Instek, which has an SA mode in which the data engine is reconfigured to work in a more real-time SA fashion.

So, to simplify so you can understand this time: you're not supposed to analyse peak-detect data. Nobody ever said that; on several occasions I said explicitly that it is incorrect data for any kind of further analysis (not completely, though: you can extract the P-P envelope from it, for instance).  Peak-detect data is perfect data for the screen, though.
To be completely correct, the absolutely correct data to plot, as David Hess said before, would be a histogram of the data being decimated, encoded in intensity by the distribution density in each value bin.
That would be a perfect CRT emulation.
In real life peak detect gives a decent representation, because it will still show very fast and rare peaks, instead of them being hidden by being too dim to see. If you used a histogram to calculate pixel brightness, you would still need to make it not completely linear, but bump up the black point so that rare events remain visible. Some compression would need to be there.

You need to separate the screen stream and the data buffer stream at the very beginning of data processing, and treat them separately, for them both to be correct.
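The screen-stream decimation described above can be sketched as min/max peak detect per display bin (a simplified model, ignoring intensity grading and the histogram refinement):

```python
def peak_detect(samples, factor):
    """Collapse each bin of `factor` raw samples to a (min, max) pair,
    so short spikes survive decimation and remain visible on screen."""
    return [
        (min(samples[i:i + factor]), max(samples[i:i + factor]))
        for i in range(0, len(samples) - factor + 1, factor)
    ]
```

Note that every output bin carries two values instead of one, which is the memory-doubling cost of peak detect.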
 
The following users thanked this post: nctnico, JohnG

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #280 on: December 30, 2020, 03:49:10 pm »
https://www.eevblog.com/forum/testgear/scope-wars/msg3121780/#msg3121780

Does it actually mean that this acquisition mode deliberately changes the sampling clock phase for each acquired trace, so that displaying the stitched traces with (persistent) points leads to an oversampled, ETS-like appearance?

EDIT:

Or am I fooled by an illusion? Does the ADC just happen to act as direct down converter here, due to the particular signal frequency to sampling rate ratio chosen for the example? [ which would mean that the same would not work for arbitrary signal frequencies ]

LeCroy's RIS is documented, but where can I actually find a documentation of Siglent's SARI mode?
« Last Edit: December 30, 2020, 05:14:48 pm by gf »
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #281 on: December 30, 2020, 08:13:44 pm »
LeCroy's RIS is documented, but where can I actually find a documentation of Siglent's SARI mode?
Use the LeCroy documentation, as much of Siglent's design implementation mirrors theirs; they have worked together on several products.
That many Siglent products are also rebranded as LeCroy might indicate how alike the two think.  ;)
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #282 on: December 30, 2020, 08:40:53 pm »
https://www.eevblog.com/forum/testgear/scope-wars/msg3121780/#msg3121780

Does it actually mean that this acquisition mode deliberately changes the sampling clock phase for each acquired trace, so that displaying the stitched traces with (persistent) points leads to an oversampled, ETS-like appearance?

EDIT:

Or am I fooled by an illusion? Does the ADC just happen to act as direct down converter here, due to the particular signal frequency to sampling rate ratio chosen for the example? [ which would mean that the same would not work for arbitrary signal frequencies ]

LeCroy's RIS is documented, but where can I actually find a documentation of Siglent's SARI mode?

https://siglent.fi/oskilloscope-info-interpolation.html

Google Translate from Finnish does a decent job...
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #283 on: December 30, 2020, 10:57:57 pm »
Wow, the thread blew up over Christmas.  I will publish the results from the survey in the new year, when I have some more spare time to dust off my statistics textbook.  There are some quite surprising results.

And, regarding the decimate/peak-detect/filter discussion: this is why I like optionality ... there's nothing stopping the instrument from supporting both the mathematically correct DSP transform and the engineering transform that is wrong but produces results more in line with users' expectations.  As I see it, the biggest problem with peak detect is that it doubles memory consumption, so it should not necessarily be a default setting at longer timebases, as it will reduce the available sample rate.
 
The following users thanked this post: 2N3055

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #284 on: January 01, 2021, 08:17:59 pm »
Survey Results

Thanks to everyone for filling out the survey.  43 valid responses were received over the course of a month, which is not bad for a small engineering forum, and enough to draw some reasonable conclusions with a fair bit of confidence.  For those interested, the results are published publicly here, apart from the comments field, which is withheld for user privacy:  https://docs.google.com/spreadsheets/d/1yqCfIa8lzXFmDxayfT2XsBuWI-NDyL6ssCdF4YwLubI/edit?usp=sharing

There were some surprising conclusions.  It seems that users here are roughly evenly divided between software engineering and hardware engineering, with about 20% reporting other fields.  I had expected to see more FPGA/RTL engineers (none reported that as their profession), but perhaps this is a limitation of the single-selection question; in my own career, for instance, I am both a hardware engineer and an FPGA systems engineer.

It seems the majority of users here are experienced in their field, with more than 12 years of professional experience.  I had originally intended the distribution of options to be somewhat logarithmic, but perhaps I should have included a 20+ years option to further divide this category.  Nonetheless, I think it is fair to say that the majority of people interested in this project are professional, experienced engineers.  That means the product must be professional too, of course!  Over 85% of individuals said that they would consider purchasing a FOSHW instrument, which is good news.

On pricing, the data was a pleasant surprise, as I was worried about needing to cost-optimise even further.  Some were willing to pay over USD $1000 for such an instrument.  Weighting the prices by assuming that each option is taken as half-way through its range (with the unbounded upper and lower options set at their respective maximum and minimum) gives a price target of USD $713 for the instrument.  That should be achievable, depending upon the final specification and what modules, if any, need to be included in that configuration (I expect the price to include a 4-channel AFE.)
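For reference, the midpoint weighting described works like this (the bracket bounds and respondent counts below are made up for illustration; they are not the survey's actual numbers):

```python
def weighted_midpoint_price(counts):
    """counts maps (low, high) price-bracket bounds in USD to the number
    of respondents choosing that bracket; each response is scored at the
    bracket midpoint, and the result is the respondent-weighted mean."""
    total = sum(counts.values())
    return sum((lo + hi) / 2 * n for (lo, hi), n in counts.items()) / total
```

The unbounded "over $X" and "under $Y" options are handled by pinning them to a single value before feeding them in as degenerate (X, X) or (Y, Y) brackets.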

It seems most people would accept a touchscreen-based user interface, though some said it would not be acceptable.  I would be in favour of a limited set of physical controls, but a complex control assembly would be expensive to tool up and could increase the bill of materials considerably; it also limits flexibility (e.g. channel knobs that don't make sense when an LA module is installed.)  Some consideration of the overall device form factor is needed here.

The "please rank the importance" data were also very useful and helps steer this project somewhat:-

Modularity ranked highly (average score 4.0/5), with the majority split between 4 and 5 points suggesting this to be essential to most users.  I intend for the instrument to be modular but it was good to have confirmation of this.

Portability was mostly unimportant to those surveyed (average score 2.1/5), with the majority rating it 'not at all' useful.

Configurable digital filters scored a rather midfield 3.3/5, with most people selecting option 3, 'somewhat important'.  There were, however, substantial groups regarding it both as essential and as not essential.  More research and discussion is needed on this point, I suspect, to determine the performance level required by those who want such a function.

There was majority support for an MSO function, and the average score was similar to digital filters at 3.6/5; however, this score is weighted more by those who regard it as essential.  One user commented that state analysis would be essential for such a function, and I agree.   This means the memory capture and trigger path needs to support modes both synchronous and asynchronous to the analog channels.  That is a lot more difficult than supporting only one route, but it should at least be practical to support state-only analysis (with the analog channels off); it gets more difficult to synchronise this with the analog data.

Wi-Fi connectivity was not seen as that important, with an average score of 2.0/5.  That is fine; it can be supplied by an external USB stick if required.  There does not appear to be sufficient interest at this point to justify internal integration, with all the complications that brings from an RF and EMC design perspective.  The correlation between this and portability, however, was not that strong, at 0.23, indicating only loose agreement between the answers, though the small size of the survey makes drawing a firmer conclusion difficult.

Stronger interest was expressed in the DDS function, with an average score of about 3.0/5, although much like configurable digital filters this seems to be an option with mostly 'average' support and few people regarding it as essential.  That was somewhat surprising to me and pushes the DDS function more towards an external module card, if it is implemented.  It should be relatively trivial to implement using the FPGA's spare resources, requiring only a few SERDES blocks, but it will require a board with external filters, offset and gain control, and output amplifiers.  One respondent suggested an isolated DDS.  This would likely have to be a separate module (I personally don't think it is worth including as a 'standard' option due to the cost), but with high-speed digital isolators available from Analog Devices and others, plus a DC-DC module to bridge the isolation gap, it is eminently practical to do, though it would add a fair bit to the cost of any such module.

The 10MHz reference function was well supported, with the majority of interest around 4/5, for an average score of 3.5/5.  This seems like a no-brainer, as it is pretty inexpensive to add.   The reference signal could be routed via the FPGA fabric as a combinational logic block, although this might add jitter, so external multiplexers may be preferred.   In any case, it is reasonably trivial to feed a 10MHz reference into the PLL, or to export the PLL's reference signal, with some multiplexer ICs or the FPGA fabric.

The 50 ohm input termination was also well supported, with the majority of responses around 4/5 and very few people regarding it as not important at all; the average score was 3.8/5.  I intend to investigate adding a fast relay turn-off circuit to any 50 ohm input, using a simple latch, so the terminator is protected if the input is grossly overloaded.  Of course, it will not save your ass if you connect it to 240V mains, but it might stop you damaging the terminator if it is hooked to 24V instead of the 5Vpk maximum.  (For simplicity, it would turn off all terminators if the input voltage is exceeded on any channel with 50 ohm mode enabled.)

The next steps

The existing hardware architecture was a great proof of concept, but to continue this project it will be essential to develop a second-generation PCB.   I see two possible routes forward.  The lack of portability as a serious requirement unlocks options that would be too power-hungry for a battery-powered solution (with 8+ hour runtimes) to support.

Option A.  Use PCI Express with a Pi Compute Module 4, replacing the reverse-engineered CSI bus, connected to a similar Zynq 7000 (a PCI-Express variant, so likely the Zynq 7015.)  The UltraScale is a very nice platform, but its biggest disadvantage is that it restricts you to a Xilinx Linux distribution (Xilinx is not too good at keeping this up to date, nor do I want to go down the rabbit hole of building a custom kernel.)  For all of its faults, the Pi has an open and supported kernel, with limited proprietary software and good, regular support.  The CM4 is also a 'beast' in terms of processing, offering in a modest configuration 4GB of application memory, gigabit Ethernet and a USB 2.0 controller.  (There is some pain in supporting USB 3.0 and PCI-e at the same time; a bridge IC may be necessary, and I'm not sure of the kernel or driver complexities there.  However, the CM4 does have internal gigabit Ethernet.)  The limitation of the 32-bit Zynq is that addressable memory is limited to 1GB, so sample memory would be limited to roughly 900Mpts at most, fixed to the board in a dual DDR3 arrangement.  Sample rates would also top out around 2-2.5GSa/s on one channel or 1.2GSa/s on a pair of channels.  Yes, a MIG configuration is also an option to add, say, an additional memory bus, but for the reasons stated previously I am not a supporter of that route.    However, I am relatively confident a price of US$750 can be achieved here with a touchscreen UI and a few controls.    My past experience with the Zynq suggests that a passive heatsink will be sufficient as a cooling solution;  the Pi 4 may end up being the ultimate thermal bottleneck and require some careful thermal engineering to run at near-100% CPU load for sustained periods.

Option B.  A Zynq UltraScale+ architecture using the ZU2EG or ZU3EG variant, with no Compute Module.  The Linux application runs on the Zynq UltraScale+ with direct memory access to waveform data, and the Mali-400 GPU helps with any 2D tasks that can be offloaded, though waveform rendering is still likely to be software- or FPGA-based.  A large memory space is available: it should be possible to support at least 8GB, and probably 16GB, in a user-accessible DDR4 SODIMM, granting long acquisition buffers.  The biggest disadvantage is the considerable cost; the UltraScale is not a cheap device.  The Linux headaches above also apply and may slow development if the available supported kernel is stuck in the dark ages.  I estimate that this solution would push the product past the US$900 price point, which may limit its market.  However, it is likely that with the fast UltraScale fabric and the higher memory bandwidth (more internal AXI fabric, faster FPGA fabric, etc.), the system could exceed 5GSa/s in a single-channel configuration with the right engineering effort, although probably not from day one.  The power consumption of the UltraScale may also require a heatsink and, while I would like to avoid it as I have a genuine dislike of the things in test equipment, a fan may also become necessary.

Other options have been discussed here, including a plain FPGA with a CM4 interface over PCI Express.  However, there are serious advantages to having the Zynq ARM cores (in either configuration) closely coupled to the FPGA fabric that would be difficult and resource-expensive to replicate with a soft-core CPU.  Something like a Kintex with a MicroBlaze has been considered, but the performance of a soft processor is limited without dedicating a large amount of fabric to it.  The Zynq SoCs don't cost much more than their comparable FPGA-only brethren, but deal with a lot of the headaches for you.  Making this platform 'accessible' to hardware and software hackers is critical to me.

I do appreciate the continued discussion here so please let me know your thoughts, positive or negative.

Tom

Edited to fix omission with Wi-Fi results.
« Last Edit: January 01, 2021, 10:25:37 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #285 on: January 01, 2021, 09:38:39 pm »
I still think the best solution is to use PCI Express and do all processing on a compute module (not necessarily the CM4, but let's start there). LeCroy oscilloscopes are built this way as well; all waveform processing is done on the host platform. Doing processing inside the FPGA sounds nice for short-term goals, but long term you'll be shooting yourself in the foot because you can't extend the resources at all. At this point you already identify the Zynq as a bottleneck! Going the PCI Express route also allows the use of multiple FPGAs, extending the system to more channels (say 8) by just copying and pasting the same circuit. Heck, these could even be modules which plug into a special backplane with slots to distribute trigger, time synchronisation and PCI Express signals, with a standard PCI Express switch chip aggregating all the lanes from the acquisition units towards the compute module. Either way, the next step should move all processing to a single point to achieve maximum integration. Having multiple processors at work, and needing to communicate between them, makes the system more difficult to develop and maintain in the end, although it may seem like an easy way out right now (been there, done that). Which also circles back to defining the architecture first and only then starting to develop.

BTW, the external 10MHz reference will prove more complicated than you think. It will need to feed the clock synthesizer on the board directly, as any FPGA-based 'PLL' will add far too much jitter.
« Last Edit: January 01, 2021, 10:27:37 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #286 on: January 01, 2021, 10:30:11 pm »
I still think the best solution is to use PCI express and do all processing on a compute module (not necessarily CM4 but let's start there). LeCroy oscilloscopes are built this way as well; all waveform processing is done on the host platform. Doing processing inside the FPGA sounds nice for short term goals but long term you'll be shooting yourself in the foot because you can't extend the resources at all. At this point you already identify using the Zynq as a bottleneck! Going the PCI express route also allows using multiple FPGAs and extending the system to multiple channels (say 8) by just copying and pasting the same circuit. Heck, these could even be modules which plug into a special backplane with slots to distribute trigger, time synchronisation and PCI express signals. A standard PCI express switch chip aggregates all PCI express lanes from the acquisition units towards the compute module. Either way the next step should include moving all processing to a single point to achieve maximum integration. Having multiple processors at work and needing to communicate between them makes the system more difficult to develop and maintain (been there, done that).

While I do agree that the CM4 (or whatever module is used) should do much of the processing, I disagree that it should do all of it.  Certain tasks benefit from FPGA logic: digital filters using the multiplier blocks, for instance, would be some 10-50x faster in FPGA fabric, so implementing them there is a "no-brainer".  I think the same applies to waveform rendering.

If I go the PCI-e route, then the Pi's software will be able to access any waveform.  Address translation for trigger correction might even be performed on the Zynq side, if I'm smart enough to make that work; if not, doing it on the Pi would not be terribly difficult or resource-intensive.  There is no 'foot shooting' here, because the CM4 would have access to whatever memory it needs, so it can do as little or as much processing as it wants.  It would just set up a task queue of waveform buffers to be rendered and pull the image out over PCI-e when it gets an interrupt, for example.  There's also the bidirectional aspect, so the Pi could load arbitrary code into the Zynq (or even a new bitstream!) via the fast PCI-e link, or send waveforms across for DDS functions or other DSP processing.  IIRC the PCI-e on the Pi CM4 is a 5Gb/s x1 link; the Zynq supports up to x4, and UltraScale goes up to x16.

BTW the external 10MHz reference will prove more complicated than you think. This will need to feed the clock synthesizer on the board directly as any FPGA based 'PLL' will add way too much jitter.

Only if we used the FPGA PLLs to do anything with the clock signal - I was merely suggesting routing the logic signal through combinatorial blocks and IO, which at 10MHz should be OK.  The biggest risk is that it will introduce noise, worsening the jitter and phase noise, but this might not be that significant.  However, external switches routing the signals in the analog domain (or a low-jitter digital domain) are another option; they just add more logic and cost.

« Last Edit: January 01, 2021, 10:32:43 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #287 on: January 01, 2021, 11:19:09 pm »
I still think the best solution is to use PCI express and do all processing on a compute module (not necessarily CM4 but let's start there). LeCroy oscilloscopes are built this way as well; all waveform processing is done on the host platform. Doing processing inside the FPGA sounds nice for short term goals but long term you'll be shooting yourself in the foot because you can't extend the resources at all. At this point you already identify using the Zynq as a bottleneck! Going the PCI express route also allows using multiple FPGAs and extending the system to multiple channels (say 8) by just copying and pasting the same circuit. Heck, these could even be modules which plug into a special backplane with slots to distribute trigger, time synchronisation and PCI express signals. A standard PCI express switch chip aggregates all PCI express lanes from the acquisition units towards the compute module. Either way the next step should include moving all processing to a single point to achieve maximum integration. Having multiple processors at work and needing to communicate between them makes the system more difficult to develop and maintain (been there, done that).

While I do agree that the CM4 (or whatever module is used) should do much of the processing, I disagree that it should do all of it.  Certain aspects are beneficial for FPGA based logic, for instance digital filters using multiplier blocks would be some 10-50x faster if using the FPGA fabric, so it is a "no brainer" to do that.  I think the same applies for waveform rendering. 
There is no need to do filtering and/or rendering inside the FPGA.

First of all, you don't need to filter and render the entire record, only enough to fill the screen. Being able to adjust the filter parameters after an acquisition is a big plus, and I don't see that being possible if the FPGA filters the data coming from the ADC.

Secondly, I understand when you say that the processor also has access to the data, but that implies some operations happen in the FPGA and some in software, and you can't tie those together in an easy way. Say the processor wants the filtered data plus the original to do protocol decoding: you'll need to tell the FPGA to deliver the filtered data AND fetch the original data from memory. And what if someone wants to extend the filters but the FPGA doesn't support that? In software it is easy to implement a 9th-order filter; an FPGA implementation is much more rigid, so that person likely ends up giving up, or re-implementing filtering in software anyway and leaving the FPGA implementation abandoned. Or there is a need to do something with the data in software before filtering, which requires feeding the data into the FPGA and then retrieving it.

You have to choose where the processing takes place, because whether it is rendering or processing, it has to be possible to insert or delete blocks (each performing one operation) in the processing chain in an easy way. It is either FPGA or software; there is no AND, because otherwise there are two different places where a 'product' is being made. Like assembling half of a car in the UK and the other half in France: it doesn't make sense from an architectural point of view.

Thirdly, a reasonable GPU offers a boatload of processing power - probably even more than the FPGA - with a lot less effort to get it going. Remember that none of your survey respondents listed FPGA development as their profession; the FPGA's role therefore needs to be as minimal as possible so that as many people as possible can participate.

IMHO the FPGA should only do these functions:
- format & buffer (FIFO) the data from the ADC so it can be stored in the acquisition memory
- run the trigger engine

The trigger engine is already complicated enough if it needs to support protocol triggering (and I like the idea of logic analyser like state machine triggering).

I'm not ruling out that the FPGA can play a role in data processing but that will be a carefully considered optimisation which will fit with the architecture of the rest of the system. Implementing data processing inside the FPGA now is optimising before knowing the actual bottlenecks.
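To make the "insert/delete blocks" idea concrete, here is a minimal sketch (mine, not from any actual scope codebase) of a software processing chain where a filter is just another block that can be added, removed or reordered:

```python
# Sketch of a software-only processing chain: each block is a plain function
# on the sample array, so inserting, deleting or reordering operations is
# trivial. Block names and the chain itself are illustrative only.
from typing import Callable, List
import numpy as np

Block = Callable[[np.ndarray], np.ndarray]

def run_chain(samples: np.ndarray, chain: List[Block]) -> np.ndarray:
    """Apply each processing block in order to the acquisition record."""
    for block in chain:
        samples = block(samples)
    return samples

# Example blocks: a higher-order FIR is just another array of taps,
# not a new FPGA image.
def fir(taps: np.ndarray) -> Block:
    return lambda x: np.convolve(x, taps, mode="same")

def invert() -> Block:
    return lambda x: -x

chain = [fir(np.ones(10) / 10), invert()]   # easy to extend or reorder
out = run_chain(np.ones(100), chain)        # DC input stays (negated) DC
```

The point is architectural: every stage lives in one place, so a decoder wanting both raw and filtered data just calls the chain twice with different block lists.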

PS: I didn't fill in the survey
« Last Edit: January 02, 2021, 12:05:09 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #288 on: January 02, 2021, 12:21:21 am »
Where are down-sampling/decimation/ERES/peak-detect supposed to be done when data are to be stored at a lower sampling rate (say only 100kSa/s) than the maximum ADC rate, so that a longer time interval fits into the memory?

[ Assume I want to acquire a single buffer of 100M samples at 100kSa/s (i.e. 1000 seconds). In that case it is no longer feasible to dump 1000s of data @1GSa/s to memory first and decimate in post-processing. ]

Any trigger filters (in front of the comparators) also need to be applied in the FPGA.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #289 on: January 02, 2021, 12:43:02 am »
Where are down-sampling/decimation/ERES/peak-detect supposed to be done when data are supposed to be stored at a lower sampling rate (say only 100kSa/s) than the maximum ADC rate (in order that a longer time interval fits into the memory)?

[ Assume, I want to acquire a single buffer with 100M samples at 100kSa/s (i.e. 1000 seconds). In this case it is no longer feasible to dump 1000s of data @1GSa/s to memory first, and decimate in the post-processing. ]
Peak-detect and decimation will need to be done inside the FPGA. But these are due to limitations of memory space versus the duration of the acquisition (IOW: when the sampling rate can no longer be the maximum). Eres OTOH can be done in software as this is a post-processing step; doing this in software has the advantage that you can change the setting after the acquisition.

Anything trigger related has to be done inside the FPGA though due to realtime requirements.
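As a rough illustration of the peak-detect decimation described above (a sketch of the idea, not the actual FPGA logic), keeping a min/max pair per block preserves narrow glitches even at high decimation ratios:

```python
# Sketch of peak-detect decimation as it would run in the FPGA: for each
# block of n raw samples, keep only the (min, max) pair, so a 1 GSa/s
# stream can be stored at a much lower effective rate without losing
# single-sample glitches. Numbers below are made up for illustration.
import numpy as np

def peak_detect(raw: np.ndarray, n: int) -> np.ndarray:
    """Decimate by n, emitting a (min, max) pair per block of n inputs."""
    blocks = raw[: len(raw) // n * n].reshape(-1, n)
    return np.stack([blocks.min(axis=1), blocks.max(axis=1)], axis=1).ravel()

# A one-sample spike in an otherwise flat record survives 10000:1 decimation:
raw = np.zeros(1_000_000)
raw[123_456] = 1.0                 # single-sample glitch
dec = peak_detect(raw, 10_000)     # 100 blocks -> 200 stored values
```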
« Last Edit: January 02, 2021, 01:12:05 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #290 on: January 02, 2021, 01:25:24 am »
Sure, the reason for storing data at a lower sampling rate is of course the limited amount of memory.

So far I have considered ERES/HiRes more of a down-sampling acquisition mode, storing data at a lower sampling rate but with higher precision.

If memory suffices to store the data at full speed, any kind of filter can of course be applied in post-processing.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #291 on: January 02, 2021, 01:31:02 am »
Sure, the reason for storing data at a lower sampling rate is of course the limited amount of memory.

So far I have considered ERES/HIRES rather a down-sampling acquisition mode, storing data at lower sampling rate, but with higher precision.
The actual implementation of ERES/HiRes is very brand-specific. Some DSOs store higher-precision values in the acquisition memory (Tektronix, for example) while others implement it as a math trace in software (LeCroy, for example). Implementing ERES/HiRes in software is the simplest and most flexible approach.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #292 on: January 02, 2021, 10:24:52 am »
Implementing ERES in software requires far more memory than the Zynq can address (1GB memory space) and would require the software to process a large number of samples for every waveform rendered.  You can still implement an ERES filter in software later if you want to, but the FPGA should also support ERES recording into, say, 16-bit accumulators, which saves memory and improves the available sample rate.
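A quick sketch of the 16-bit accumulator idea (illustrative only - the real implementation would be FPGA logic, not Python): summing each block of N 8-bit samples into one 16-bit word cuts the memory needed by N while keeping the extra precision.

```python
# ERES-style accumulation: sum each block of n 8-bit samples into a 16-bit
# word. An unsigned 16-bit sum of 8-bit samples (max 255 each) is safe for
# n up to 256, since 255 * 256 = 65280 still fits in uint16.
import numpy as np

def eres_accumulate(raw8: np.ndarray, n: int) -> np.ndarray:
    assert n <= 256, "sum of n 8-bit samples must fit in uint16"
    blocks = raw8.astype(np.uint16)[: len(raw8) // n * n].reshape(-1, n)
    return blocks.sum(axis=1, dtype=np.uint16)   # one 16-bit word per block

raw = np.full(1024, 255, dtype=np.uint8)   # worst case: full-scale input
acc = eres_accumulate(raw, 256)            # 4 accumulated words, no overflow
```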

However, I will concede, nctnico, that you have convinced me to investigate the Nvidia Jetson Nano module some more, as it has an x4 PCI-e interface, making it an ideal candidate for high-speed interfacing with the FPGA (which also supports x4 PCI-e in the 7015 configuration).  Depending on how much processing the Jetson can do, it may make sense to have a smaller FPGA doing limited processing, with more of the work done in software, where there is more user flexibility.

It really depends on how much DSP you want to do; there is a case for doing some DSP on the FPGA, but a GPU is also an option.  I'm still not thoroughly convinced the GPU would be good for rendering the waveform itself: while it initially seems like a good target, rendering involves random pixel hits, which GPUs are not generally designed to support.  Most GPUs, including Nvidia's Maxwell, are tile-based (technically Maxwell is tile-cache based, a minor difference), with the expectation that pixel hits will mostly fall within their tile range.  That said, it's certainly worth investigating.
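For anyone unfamiliar with why waveform rendering means random pixel hits: intensity-graded rendering is essentially a big 2D histogram, one scattered increment per sample. A rough CPU sketch (my illustration, not the ArmWave code):

```python
# Intensity-graded rendering as a 2D histogram: every (column, sample) pair
# increments one effectively arbitrary pixel, which is the scatter pattern
# tile-based GPUs dislike. Buffer size follows the 1536x256 figure used
# elsewhere in the thread; the waveform is synthetic.
import numpy as np

def render(samples: np.ndarray, width: int, height: int) -> np.ndarray:
    """Accumulate per-pixel hit counts into a height x width buffer."""
    cols = np.arange(len(samples)) * width // len(samples)
    rows = np.clip((samples * (height - 1)).astype(int), 0, height - 1)
    img = np.zeros((height, width), dtype=np.uint32)
    np.add.at(img, (rows, cols), 1)    # scattered increments: the hard part
    return img

wave = (np.sin(np.linspace(0, 2 * np.pi, 1536)) + 1) / 2   # 0..1 sine
img = render(wave, 1536, 256)
```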

I've ordered a Jetson Nano and will see what I can do with it using internally generated waveform data.  Porting ArmWave across should be an interesting January project.
 
The following users thanked this post: nctnico

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #293 on: January 02, 2021, 04:38:07 pm »
About the Jetson Nano... the thermal solution is horrible, and getting the software going to support the PCI express lanes is not very straightforward: it needs changes to the DTB files, and NVidia has turned those into a hugely convoluted mess without any documentation. I can lend a hand with the Jetson module (I have integrated the Jetson TX2 module in a product), but perhaps the RPi CM4 is a better choice (as a first step) where it comes to simplicity of integration and community support.
« Last Edit: January 02, 2021, 05:48:17 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #294 on: January 02, 2021, 10:41:53 pm »
I think it depends on the level of demand for DSP, but if you want to make the scope mostly software, you need a really powerful GPU and compute core.  Also, the x1 PCI-e on the Pi 4 is useful, but the Zynq could only use it at up to 2Gbit/s (after 8b10b coding), which is barely faster than the 2-lane CSI implementation.  Sure, it's memory-mapped, but there would be ways to do that with CSI-2 as well.  The Jetson Nano is x4, so theoretically a 1GB/s transfer rate - the whole RAM copied in one second.  Not bad.

The other thing that attracts me to the Jetson Nano is that it's pin-compatible with the Jetson Xavier NX, which offers some 3x the compute, so it's a route forward for serious power users.  From the datasheet specifications it seems plausible that it could run filters several thousand taps long at very high memory depths, if indeed the MAC engines can be chained suitably.  It is also likely to be able to do very long FFTs.

Not too concerned about the thermal solution for development, and for production the module comes without the heatsink so any thermal solution would necessarily be custom.  I'd prefer no fan but am aware that 5W+ in a small case with a passive heatsink will be a challenge.  Heatsinking to a larger aluminum extrusion was my preferred method when CM3 was in play.

It proved a bit harder to get the Jetson in the UK, as the distributor went out of stock when I ordered it (or maybe they never had stock), so I'll put it on my next Digi-Key order next week.

I've never played with PCI-e before so it will be an interesting learning experience, but so was reverse engineering CSI-2.  Any tips would be appreciated.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #295 on: January 02, 2021, 11:02:35 pm »
I've ordered Jetson stuff from Arrow and they forward the order to the local branch; that works quite well. Passive heatsinking is doable. IIRC the Xavier sits around 25W to 30W. A 20-ish by 10-ish centimeter heatsink with widely spaced fins to allow convection cooling does the job OK. 'My' Jetson TX2 project uses such a passive heatsink with a thermal design target of 60 degrees ambient at 20W dissipation (with some thermal headroom to spare).

Where it comes to PCI express routing, it is a matter of getting the differential pairs right, with phase-shift corrections to account for bends being shorter or longer. How difficult that is depends on the PCB package you are using; if it has differential phase matching, it is not difficult. On the FPGA side it should be a matter of setting up the core and dropping it into the design, after which it should pop up in the Linux kernel's PCI tree. On the software side, use mmap to map the PCI express memory areas into user space and you can talk to the FPGA. Unless there is a realtime requirement to handle something from software, a driver may not be necessary. Where it gets hairy is enabling/disabling caching and having the FPGA push data into the processor's memory space, but when the acquisition memory is attached to the FPGA that may not be necessary.
« Last Edit: January 02, 2021, 11:26:45 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #296 on: January 02, 2021, 11:28:35 pm »
Unfortunately, I veto ordering from Arrow, due to a prior total screwup - actually on this very project - that cost me weeks of my time chasing them for refunds after they totally misunderstood DDP incoterms.  Digi-Key do have stock, and they've never done me wrong.

Memory mapping should be fine then.  I would assume the process would need to have permission to access PCI devices, though?  Does it need to be in a PCI group or run as root/thru sudo?

PS.  Not worried about differential routing.  The DDR3 memory on the prototype was the hardest part; the CSI-2 bus was comparatively easy.  CAD tools: it's all CircuitMaker/Altium, though I'm considering a move to KiCad to keep the tools open.
« Last Edit: January 02, 2021, 11:33:14 pm by tom66 »
 

Offline Hydron

  • Frequent Contributor
  • **
  • Posts: 987
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #297 on: January 02, 2021, 11:41:14 pm »
Arrow seems to now have dropped the DDP option for UK shipments altogether, probably due to the clusterfuck that is Brexit making it harder to do.
Means that it's uneconomic to buy from them now anyway as they tend to ship multiple packages per order, each of which FedEx will bill you £12 for handling the VAT payment on.
Having similar issues buying from some other suppliers too, shame all the pain can't be reserved for the people who bought into the lies back in 2016 😡.

As for the Jetson, you may be interested in the open source antmicro Jetson carrier board - they have their Altium design files up on GitHub. Might save some time, even if it's just nabbing footprints etc. from it.
 
The following users thanked this post: tom66

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #298 on: January 03, 2021, 01:22:55 am »
Memory mapping should be fine then.  I would assume the process would need to have permission to access PCI devices, though?  Does it need to be in a PCI group or run as root/thru sudo?
Root rights are enough. From the OS point of view you are mapping a piece of physical memory into a user-space process. Something to look out for is marking the mapped memory as uncacheable.
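A minimal user-space sketch of the mapping described above. On Linux the BAR typically appears as /sys/bus/pci/devices/<id>/resource0 (path illustrative); here a temporary file stands in for the BAR so the snippet runs anywhere, and real register access would additionally need the uncacheable mapping mentioned:

```python
# Sketch of talking to FPGA registers over a memory-mapped PCIe BAR.
# A temp file stands in for the BAR resource file; in the real case you
# would open the sysfs resource0 file as root and mmap it the same way.
import mmap, struct, tempfile

BAR_SIZE = 4096   # assumed size of the FPGA register window

with tempfile.NamedTemporaryFile() as bar:      # stand-in for resource0
    bar.truncate(BAR_SIZE)
    with mmap.mmap(bar.fileno(), BAR_SIZE) as regs:
        # Poke a 32-bit 'control register' and read it back, as the scope
        # software would do to configure the FPGA over PCIe:
        regs[0:4] = struct.pack("<I", 0xDEADBEEF)
        value = struct.unpack("<I", regs[0:4])[0]
```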

Quote
PS.  Not worried about differential routing.  The DDR3 memory on the prototype was the hardest part.  CSI-2 bus was comparably easy.  CAD tools, it's all CircuitMaker/Altium.  Am considering whether I should move to KiCad though to keep tools open.
I'd stick to Altium for now. The costs of producing a prototype are so high that it is unlikely many people will change the layout, and those who do might want a completely different form factor and start from scratch.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #299 on: January 05, 2021, 04:05:17 am »
If you need layout help ... or a second pair of eyes. ping me.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #300 on: January 05, 2021, 08:20:52 pm »
If you need layout help ... or a second pair of eyes. ping me.

Thanks, free_electron.  I'd certainly appreciate a second pair of eyes on the dual-channel DDR3 memory controller when I get to that point.  The single-channel one was fine, but had I not done an extensive design review (which I felt necessary given such a complex board), I would have missed a fatal error.
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28368
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #301 on: January 05, 2021, 08:24:58 pm »
FE's Altium library is also a very useful resource.  ;)
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #302 on: January 07, 2021, 06:13:28 pm »
Jetson Nano arrived.  I probably won't be able to look at it properly until the weekend with my current professional (i.e. pays-the-bills) workload, but I'm excited to give it a play.

I've considered just implementing an x1 PCI-e interface now, using the M.2 slot - it could be done with a small, probably custom, PCB adapter and a Zynq devkit with a PCI-e port.  That said, spinning a board may be the only reasonable way forward; I won't know until I do some more research.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #303 on: January 07, 2021, 06:46:21 pm »
Jetson Nano arrived.  Probably won't be able to look at it properly until the weekend with my current professional  (i.e. pays the bills) workload.  But excited to give it a play.

I've considered just implementing an x1 PCI-e interface now using the M.2 slot - could be done with a small, probably custom PCB adapter and a Zynq devkit with a PCI-e port.  That said, spinning a board may be the only reasonable way forward, don't know until I do some more research.
That is certainly doable. I have made an M.2 key E (IIRC) to key B/M converter to connect an NVMe drive to the M.2 slot. For the final board I used a USB Type-C connector as a cheap way to bring a PCI express bus off board, to provide an external (pluggable) NVMe slot. The only downside is that USB Type-C can be plugged in both ways, so the plug needs to have the right orientation. A mini-SAS SFF-8087 connector is another option.
« Last Edit: January 07, 2021, 06:49:14 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #304 on: January 07, 2021, 08:28:51 pm »
I've used HDMI ports before for a proprietary interface that needed 4 diff pairs.  I use SATA on the current prototype, but it wouldn't have enough lanes for x1 PCI-e unless I used two cables, which could get messy in terms of length matching etc.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #305 on: January 07, 2021, 08:43:11 pm »
HDMI is indeed a good option. Lots of connector choices nowadays for bringing high-speed interfaces off board.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #306 on: January 17, 2021, 10:35:57 pm »
Just got this working.


~130k waves/sec using the original, buggy GL renderer - the one that managed to persistently lock up the Pi's GPU driver.  Now, I know we're not necessarily competing on waveform rate here, but this has barely been optimised and already outpaces ArmWave by 6x, which should come as no great surprise.  It's doing dot-join too (you can't necessarily see that from the rendered output as it's a sine wave, but it is joining each point with a vector, which used to kill the older renderer's performance).

Caveats: internally generated waveform, not connected to the Zynq yet.  It seems to use only about half of the GPU's compute units and is probably inefficiently designed.  It does struggle with longer waveform lengths (>4k points); I need to investigate why.  No zero-copy for now, so every waveform buffer is copied into GPU space on each frame, but this should be fairly trivial to fix.

Currently I render to an offscreen 1536x256 buffer and then scale that up to the window size using linear interpolation.  This seems to hide some quantisation artefacts, though it might be undesirable.  However, comparing e.g. the Rigol DS1000Z and Keysight 2000X, both seem to do some kind of linear interpolation in the vertical axis, so I think this is quite normal.  I could use a nearest-neighbour scaler, but it looks awfully ugly.
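For illustration, the vertical linear-interpolation upscale could look something like this (buffer size from the post; the code is my sketch, not the actual renderer):

```python
# Linear interpolation along the vertical axis: stretch a 256-row render
# buffer to the window height by blending adjacent source rows. Sizes are
# illustrative; a nearest-neighbour scaler would just use img[lo] directly.
import numpy as np

def upscale_rows(img: np.ndarray, new_h: int) -> np.ndarray:
    """Linearly interpolate each column from img.shape[0] rows to new_h."""
    old_h = img.shape[0]
    pos = np.linspace(0, old_h - 1, new_h)     # sample positions in old rows
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, old_h - 1)
    frac = (pos - lo)[:, None]
    return img[lo] * (1 - frac) + img[hi] * frac

buf = np.linspace(0, 1, 256)[:, None] * np.ones((1, 1536))  # 256x1536 ramp
win = upscale_rows(buf, 768)    # 3x vertical scale for the window
```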

Next week's challenge, I think, will be to play with some DSP.  But for now,  it's time to sleep. 
« Last Edit: January 17, 2021, 10:41:59 pm by tom66 »
 
The following users thanked this post: nctnico, tv84, JohnG, hhappy1

Offline Pitrsek

  • Regular Contributor
  • *
  • Posts: 171
  • Country: cz
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #307 on: January 20, 2021, 07:08:40 pm »

PS.  Not worried about differential routing.  The DDR3 memory on the prototype was the hardest part.  CSI-2 bus was comparably easy.  CAD tools, it's all CircuitMaker/Altium.  Am considering whether I should move to KiCad though to keep tools open.
I'd stick to Altium for now. The costs for producing a prototype are so high that it is unlikely many people will be changing the layout and if they do they might want a completely different form factor and start from scratch.
I can produce a lengthy list of reasons why KiCad sucks, but for an open source project it makes a great difference whether the tool is open or not - not just for collaborators, but also for learning and fooling around: just download the project and play with it. So although I'm in no way a KiCad evangelist, I have to suggest switching to KiCad as the de facto standard open source tool. I'd also suggest looking at Horizon EDA - I use it for my hobby projects and so far like it very much. It supports length matching, but I haven't used it for high-speed routing, so I can't attest to how usable it is in that department. In some respects it is a much more polished tool than KiCad, although it doesn't have KiCad's breadth.

Depending on how things turns out in my life, i might have some spare time on my hands in upcoming months. I can do layout, power integrity/power supply design, analog stuff, reviews... PM me if you'd be interested.
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #308 on: February 02, 2021, 08:30:36 am »
Would certainly be interested in that kind of thing, Pitrsek.  I think KiCad is probably the most developed of the FOSS toolchains.  The problem is that DDR3 routing, differential-pair length matching and the like are a lot harder without a proper EDA tool.  You need to be able to length-match in groups, highlight group colours separately, run DRC checks with confidence; dynamic push/shove routing is really nice too.  I used to use gEDA for open source stuff - the schematic editor is fine, but the PCB tool is not modern enough to keep up with many high-speed designs, which is problematic.  Although I last used it in 2015, so maybe it has moved on since.

Apologies for not updating,  just been very busy here and continue to be busy.  I hope to have some proper time to give this project a look soon, the Jetson Nano is sitting on my desk waiting for me.
 

Offline Pitrsek

  • Regular Contributor
  • *
  • Posts: 171
  • Country: cz
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #309 on: February 03, 2021, 10:03:57 pm »
Length tuning, differential pairs, skew within differential pairs, push and shove, net classes - it's all there, and quite a few DDR3 boards have already been done with KiCad:
https://kicad.org/made-with-kicad/categories/Single-board-Computer/ - one is actually Zynq-based.

The rules are probably the biggest difference so far, as the target length usually has to be defined manually. I would not call it on par with the pro tools, but it is definitely usable.
Horizon EDA's router is the one from KiCad.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #310 on: February 07, 2021, 10:09:47 am »
I'm aware KiCad can do that, so it makes sense to use it at some point.  Actually, CircuitMaker doesn't even do length matching or tuning; that is reserved for the premium Altium product.  You do at least get length lists, but if you want to length match to the longest net in a group, you have to do it manually, both the calculation and the tuning.  The control/address bus needs to be the longest of all of the groups, then the data buses need to be equally matched to each other, including strobe and mask (though data groups can have independent lengths, as they are each tuned at power up by the Zynq DDR3 controller).  So lots of Excel spreadsheets and hand calculations.

I also noticed a mistake recently while reviewing this design, as I'm now working on a commercial project that also uses a Zynq.  I didn't terminate the ODT ball to Vtt, as I had read somewhere that it was a CMOS signal.  Wrong - it is high speed.  Somehow the scope board still works OK, but I guess the ODT signal has a lot of reflections on it, which might affect when the termination on the data bus side is enabled.  I'll make sure to terminate ODT in the new design!
« Last Edit: February 07, 2021, 10:21:58 am by tom66 »
 
The following users thanked this post: egonotto

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #311 on: February 07, 2021, 11:52:23 pm »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts it seems that these make life a lot easier.

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.
I have attached a design I created based on earlier circuits. IIRC it is intended to offer a 100MHz bandwidth and should survive being connected to mains (note the date!).
Left to right, top to bottom:
- Input section with attenuators. Not sure whether the capacitance towards the probe is constant.
- Frequency compensation using varicaps. This works but requires a (digitally) adjustable voltage of up to 50V and I'm not sure how well a calibration holds over time. Using trim capacitors might be a better idea for a first version.
- over voltage protection
- high impedance buffer
- anti-aliasing filter. Looking at it I'm not sure whether a 7th order filter is a good idea due to phase shifts.
- single ended to differential amplifier and analog offset
- gain control block and ADC.

Nowadays I'd skip the external gain control and use the internal gain control of the HMCAD1511/20 devices. It could be a nice Christmas project to see how it behaves.
Meanwhile I have been slowly moving forward with this and had a PCB made which is based on the schematic I posted earlier. I'm not going to post the schematic of this new circuit yet because some parts are experimental and I don't want a flurry of schematics floating around.

This first version has the following design targets:
- sensitivity from 500uV/div to 20V/div by using 1:2.5, 1:10 and 1:200 attenuators
- constant input capacitance
- DAC driven offset adjust
- target 200MHz bandwidth
- 2 different anti-aliasing filters
- DAC driven voltage adjustable capacitors to make the attenuators have a flat frequency response; however, the adjustable capacitors I found turned out to have too low a resistance path and thus are useless. I have found some low voltage varicaps though which seem suitable as well; these are on the way.
- adjustable, high precision compensator output which can be used for self-calibration of the front-end
- 20x or 2x gain consisting of fixed 2x gain stage and switchable 10x gain stage. I want to see if the 10x can work by switching a feedback resistor (probably not) or an extra amplifier stage is needed. The prototype supports both.
- differential output compatible with tom66's prototype board which uses a SATA connector and the HMCAD15xx ADCs

It will probably take a while before I get it fully tested & tweaked. I just wanted to give an update.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, 2N3055, Pitrsek

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #312 on: February 08, 2021, 11:44:36 pm »
- sensitivity from 500uV/div to 20V/div by using 1:2.5, 1:10 and 1:200 attenuators

Do you mean 2.5x attenuation minimum, even for 500µV/div?
(That would be only 200µV/div after the attenuator -- isn't that potentially ~8dB of SNR given away?)

Quote
- 20x or 2x gain consisting of fixed 2x gain stage and switchable 10x gain stage. I want to see if the 10x can work by switching a feedback resistor (probably not) or an extra amplifier stage is needed. The prototype supports both.

Hmm, how is 20x gain supposed to suffice for 500µV/div (i.e. 5mV full scale on a 10 div display)?
After 1:2.5 attenuation, a gain of more like 1000x would be required to reach the 2V full-scale input voltage of the HMCAD1511 (and without prior attenuation it would still be 400x gain).
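As a quick sanity check of that arithmetic (assuming 10 vertical divisions and the HMCAD1511's 2 Vpp differential full scale, as above):

```python
# Gain needed for 500 µV/div on an HMCAD1511 (2 Vpp full-scale input).
full_scale_v = 2.0        # HMCAD1511 differential full-scale input, Vpp
divisions = 10            # assumed vertical divisions on screen
v_per_div = 500e-6
signal_fs = v_per_div * divisions            # 5 mV spans the whole screen
gain_no_atten = full_scale_v / signal_fs     # gain with no attenuator
gain_after_2p5 = gain_no_atten * 2.5         # gain after the 1:2.5 attenuator
print(round(gain_no_atten), round(gain_after_2p5))  # 400 and 1000
```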

In 8-bit mode I see the possibility of augmenting the analog gain with some amount of the HMCAD1511's digital gain, with only a small SNR degradation, but according to [1] the useful range is still limited to <= 10x.

OTOH, for the 12/14-bit modes of the HMCAD1520 (which were discussed in this thread too), which already utilize (almost) the full DR of the ADC, I don't see digital (coarse) gain as an option, so the whole amplification to 2V full scale needs to be done in the analog domain. [Digital fine gain can possibly still be used for small calibration adjustments of +/- a few percent, but I'm not sure whether "no missing codes" is still guaranteed in 14-bit mode then.]


[1] https://www.analog.com/media/en/technical-documentation/application-notes/using_digital%20gain_feature_of_hmcad1511.pdf
Quote
Lab testing has shown that gain settings up to 8x (corresponding to 0.25Vpp-diff. input full-scale) show minimal loss in SNR (as evident in Figure 5). SNR in dBc starts to degrade rapidly beyond digital gain of 10x, so the user is advised to keep the digital gain setting at 10x or less.
« Last Edit: February 08, 2021, 11:46:15 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #313 on: February 09, 2021, 12:04:20 am »
- sensitivity from 500uV/div to 20V/div by using 1:2.5, 1:10 and 1:200 attenuators

Do you mean 2.5x attenuation minimum, even for 500µV/div?
(that were only 200µV/div after the attenuator -- aren't that possibly ~8dB of renounced SNR?)
Yes. This is to have some protection of the input and provide a constant input capacitance.

Quote
Quote
- 20x or 2x gain consisting of fixed 2x gain stage and switchable 10x gain stage. I want to see if the 10x can work by switching a feedback resistor (probably not) or an extra amplifier stage is needed. The prototype supports both.

Hmm, how is 20x gain supposed to suffice for 500µV/div (i.e. 5mV full scale on a 10 div display)?
After 1:2.5 attenuation, a gain of rather 1000x were required to obtain the 2V full scale input voltage for the HMCAD1511 (and without prior attenuation it were still 400x gain).

In 8-bit mode I see the possibility to augment the analog gain by some amount of HMCAD1511's digital gain, with only small SNR degradation, but according to [1] the useful range is still limited to <= 10x.

OTOH, for the 12/14-bit modes of HMCAD1520 (which were discussed in this thread, too), which already utilize (almost) the full DR of the ADC, I don't see digital (coarse) gain as an option, so that the whole amplification to 2V full scale needs to be done in the analog domain. [Digital fine gain can possibly still be used for small calibration adjustments of +/- a few percent, but I'm not sure if "no missing codes" ist still guaranteed in 14-bit mode then.]


[1] https://www.analog.com/media/en/technical-documentation/application-notes/using_digital%20gain_feature_of_hmcad1511.pdf
Quote
Lab testing has shown that gain settings up to 8x (corresponding to 0.25Vpp-diff. input full-scale) show minimal loss in SNR (as evident in Figure 5). SNR in dBc starts to degrade rapidly beyond digital gain of 10x, so the user is advised to keep the digital gain setting at 10x or less.
For 500uV/div (which translates to 4mVpp across 8 divisions) the ADC gain will indeed need to be set to maximum, and even then you still won't get the full range. This is a limitation. Still, there is plenty of room to increase the amplification later on; I just had to start somewhere and concentrated on the 8-bit ADC. I would like to avoid using a VGA because that adds extra noise to the signal path. If you translate the SNR to what is being displayed: setting the gain to maximum results in a degradation of about 12dB, i.e. 2 bits or 4 LSB. So basically you get 1/8th of a vertical division of noise on screen (assuming 8 divisions is full range) with the ADC gain set to max. I want to see how the analog front-end behaves where it comes to noise first, before worrying about the ADC.
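That on-screen translation checks out numerically (assuming the usual ~6.02 dB per bit and 8 divisions spanning the full 8-bit range, as stated above):

```python
# 12 dB SNR degradation ≈ 2 bits ≈ 4 LSB of added noise on an 8-bit ADC.
lsb_noise = 2 ** (12 / 6.02)           # ~6.02 dB per bit -> ~4 LSB
codes = 2 ** 8                         # 8-bit ADC -> 256 codes
lsb_per_div = codes / 8                # 8 divisions full range -> 32 LSB/div
frac_of_div = lsb_noise / lsb_per_div  # fraction of a division of noise
print(round(lsb_noise, 2), round(frac_of_div, 3))  # ≈ 4 LSB, ≈ 1/8 div
```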
« Last Edit: February 09, 2021, 12:14:19 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #314 on: February 09, 2021, 08:29:19 am »
In my original sketching of ideas I had always planned for an IC like LMH2832.
https://www.ti.com/lit/ds/symlink/lmh2832.pdf

This would be combined with a single -39dB attenuation relay to get you a 78dB attenuation range in total. 

If you want the precision modes of the ADC, you can't use the gain stages.  For 8-bit mode they may be sufficient.  Not sure whether the gain stages differ on the '1511, whether it uses just an 8-bit core internally, or whether the true difference between the parts is only the ability to export the extra precision.

The HMCAD1511 also requires all of its inputs to be centred around VCOM, about 1 volt.  Shouldn't be a problem, just be aware it has a limited common mode range.
« Last Edit: February 09, 2021, 08:31:22 am by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #315 on: February 09, 2021, 09:16:55 am »
In my original sketching of ideas I had always planned for an IC like LMH2832.
https://www.ti.com/lit/ds/symlink/lmh2832.pdf

This would be combined with a single -39dB attenuation relay to get you a 78dB attenuation range in total. 
The tricky part of VGAs in general is that they need differential inputs with a specific DC offset. Not impossible but it takes some careful planning. I'm also not convinced a VGA is the solution with the lowest noise. DSOs using the HMCAD1511 (without VGA) are consistently showing extremely low noise levels.

Quote
If you want the precision modes of the ADC, you can't use the gain stages.  For 8-bit mode they may be sufficient.  Not sure if the gain stages vary with the '1511,  if it uses just an 8-bit core internally,  or if the true difference between the parts is only the ability to export that precision data out.

The HMCAD1511 also requires all of its inputs to be centred around VCOM, about 1 volt.  Shouldn't be a problem, just be aware it has a limited common mode range.
Don't worry, I have catered for a VCOM pin on my design.
« Last Edit: February 09, 2021, 09:20:38 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #316 on: February 09, 2021, 09:24:00 am »
Not sure if the gain stages vary with the '1511,  if it uses just an 8-bit core internally,  or if the true difference between the parts is only the ability to export that precision data out.

I'm pretty confident that it is the latter. The 1511 digital gain application note I referenced above also mentions higher internal precision. I guess the limitation is rather the maximum LVDS output data rate, which limits even the 1520 to 2/3 of the full sampling rate when it outputs 12-bit high-speed data instead of 8-bit.

The 14-bit precision mode of the 1520 is still a bit different, as it utilizes only a single ADC core per analog channel, without interleaving.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #317 on: February 09, 2021, 01:51:09 pm »
If it were just a data rate limit, then it wouldn't apply here: the 640MSa/s 12-bit unpacked mode only uses a ~7.7Gbit/s link rate, which is actually lower than the 8-bit 1GSa/s mode's 8Gbit/s.

I plan to use the part in a 16-bit padded mode as it's more compatible with a single receiver engine; I just need to adjust how the data is unpacked (the SERDES blocks on a Xilinx 7-series part can't be dynamically resized).  This will limit the sample rate to 500MSa/s in 12-bit mode.  Further work would be required to move the SERDES to a receiver with a gearbox that could interpret each 8-bit word received differently, which is what's needed to unlock the 640MSa/s rate (+28%).
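The link-rate arithmetic behind those numbers, per channel (the samples × bits model here is a simplification that ignores lane count and framing details):

```python
# Aggregate LVDS payload rate per channel for the modes discussed (bits/s).
def link_rate(sample_rate, bits_per_sample):
    return sample_rate * bits_per_sample

rates = {
    "8-bit @ 1 GSa/s":             link_rate(1e9,   8),   # 8.0  Gbit/s
    "12-bit unpacked @ 640 MSa/s": link_rate(640e6, 12),  # 7.68 Gbit/s
    "16-bit padded @ 500 MSa/s":   link_rate(500e6, 16),  # 8.0  Gbit/s
    "16-bit padded @ 640 MSa/s":   link_rate(640e6, 16),  # 10.24 Gbit/s: too fast
}
for mode, r in rates.items():
    print(f"{mode}: {r/1e9:.2f} Gbit/s")
print(f"640/500 speedup: +{640/500 - 1:.0%}")
```

So 12-bit data in a 16-bit padded frame hits the same 8 Gbit/s ceiling at 500 MSa/s that 8-bit data hits at 1 GSa/s; only a gearbox that strips the padding gets you to 640 MSa/s.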

I suspect (if there is a difference at all, which I have yet to validate) that the '1511 has a fuse or laser disable for the additional functions that the '1520 has.  If we're really lucky there is no difference at all, and the '1520 functions in the '1511 register space are just unqualified and untested (a bit like the 40MSa/s ADC in the old Rigols that was running at 100MSa/s!)

Informally, with the basic on-board PLL I have got the '1511 on one board up to 1.2GSa/s.  I suspect the PLL was the ultimate limit, as the failure mode was a loss of lock for the ADC - as if the clock amplitude requirement (which was already marginal) was being violated even at 1GSa/s due to my buggy PLL design.  I'll try a higher amplitude or an external clock sometime.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #318 on: February 09, 2021, 05:47:22 pm »
Quote from: tom66
If it was just data rate limit then the 640Msa/s 12-bit unpacked mode only uses ~7.7Gbit/s link rate, which is actually lower than the 8-bit 1GSa/s mode.
I think they wanted to avoid an odd number like 666.66666... MSa/s, and decided that 640 is a "nice" number which is close enough.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #319 on: February 09, 2021, 06:22:30 pm »
Quote from: tom66
If it was just data rate limit then the 640Msa/s 12-bit unpacked mode only uses ~7.7Gbit/s link rate, which is actually lower than the 8-bit 1GSa/s mode.
I think they wanted to avoid an odd number like 666.66666... MSa/s, and decided that 640 is a "nice" number which is close enough.

Yes - but the point is, it's not the LVDS transceivers that limit that data rate.
It's either a difference in the ADC structures, or perhaps more likely just that the better parts are picked to get the 12/14-bit modes, with good enough INL/DNL or some other parameter?  There may not be any difference at all...
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #320 on: February 09, 2021, 07:54:27 pm »
Yes - but point is it's not the LVDS transceivers that limit that data rate.

Given the specified 20%-80% LVDS clock and data rise and fall times of 0.7ns in the "LVDS Output Timing Characteristics", I don't see how the LVDS data rate can be significantly more than 1Gbit/s per lane without violating these specs. At 1Gbit/s the data are stable for only 0.3ns, and at say 1.5Gbit/s the data would have no time to settle at all, given the 0.7ns rise time. These specs apply to both the 1511 and the 1520. In practice the transceivers may happen to be faster, of course, but that isn't guaranteed.

[ And if the data rate must not exceed 1Gbit/s per lane (by definition/specification) then the conversion rate needs to be reduced to <= 666MSa/s when 12 bits instead of 8 need to be transferred per conversion. ]
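The numbers in that argument, spelled out (0.7 ns 20%-80% transition time from the datasheet; "settled" here is simply the unit interval minus the transition time):

```python
# Unit interval vs. the 0.7 ns 20%-80% transition time from the datasheet.
t_rise = 0.7e-9
for bitrate in (1.0e9, 1.5e9):
    ui = 1 / bitrate           # unit interval at this bit rate
    settled = ui - t_rise      # time the data is actually flat
    print(f"{bitrate/1e9:.1f} Gbit/s: UI = {ui*1e9:.2f} ns, "
          f"settled = {settled*1e9:+.2f} ns")
```

At 1 Gbit/s there is 0.3 ns of settled data per bit; at 1.5 Gbit/s the settled time goes negative, i.e. the signal never stops slewing.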
« Last Edit: February 09, 2021, 08:08:22 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #321 on: February 09, 2021, 08:04:53 pm »
Yes - but point is it's not the LVDS transceivers that limit that data rate.

Given the specified 20%-80% LVDS clock an data rise and fall times of 0.7ns in the "LVDS Output Timing Characteristics" I don't see how the LVDS data rate can be significantly more than 1Gbit/s per lane, without violating these specs. At 1Gbit/s the data are stable for only 0.3ns, and at say 1.5 Gbit/s the data had no time to settle, given 0.7ns rise time. These specs apply to both 1511 and 1520. In practice the tranceivers may happen to be faster of course, but they don't guarantee it.
It is differential! In the end it depends on the threshold of the receiver; the pulse width at the receiver will be the full cycle (minus some jitter). If you look at the datasheet there is an intentional 50ps delay between the LVDS clock output and the data output as well.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #322 on: February 09, 2021, 10:01:45 pm »
Yes - but point is it's not the LVDS transceivers that limit that data rate.

Given the specified 20%-80% LVDS clock an data rise and fall times of 0.7ns in the "LVDS Output Timing Characteristics" I don't see how the LVDS data rate can be significantly more than 1Gbit/s per lane, without violating these specs. At 1Gbit/s the data are stable for only 0.3ns, and at say 1.5 Gbit/s the data had no time to settle, given 0.7ns rise time. These specs apply to both 1511 and 1520. In practice the tranceivers may happen to be faster of course, but they don't guarantee it.
It is differential! In the end it depends on the threshold of the receiver; the pulse width of the receiver will be the full cycle (minus some jitter). If you look at the datasheet there is an intentional 50ps delay between the LVDS clock output and data output as well.

Sure, in the end it depends on the receiver threshold, and the programmable clock phase also enables adjusting the point in time where the clock crosses that threshold. But even if it still happens to work, I would no longer call the timing "clean" when the transition time exceeds the unit interval. And the LVDS standards are obviously even stricter.

https://www.ti.com/lit/ug/slld009/slld009.pdf?ts=1612860014236
Quote
LVDS/M-LVDS Summary
The most attractive features of LVDS include its high signaling rate, low power consumption, and electromagnetic compatibility. The following sections summarize each of these benefits and Chapter 2, LVDS and M-LVDS Line Circuit Characteristics and Features, offers a more detailed explanation.
Signaling Rate
We define the number of state changes per unit time as the signaling rate for the interface. Knowing the unit interval time, tUI, between state changes, you can derive the signaling rate as the inverse of the unit interval. TIA/EIA-644-A and TIA/EIA-899 require that driver output transition times be less than 30% of the unit interval, with a lower limit of 260 ps and 1 ns, respectively. The standards also recommend that the transition time at the receiver input be less than 50% of the unit interval. The difference between driver output rise time and receiver input rise time allows for signal degradation through the interconnect media
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #323 on: February 10, 2021, 08:09:54 am »
Well, for what it is worth, the standard speed grade of the Zynq is only rated to 950Mbit/s per pin on the SERDES, and I haven't yet got the input delay tuning algorithm working, yet it runs stably whether hot or cold.  On this basis (though admittedly with no way to measure it) I suspect the LVDS signal is not as marginal as suggested and the specifications are worst case - though I may have just gotten lucky!
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #324 on: February 10, 2021, 09:13:02 am »
Yes - but point is it's not the LVDS transceivers that limit that data rate.

Given the specified 20%-80% LVDS clock an data rise and fall times of 0.7ns in the "LVDS Output Timing Characteristics" I don't see how the LVDS data rate can be significantly more than 1Gbit/s per lane, without violating these specs. At 1Gbit/s the data are stable for only 0.3ns, and at say 1.5 Gbit/s the data had no time to settle, given 0.7ns rise time. These specs apply to both 1511 and 1520. In practice the tranceivers may happen to be faster of course, but they don't guarantee it.
It is differential! In the end it depends on the threshold of the receiver; the pulse width of the receiver will be the full cycle (minus some jitter). If you look at the datasheet there is an intentional 50ps delay between the LVDS clock output and data output as well.

Sure, in the end it depends on the receiver threshold; and the programmable clock phase also enables adjusting the point in time where the clock crosses the receiver threshold. But even if it still happens to work I would no longer call it a "clean" timing when the transition time exceeds the unit interval. And LVDS standards are obviously even stricter.

https://www.ti.com/lit/ug/slld009/slld009.pdf?ts=1612860014236
Quote
LVDS/M-LVDS Summary
The most attractive features of LVDS include its high signaling rate, low power consumption, and electromagnetic compatibility. The following sections summarize each of these benefits and Chapter 2, LVDS and M-LVDS Line Circuit Characteristics and Features, offers a more detailed explanation.
Signaling Rate
We define the number of state changes per unit time as the signaling rate for the interface. Knowing the unit interval time, tUI, between state changes, you can derive the signaling rate as the inverse of the unit interval. TIA/EIA-644-A and TIA/EIA-899 require that driver output transition times be less than 30% of the unit interval, with a lower limit of 260 ps and 1 ns, respectively. The standards also recommend that the transition time at the receiver input be less than 50% of the unit interval. The difference between driver output rise time and receiver input rise time allows for signal degradation through the interconnect media
Yes and no. The steeper the edges, the more noise & heat dissipation (which is exactly what a high speed / precision ADC can do without). Slower edges mean more jitter at the receiver's end, but since the connection between the ADC and the FPGA is basically a synchronous parallel bus over a distance of a few cm at most, the environment is much better defined. The TI document is about using LVDS in multi-drop situations over relatively long distances, and likely in a way where the receiver also needs to do clock recovery, where jitter could be an issue.
« Last Edit: February 10, 2021, 12:52:15 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
An update on the analog front-end. No white smoke yet. What I was already afraid of turned out to be the case.

I basically started with the circuit I stopped working on over a decade ago (more or less the schematic I posted earlier). This circuit has separate attenuator sections which get switched on/off using relays. I have come up with the following attenuation / gain table to get to a wide variety of attenuation ranges:



There are 3 attenuation factors: 2.5, 10 and 200, and a 10x amplification stage on top of a standard 2x amplification (where it says 1 in the table there is no attenuation / amplification; Excel just works easier this way). The ADC's maximum input amplitude is 2Vpp (differential).
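Since the attached table image may not come through for everyone, here is roughly how such a table is generated. The attenuators and gains are the ones named above, but which combination serves which V/div range is my guess, not the actual spreadsheet:

```python
# Rough reconstruction of an attenuation/gain planning table.
# Factors come from the post; the combinations/ranges are illustrative.
from itertools import product

adc_fs_vpp = 2.0            # HMCAD15xx differential full-scale input, Vpp
attens = [1, 2.5, 10, 200]  # 1 = no attenuation
gains = [1, 2, 20]          # 2x fixed, optionally 10x on top

for att, g in product(attens, gains):
    # Input amplitude that fills the ADC = full scale x attenuation / gain.
    in_fs = adc_fs_vpp * att / g
    print(f"att 1:{att:<5} gain {g:>2}x -> {in_fs*1000:10.1f} mVpp full scale")
# The most sensitive ranges additionally rely on the ADC's digital gain,
# as discussed earlier in the thread.
```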



Now to the board: I thought I had found some nice voltage controlled capacitors for NFC applications, but these turned out to have a 30kOhm DC resistance, so they are useless in a high impedance circuit. So back to varicaps (which I have used before). Using some low voltage, high ratio varicaps I managed to get a decent voltage controlled adjustment range, which seems to work nicely from a +/- 5V supply. I added these with some hacking.

The biggest problem with the circuit is that the long traces have too much inductance and thus start to resonate at frequencies well over 50MHz, causing dips and peaks in the frequency response. That was the part I was already afraid of, and while measuring it dawned on me that I had originally designed (and probably tested) this circuit to work up to 45MHz. The NanoVNA I bought recently served as a really nice tool to measure the frequency responses quickly.

All in all I will need to rethink the input divider. I can still use the board to do more testing, and use it to verify some modelling of the board by including trace inductance in the divider design. Also the anti-aliasing filters need to be tested (the large number of unpopulated 0402 components). I have included a relatively steep elliptic filter and a Bessel filter.
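As a ballpark for why a few cm of trace starts to bite in this band: the classic f = 1/(2π√(LC)), with assumed values of roughly 1 nH/mm of trace inductance against ~20 pF of divider/compensation capacitance (both guesses for illustration, not measurements of this board):

```python
# Self-resonance ballpark: trace inductance against parasitic capacitance.
import math

def f_res(l_henry, c_farad):
    """Resonant frequency of an LC pair: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))

trace_l = 30e-3 * 1e-6  # ~30 mm of trace at a rough 1 nH/mm -> 30 nH (assumed)
par_c = 20e-12          # ~20 pF of divider/compensation capacitance (assumed)
print(f"{f_res(trace_l, par_c)/1e6:.0f} MHz")  # ~205 MHz with these values
```

With values in that neighbourhood the resonance lands right inside a 200 MHz target bandwidth, which is consistent with dips and peaks appearing well above 50 MHz.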
« Last Edit: March 22, 2021, 07:24:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: tom66, DaneLaw, gf, HerbTarlek

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Nice work. Unfortunately, I've not had enough time to look at the Jetson again.

Also, EEVblog doesn't notify me about replies on this thread for some reason, I'll try to keep up to date.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
I think everyone here is owed an update as it has been a long while now.

Unfortunately, I've come to the conclusion that realistically, I can't gather enough time to finish this project, with my career and other work taking up most of my free time.  I am currently in a precarious living situation,  looking to buy a home soon but in a market that has gone crazy due to COVID/work-from-home.  So, I find myself working long hours and generally not having enough time or energy left to look at this, in order to be able to provide the down payment on a home.  I am kicking myself for not doing this years ago, but that is FOMO and we don't do that here.

Longer term, I do want to revisit it and give another look to the platform and the software, but it probably won't be for another year or so when things have settled down on my personal side.  I have a lot of ideas for how to improve the platform.

I am more than happy for the community (if interested) to continue this project; as noted previously, everything is now open source, and I can supply the hardware schematics and designs in CircuitMaker (you can find them by looking for 'Scopy_MVP_Platform', they should be public).  But realistically this project requires several more man-years of effort, which I just cannot give it.  In any case, my day job now mostly involves dealing with Zynq and UltraScale, and it's really hard to find the time or energy to continue staring at Vivado after doing 10 hours of it every day.  At my old job I was only doing basic microcontroller and analog circuit design, so I felt I had a lot more 'bandwidth' available for project work after a work day.

Anyway, I appreciate all of the advice and comments on here and, maybe someday, there will be a good open-source scope available.  But, sadly,  I don't think I will be the one to deliver it - not just yet, at least.
 
The following users thanked this post: rf-loop, nctnico, egonotto, tv84, Fungus, jjoonathan, 2N3055, gf, HerbTarlek, Anthocyanina

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
I think everyone here is owed an update as it has been a long while now.

Unfortunately, I've come to the conclusion that realistically, I can't gather enough time to finish this project, with my career and other work taking up most of my free time.  I am currently in a precarious living situation,  looking to buy a home soon but in a market that has gone crazy due to COVID/work-from-home.  So, I find myself working long hours and generally not having enough time or energy left to look at this, in order to be able to provide the down payment on a home.  I am kicking myself for not doing this years ago, but that is FOMO and we don't do that here.

Longer term, I do want to revisit it and give another look to the platform and the software, but it probably won't be for another year or so when things have settled down on my personal side.  I have a lot of ideas for how to improve the platform.

I am more than happy for the community (if interested) to continue this project; as noted previously, everything is now open source, and I can supply the hardware schematics and designs in CircuitMaker (you can find them by looking for 'Scopy_MVP_Platform', they should be public).  But realistically this project requires several more man-years of effort, which I just cannot give it.  In any case, my day job now mostly involves dealing with Zynq and UltraScale, and it's really hard to find the time or energy to continue staring at Vivado after doing 10 hours of it every day.  At my old job I was only doing basic microcontroller and analog circuit design, so I felt I had a lot more 'bandwidth' available for project work after a work day.

Anyway, I appreciate all of the advice and comments on here and, maybe someday, there will be a good open-source scope available.  But, sadly,  I don't think I will be the one to deliver it - not just yet, at least.

Tom,

Thank you for letting us know, and I wish you good luck finding and buying a new home... Life comes first...

Take care,
Sinisa
 
The following users thanked this post: tom66, egonotto

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
FWIW I'd like to note that there is a 2nd prototype with a 7020 installed which I paid for which *really* should be in the hands of someone who wants to work on it.  Like Tom, I just don't have the time.  I had him keep it lest there be an accident with the other prototype.

Tom asked me if I'd pick up the tab for the 2nd prototype unit which I did, but I was a bit skittish about the resources in the Zynq part Tom chose.  So I paid to have a 7020 instead.  It's got a *lot* more resources than the 7014S.

Both of us would really like to see someone grab the ball and run with it.  So if anyone is interested, contact one or the other of us.

Tom and I have been using iPads, Facetime and MS Whiteboard to discuss technical stuff.  I can't speak for Tom, but I can certainly make time for consulting on DSP if someone wants to work on this.  There are some very cool features a la high end LeCroys I want to see implemented in a low end DSO.

Have Fun!
Reg
 
The following users thanked this post: Anthocyanina

Offline tatel

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: es
This situation is very sad but fully understandable. I want to thank both tom66 and rhb for all they have done and for the contribution they are making now.

However, it's clear to me that a project of this magnitude must be financed in some way. Perhaps through successive bounties as a series of goals are achieved.

I don't have the necessary knowledge, not even close. But without a doubt, there must be gifted people out there, capable of demonstrating they have achieved one of those goals.

If the objective were to develop an open hardware and free software device, regardless of commercial interests, I would commit myself to contribute €100 personally. I'm sorry it couldn't be more. There may be other people who think like me.

Of course there will be many who consider this crazy. If you are a living "I give only negative feedback" t-shirt, please don't bother to provide that feedback here. Of course the obstacles are enormous, not only financial and technical, but also organizational and human.

On the other hand, if there is only one place in the world where an initiative like this can be successful, without a doubt it's this forum ...  moreover it is impossible to do worse than Hantek with its latest development :box:

Perhaps we should think about moving this thread to a more appropriate section of the forum ... or perhaps it's better to let the dead rest in peace.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
My estimate was that it would require, at billable hours, around £100,000 worth of work to get it to a finished state where it could reasonably be pushed out to the real world. 

Realistically, this is only going to happen if the project were on a platform like Kickstarter or similar, and I really don't feel comfortable pushing it onto a platform like that unless it was a 'nearly-finished' product.   A secondary issue is that re-spinning a board now is rather difficult given the worldwide semiconductor shortage.  I would have to redesign many parts out of it completely, because you cannot buy them anywhere with any confidence that they are real parts.

I don't consider this a "dead" project, just one in hibernation until the time and circumstances are right. 
 
The following users thanked this post: rhb, egonotto, Anthocyanina, tatel

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Actually, I had been toying with the idea of starting a Kickstarter for an open source DSO long before this thread started. But tom66 is right: this is probably going to take a lot of money. I estimate it will take between 50k and 100k euro, and I simply can't afford to do such a project entirely for free.

However, I'd take a different route and follow the architecture of the LeCroy WavePro 7k I have. I would also go for very simple acquisition hardware (basically an FPGA with memory and some trigger facilities) and do all processing on the CPU/GPU (hopefully using some of the code tom66 has written). FPGA development is extremely time consuming and the result is usually rather inflexible.

I have base hardware designs, boards & software environments for both the NXP i.MX8 and NVIDIA Jetson TX2, which could serve as a development platform and as a basis for the final design. Both support all kinds of TFT screens and have PCI Express brought out on a high speed connector, so it is possible to hook up an FPGA board which has PCI Express. At this point I wouldn't worry about the component shortage; by the time the design is mature the shortage is very likely to be over, or at least less severe. I'd probably start on the i.MX8 because it is the lowest cost option.

On the software side I'd probably implement some simple waveform rendering and a plugin system for Python scripts and (probably) Sigrok. From there the community can start extending the functionality.
« Last Edit: June 16, 2021, 07:17:40 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: egonotto

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
My estimate was it would require, at billable hours, around £100,000 worth of work done on it to get it to a finished state where it could be reasonably pushed to the real world.

[snip]

I don't consider this a "dead" project just one in hibernation until the time and circumstances are right.

I completely agree with Tom on this. 

Tom did all the work.  All I did was throw a bit of money in the air to see what would happen.  I'm *very* happy with the result even though we are still a long way from an OS DSO.  In many respects, I still think hacking a Zynq based COTS DSO is the best option.

The things that *must* be done in an FPGA are fairly limited.  IIRC my conclusion was that filtering and triggering were the only things which had to be done in the FPGA.  The rest is much more leisurely because the human eye can't see changes faster than 120 Hz.
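To put rough numbers on that argument: a sketch of the raw ADC stream versus what actually needs to move after the FPGA has triggered. The ADC rate and sample width come from earlier in the thread; the frame length and display rate are illustrative assumptions, not design values.

```python
# Back-of-envelope data rates, assuming an 8-bit ADC at 1 GSa/s
# (figures from earlier in the thread; frame/display numbers are assumed).
adc_rate_sps = 1_000_000_000        # 1 GSa/s
bits_per_sample = 8
raw_bps = adc_rate_sps * bits_per_sample
print(f"Raw ADC stream: {raw_bps / 1e9:.0f} Gbit/s")          # 8 Gbit/s

# After triggering in the FPGA, only captured frames need to move on.
frame_len = 1000                    # samples per displayed frame (assumed)
display_rate_hz = 120               # eye-limited update rate
post_trigger_bps = frame_len * bits_per_sample * display_rate_hz
print(f"Post-trigger stream: {post_trigger_bps / 1e6:.2f} Mbit/s")
print(f"Reduction: {raw_bps / post_trigger_bps:.0f}x")
```

Even with generous frame sizes, the post-trigger stream is thousands of times slower than the raw ADC stream, which is why everything downstream of the trigger can live on a CPU/GPU.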

I think it worth noting that the nanoVNA languished for several years before it suddenly exploded on the world stage.  I'm hopeful that will be the case with this project.

Have Fun!
Reg
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
I completely agree with Tom on this. 

Tom did all the work.  All I did was throw a bit of money in the air to see what would happen.  I'm *very* happy with the result even though we are still a long way from an OS DSO.  In many respects, I still think hacking a Zynq based COTS DSO is the best option.
The problem with that is that extensibility and processing power are quite limited. While cycling to a customer this morning I had another idea: it could be worthwhile to add an edge TPU coprocessor, like the Google Coral module, to the platform (for example in an NVMe slot). I think it would allow all kinds of signal processing tasks to be done at very high speed.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
It would be really neat to play with neural network triggers, but again, I think you need to do it in real time, which implies it's done on the FPGA, unless the edge TPU can accept 8 Gbit/s+ of ADC data.

But the idea of giving a scope an arbitrary waveform and telling it, "OK, infer a good trigger from this mess", is quite neat.  I suspect it would require a dense NN, but it could be done; it's essentially the same problem as processing vision data to detect patterns, only applied in 1D.  Doing it across multiple channels is a little more complex.
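A classical stand-in for the "infer a trigger from this example" idea is a matched filter: normalized cross-correlation of the stream against a template waveform. An NN trigger would effectively learn the template (and tolerate more distortion), but the sketch below shows the 1D pattern-detection shape of the problem. All names and values here are illustrative.

```python
import numpy as np

# Matched-filter "pattern trigger": flag positions where the incoming
# stream resembles a user-supplied example waveform (the template).
def find_pattern_triggers(stream, template, threshold=0.9):
    """Return sample indices where the stream resembles the template."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    hits = []
    for i in range(len(stream) - n + 1):
        w = stream[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = np.dot(w, t) / n     # normalized correlation, in [-1, 1]
        if score >= threshold:
            hits.append(i)
    return hits

# Example: find a one-cycle sine burst buried in noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 50))
stream = rng.normal(0, 0.2, 1000)
stream[400:450] += template          # inject the pattern at index 400
hits = find_pattern_triggers(stream, template, threshold=0.8)
print(hits)                          # indices clustered around 400
```

The per-sample dot product is exactly the sort of thing that maps well onto FPGA DSP slices if it had to run at full sample rate; done in software it only works on pre-captured frames, which is where the TPU discussion below comes in.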
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
AFAIK Lecroy does some of the more advanced triggers in software (=not realtime) so the idea of feeding data into an edge TPU for triggering is not out of the question.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6704
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
You could probably pre-qualify the data with an edge trigger and only send frames of edge-triggered data to the TPU, which would then need to be realigned to create a real trigger.  It would reduce the data rate to something similar to what the Pi handles in the current design.

This is the kind of neat research-like project that could come out of an open source oscilloscope project; it's just a shame that time is short.
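The pre-qualification step above can be sketched in a few lines: a cheap rising-edge detector gates which frames ever leave the FPGA, so the downstream TPU/CPU only sees candidate events. Frame sizes, the threshold, and the test signal are assumptions for illustration, not values from the design.

```python
import numpy as np

# Edge pre-qualification: emit only short frames around rising crossings,
# discarding the (usually vast) quiet stretches in between.
def edge_prequalify(samples, threshold, pre=16, post=48):
    """Yield (index, frame) for each rising crossing of `threshold`."""
    above = samples >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    last_end = -1
    for c in crossings:
        if c - pre < 0 or c + post > len(samples) or c < last_end:
            continue                 # skip partial or overlapping frames
        last_end = c + post
        yield c, samples[c - pre:c + post]

# Mostly-quiet stream with three pulses: only those frames move downstream.
samples = np.zeros(100_000)
for start in (10_000, 50_000, 90_000):
    samples[start:start + 30] = 1.0
frames = list(edge_prequalify(samples, threshold=0.5))
kept = sum(len(f) for _, f in frames)
print(f"{len(frames)} frames, {kept}/{len(samples)} samples kept")
# prints "3 frames, 192/100000 samples kept"
```

The realignment mentioned above would then shift each frame so the "real" (inferred) trigger point lands at a fixed offset before display.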
 

Offline Aleksorsist

  • Newbie
  • Posts: 7
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #338 on: November 20, 2021, 09:59:34 pm »
I’ll start my own thread for this, but I would like to also reply here since it’s on topic. I’ve been working on an open source scope for a while now as well, and it’s at about the same stage as this project, just with more mature hardware (I spent months on the front end alone) and less mature software. I’m still actively developing it and was wondering if anyone would be interested in working together to make both of these projects happen?

Both target 1000-series specs (we use the same ADC too); mine is just a PC-connected scope while this is a benchtop scope. There’s been mention of using PCIe to a Jetson board instead of CSI to a RasPi; my hardware uses PCIe to the user’s PC, so it would be easy enough to swap the PC out for the Jetson (which has the four lanes of PCIe needed) to make my hardware into a suitable benchtop unit. That, combined with the more stable software from this project, could make both a PC and a benchtop version a reality!

Here’s all the documentation for my project: https://hackaday.io/project/180090-thunderscope

Let me know what you all think!
-Aleksa
 
The following users thanked this post: tv84, AndrewBCN

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #339 on: November 20, 2021, 10:17:03 pm »
Oh... One more "let's make better open source mousetrap" project  :palm:
 

Offline Aleksorsist

  • Newbie
  • Posts: 7
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #340 on: November 20, 2021, 10:30:15 pm »
Yeah, just another mousetrap, with more bandwidth, gigs of sample memory, and expandable triggers and measurements for the same price as other mousetraps. Or would you rather I had closed-sourced it and nickel-and-dimed everyone for every feature?
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #341 on: November 20, 2021, 10:53:02 pm »
Yeah, just another mousetrap, with more bandwidth, gigs of sample memory, and expandable triggers and measurements for the same price as other mousetraps. Or would you rather I had closed-sourced it and nickel-and-dimed everyone for every feature?
More bandwidth, more gigs of memory, more triggers and measurements than what exactly? Already available scopes? I am sorry to disappoint the dreamers, but let's be real: unless you find a massive number of supporters who need what you are offering, your open source initiative will fail. This is not the first such attempt, and unfortunately it won't be the last.
 

Offline Aleksorsist

  • Newbie
  • Posts: 7
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #342 on: November 20, 2021, 11:01:05 pm »
More bandwidth, more gigs of memory, triggers and measurements than what exactly? Already available scopes?
Yes, for anything this side of $1000.
 

Online jjoonathan

  • Frequent Contributor
  • **
  • Posts: 783
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #343 on: November 20, 2021, 11:14:34 pm »
I'm glad someone took up the torch! Best of luck.
 
The following users thanked this post: Aleksorsist

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6630
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #344 on: November 20, 2021, 11:24:48 pm »
More bandwidth, more gigs of memory, triggers and measurements than what exactly? Already available scopes?
Yes, for anything this side of $1000.

I for one like that you made a PCIe acquisition card. That can ensure bandwidth high enough to be usable.

My advice would be to get in contact with Andrew Zonenberg and try to make your hardware work with his GLscopeClient.
His software is the most advanced and most promising of them all; you would need years to get to where it is now.
He's also a great guy and I'm sure he would welcome the idea. He is also working on some acquisition boards of his own (among hundreds of other projects in parallel, the guy is a machine :-)) but the more, the merrier...

https://twitter.com/azonenberg?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
 
The following users thanked this post: nctnico, Kean, Aleksorsist

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #345 on: November 20, 2021, 11:31:54 pm »
More bandwidth, more gigs of memory, triggers and measurements than what exactly? Already available scopes?
Yes, for anything this side of $1000.

I for one like that you made PCIe acquisition card. That can ensure bandwidth high enough to be usable.

My advice would be to get in contact with Andrew Zonenberg and try to make your hardware work with his GLscopeClient.
His software is the most advanced and most promising of them all; you would need years to get to where it is now.
He's also a great guy and I'm sure he would welcome the idea. He is also working on some acquisition boards of his own (among hundreds of other projects in parallel, the guy is a machine :-)) but the more, the merrier...
That looks great at first glance and could be a good combo with some PCIe acquisition hardware + computer module like the Jetson !  :-+
« Last Edit: November 21, 2021, 12:08:54 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055, Aleksorsist

Offline Aleksorsist

  • Newbie
  • Posts: 7
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #346 on: November 20, 2021, 11:32:57 pm »
I was thinking the same thing; no need to reinvent the wheel making a whole new scope client since there’s already an awesome open source option! Glad to hear that Andrew Zonenberg’s a cool dude too. I’ll shoot him a message and get things started  :-+
 

Offline JoeRoy

  • Contributor
  • Posts: 37
  • Country: us
I don't know where this article fits in this forum, but I guess it has some relation to this project.

https://hackaday.io/project/167292-8-ghz-sampling-oscilloscope
 

Offline Tycho_Brahe

  • Contributor
  • Posts: 17
  • Country: us
Makes me wonder what happened to Ted Yapo.  Maybe burned out.  He was prolific for a while, up until a few years ago.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
FWIW  There's a 2nd board with a 7020 available to the right person.  So if someone wants to develop the FW for the FPGA a dev board is available.
 

Offline ArsenioDev

  • Regular Contributor
  • *
  • Posts: 238
  • Country: us
    • DiscountMissiles: my portfolio and landing page
Makes me wonder what happened to Ted Yapo.  Maybe burned out.  He was prolific for a while, up until a few years ago.
His home and lab burned down :(
Also, burnout and general exhaustion
 

