Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas  (Read 69757 times)


Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16628
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #25 on: November 16, 2020, 08:13:24 pm »
Quote from: tom66
If you look at the Rigol DS1000Z you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device. It is almost certain that the DDR memory is used just for waveform acquisition, and that the waveform is rendered into the SRAM buffer and then streamed to the i.MX processor (possibly over the camera port, as I am using). Whether the FPGA colourises the camera data or whether Rigol use the i.MX's ISP block to do that is unknown to me. Rigol likely chose an expensive SRAM because it allows true random access with minimal penalty when jumping to random addresses.

I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the displayed trace width (600 pixels).

 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #26 on: November 16, 2020, 08:13:55 pm »
Quote from: nctnico
IMHO you are at a crossroads where you either choose to implement a high update rate with poor analysis features and few people able to work on it (coding HDL), or a lower update rate with lots of analysis features and many people able to work on it (using OpenCL or even Python extensions). Another advantage of a software/GPU architecture is that you can move to higher-performance hardware simply by taking the software to a different platform. Think of the NVidia Jetson / Xavier modules, for example: a Jetson TX2 module with 128 GFLOPS of GPU performance starts at $400, and more GPU power automatically translates into a higher update rate. This is also how LeCroy's software works; look at their WavePro oscilloscopes and how a better CPU and GPU drastically improve the performance.

I agree, although there's no reason you can't do both; I had always intended for the waveform data to be read out by the main application software in a separate pipeline from the render pipeline. In a very early prototype, I did that by changing the Virtual Channel ID of the data set, so you could set up two simultaneous receiving engines.

What this means is that although the render engine might be complex HDL, you'll still be able to read linear waveform data in every case. I'd like this to interface well with NumPy arrays and Python slices, as well as offering a fast C API for reading the data.
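Roughly the kind of thing I mean, as a Python sketch - the buffer here is simulated, and in the real design it would be a zero-copy view of acquisition memory exposed by the (still hypothetical) C API:

Code: [Select]
import numpy as np

# "capture" stands in for a DMA buffer exposed by the C API; in the real
# design this would be a zero-copy mmap of acquisition memory, not a
# random test pattern.
capture = np.random.randint(0, 256, 1_000_000, dtype=np.uint8).tobytes()

samples = np.frombuffer(capture, dtype=np.int8)  # a view, no copy made
window = samples[5_000:6_000]                    # plain Python slicing
print(window.mean(), window.min(), window.max())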

But it would be good to ask: do people really, genuinely benefit from 100k waves/sec? I regard intensity grading as a "must have", so the product absolutely will have that, but is 30k waves/sec "good enough" for almost all uses, such that potential users would not notice the difference? I have access to a Keysight DSOX2012A right now, and I wouldn't say its intensity grading is that much more useful than my Rigol DS1074Z's, despite the Keysight having an on-paper spec of ~8x that of the Rigol.
 
Certainly, a more useful function (in my mind) would be a rolling history combined with >900 Mpts of sample memory, so you could go back up to ~90 seconds in time and see what the scope was showing at that moment. I find the Rigol's ~24 Mpt memory far more useful than the ~100 kpt memory of the Keysight.

Shifting the dots is computationally simple, even with sin(x)/x (which is not yet implemented). It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple - practically perfect FPGA territory. In the present implementation I simply read 0..3 dummy words from the FIFO, then rotate two words to absorb the last byte offset.
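In software terms the alignment step looks something like this rough model (plain Python, not the HDL; word and burst sizes as described above):

Code: [Select]
def align_capture(words, start_byte, word_bytes=8, burst_words=4):
    """Software model of the FPGA alignment trick: discard 0..3 dummy
    64-bit words to reach the right word, then slide an 8-byte window
    across two adjacent words (the 'rotate') to absorb the remaining
    0..7 byte offset. `words` is a list of 8-byte blocks as they would
    pop out of the FIFO."""
    skip = (start_byte // word_bytes) % burst_words   # 0..3 dummy words
    byte_off = start_byte % word_bytes                # handled by the rotate
    stream = b"".join(words[skip:])
    aligned = []
    for i in range(0, len(stream) - word_bytes, word_bytes):
        two_words = stream[i:i + 2 * word_bytes]      # a two-word window
        aligned.append(two_words[byte_off:byte_off + word_bytes])
    return b"".join(aligned)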
« Last Edit: November 16, 2020, 08:20:10 pm by tom66 »
 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #27 on: November 16, 2020, 08:15:54 pm »
Quote from: Fungus
Quote from: tom66
If you look at the Rigol DS1000Z you can see a fairly hefty SRAM chip attached to the FPGA, in addition to a regular DDR2/3 memory device. <snip>
I believe the Rigol main CPU can only "see" a window of 1200 samples at a time, as decimated by the FPGA. This is the reason that all the DS1054Z measurements are done "on screen", etc.

1200 samples is twice the displayed trace width (600 pixels).

Yes, it seems likely to me that it is transmitted as an embedded line in whatever stream carries the video data. The window is about 600 pixels across, so it makes sense that they would use e.g. the top eight lines for this data, two per channel. It is also clear that Rigol use a 32-bit data bus instead of my 64-bit data bus, as their holdoff/delay counter resolution is half what I support (my holdoff setting has 8 ns resolution due to the 125 MHz clock; theirs is 4 ns at 250 MHz). They use a Spartan-6 with fewer LUTs than my 7014S, so perhaps it's a trade-off there.
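The resolution arithmetic, for reference (a trivial Python check; the bus clock is just the sample rate divided by the samples carried per word):

Code: [Select]
def holdoff_resolution_ns(sample_rate_hz, bus_bits, sample_bits=8):
    """One holdoff-counter tick advances bus_bits/sample_bits samples,
    so the resolution is samples_per_clock / sample_rate."""
    samples_per_clock = bus_bits // sample_bits
    return samples_per_clock / sample_rate_hz * 1e9

print(holdoff_resolution_ns(1e9, 64))  # 64-bit bus, 125 MHz clock -> 8.0 ns
print(holdoff_resolution_ns(1e9, 32))  # 32-bit bus, 250 MHz clock -> 4.0 ns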

I am almost certain (though I have not physically confirmed it) that the Rigol does all the render work on the FPGA. Perhaps they use the i.MX CPU for the anti-alias mode, which gets very slow on longer timebases as it appears to render more (all?) of the samples.

The Rigol also does not decimate the data when rendering waveforms, so you can get aliasing in some cases, although these are fairly infrequent corner cases.
« Last Edit: November 16, 2020, 08:32:58 pm by tom66 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26873
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #28 on: November 16, 2020, 08:34:06 pm »
Quote from: nctnico
IMHO you are at a crossroads where you either choose to implement a high update rate with poor analysis features and few people able to work on it (coding HDL), or a lower update rate with lots of analysis features and many people able to work on it (using OpenCL or even Python extensions). <snip>

Quote from: tom66
But it would be good to ask: do people really, genuinely benefit from 100k waves/sec? I regard intensity grading as a "must have", so the product absolutely will have that, but is 30k waves/sec "good enough" for almost all uses, such that potential users would not notice the difference? <snip>
Personally I don't have a real need for high waveform update rates. Deep memory is useful (either as a continuous record or as a segmented/history buffer; segmented and history are very much the same). But with deep memory also comes the requirement to process it quickly.

Nearly two decades ago I embarked on a similar project where I tried to cram all the realtime & post processing into FPGAs. In the end you only need to fill the width of a screen, which is practically 2000 pixels; this greatly reduces the bandwidth towards the display section, but needs a huge effort on the FPGA side. The design I made could go through 1 Gpts of 10-bit data within 1 second and (potentially) produce multiple views of the data at the same time. The rise of cheap Asian oscilloscopes made me stop the project. If I were to take on such a project today I'd go the GPU route and do as little as possible inside an FPGA. I think creating trigger engines for protocols and special signal shapes will be challenging enough already.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2729
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #29 on: November 16, 2020, 09:17:40 pm »
Quote from: tom66
The bandwidth of this interface is less critical than it sounds: for an 8 Gbit/s ADC (1 GSa/s, 8-bit), just 10 LVDS pairs are needed. A modern FPGA has 20+ on a single bank, and on the Xilinx 7 series parts each pin has an independent ISERDESE2/OSERDESE2, which means you can deserialise and serialise as needed on the fly on each pin. There are routing and timing considerations, but I've not had an issue with the current block running at 125 MHz; I think I might run into issues trying to get it above 200 MHz with a standard -3 grade part.
As you go into the gigasample range, ADCs quickly become JESD204B-only, which is itself a big can of worms. Many of them will happily send 12 Gbps per lane or even more; for that you will need something more recent than 7 series (or a Virtex-7 - I think those can go that high, though I have no personal experience).
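To put numbers on the lane counts (back-of-envelope Python; the 800 Mb/s per LVDS pair is the rate implied by the quote above, and 12.5 Gb/s is a typical JESD204B lane ceiling - both assumptions here, not datasheet values):

Code: [Select]
import math

def lanes_needed(sample_rate_hz, bits, lane_rate_bps):
    """How many serial lanes it takes to move the raw ADC stream."""
    return math.ceil(sample_rate_hz * bits / lane_rate_bps)

print(lanes_needed(1e9, 8, 800e6))    # -> 10 LVDS pairs for 1 GSa/s x 8-bit
print(lanes_needed(5e9, 12, 12.5e9))  # -> 5 JESD204B lanes for 5 GSa/s x 12-bit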

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #30 on: November 16, 2020, 09:31:10 pm »
There's JESD204B support in the Zynq 7000 series, though only via the gigabit transceivers which are on the much more expensive parts.

I've little doubt that I'll cap the maximum performance around the 2.5GSa/s range - at that point memory bandwidth becomes a serious pain.

I've a plan (which I'll stay coy about for now) for how to get up to 2.5 GSa/s using regular ADC chips - it'll require an FPGA as 'interface glue', but it could be a relatively small one.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #31 on: November 16, 2020, 10:16:31 pm »
Quote from: tom66
Shifting the dots is computationally simple, even with sin(x)/x (which is not yet implemented). It's just offsetting a read pointer and a ROT-64 with an 8-bit multiple - practically perfect FPGA territory. <snip>
Note that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front-end characteristics. Expect the rendering speed to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news, if you're going down an all-digital trigger route (probably a good idea), the vast majority of "trigger" types are simply combinations of two thresholds and a one-shot timer, which are easy enough - see the sketch below. These can then be passed off to slower state machines for protocol/serial triggers. But without going down the dynamic reconfiguration route, or using multiple FPGA images, supporting a variety of serial trigger types becomes an interesting problem all of its own.
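A minimal Python model of that "two thresholds + one-shot timer" idea, using a pulse-width trigger as the example (illustrative only - the real thing would be comparators and a counter in fabric):

Code: [Select]
def pulse_width_triggers(samples, v_high, v_low, t_min, t_max):
    """Two comparators (with hysteresis) bound the pulse; a counter plays
    the one-shot timer and qualifies its width. Yields the index at which
    each qualifying pulse ends."""
    inside, width = False, 0
    for i, s in enumerate(samples):
        if not inside and s > v_high:      # rising edge: arm, reset timer
            inside, width = True, 0
        elif inside:
            width += 1
            if s < v_low:                  # falling edge: check the timer
                if t_min <= width <= t_max:
                    yield i
                inside = False

wave = [0, 0, 5, 5, 5, 0, 0, 5, 0]  # one wide pulse, one narrow pulse
print(list(pulse_width_triggers(wave, 3, 2, 2, 4)))  # -> [5]: only the wide one fires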
 

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 3176
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #32 on: November 16, 2020, 11:28:06 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me too" scopes. I'm sure there is a downside to this though - price, sample rate limitations, etc. Also, if there is to be a hi-res option, maybe have a user-adjustable setting for how many averaged samples go into each final sample, or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
« Last Edit: November 16, 2020, 11:30:41 pm by Circlotron »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #33 on: November 16, 2020, 11:47:20 pm »
This takes "home made" to a whole new level!
My suggestion would be to have an A/D with greater than 8 bits. This would set it apart from so many other "me to" scopes. I'm sure there is a downside to this though - price, sample rate limitations etc. Also, if there is to be a hi-res option, maybe have a user adjustable setting for how many averaged samples per final sample or however it is expressed. I love sharp, clean traces. None of this furry trace rubbish!
Part of the fun of open source is that you can ignore the entrenched ways of doing things and offer choices to the user (possibly ignoring IP protection along the way). A programmable FIR + CIC + IIR acquisition filter could implement a wide range of useful processing - the CIC stage, for instance, could look like the sketch below.
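A floating-point Python model of such a CIC decimator (hardware would use wide integer registers; R and N are free parameters here):

Code: [Select]
import numpy as np

def cic_decimate(x, R=8, N=3):
    """N-stage CIC decimator with rate change R and differential delay 1:
    N integrators at the input rate, decimate by R, then N combs at the
    output rate. The DC gain of R**N is divided out at the end."""
    y = np.asarray(x, dtype=np.float64)
    for _ in range(N):
        y = np.cumsum(y)              # integrator stages
    y = y[::R]                        # decimation
    for _ in range(N):
        y = np.diff(y, prepend=0.0)   # comb stages: y[n] - y[n-1]
    return y / float(R) ** N

t = np.arange(4096)
x = np.sin(2 * np.pi * t / 512)       # slow sine, well below the new Nyquist
print(cic_decimate(x)[4:8])           # passes with ~unity gain after the transient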
 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 73
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #34 on: November 17, 2020, 12:54:57 am »
A suggestion: replace the barrel connector (for power, I assume) and the USB type A receptacle with two USB-C female receptacles. Both USB-C connectors should support PD (power delivery), allowing up to 20 V @ 5 A to be sunk through either connector. This assumes the power draw of your project is <= 100 W. If the power draw is <= 60 W then any compliant USB-C cable could be used to supply power, and if it is <= 45 W then a product like the Mophie USB-C 3XL battery could be used to make the 'scope portable.

Dual role power (DRP) would also be desirable, so that if a USB key is connected to either USB-C port it could source 5 V at around 1 A. A USB-C (M) to USB-A (F) adapter or short cable could be supplied with the 'scope for backward compatibility. I guess most folks interested in buying this 'scope will own one or more USB-C power adapters, so it frees the OP from needing to provide one (so the price should go down). Many significant semiconductor manufacturers have USB-C offerings (ICs) with evaluation boards available (but not many eval boards do DRP).
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16628
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #35 on: November 17, 2020, 03:18:37 am »
Quote from: nctnico
Personally I don't have a real need for high waveform update rates.

I don't recall any discussions here about waveforms/sec, waveform record/playback, etc.

I remember a lot of heated discussions about things like FFT and serial decoders.

 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16628
  • Country: 00
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #36 on: November 17, 2020, 03:27:56 am »
Quote from: dougg
A suggestion: replace the barrel connector (for power, I assume) and the USB type A receptacle with two USB-C female receptacles. Both USB-C connectors should support PD (power delivery), allowing up to 20 V @ 5 A to be sunk through either connector. <snip> If the power draw is <= 45 W then a product like the Mophie USB-C 3XL battery could be used to make the 'scope portable.

(Seen from another perspective)

You mentioned adding a battery to this, but that means:
a) Extra design work
b) A lot of charging circuitry on the PCB
c) A battery compartment/connector
d) A lot of safety concerns
e) Higher price
f) Bigger size/extra weight

Making it work with suitably rated power banks makes a lot more sense.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #37 on: November 17, 2020, 06:42:47 am »
The circuitry required to manage a battery pack would be absolutely trivial compared to what has already been achieved here. This is a well-developed area; every laptop for at least the last 15 years has mastered the handling of a Li-Ion battery pack.

For what it's worth, I have not been impressed with USB-C; my work laptop has it and I have to use dongles for everything. The cables are more fragile and more expensive than USB-3, and the standard is still a mess after all this time - IMO it tries to be everything to everybody, and the result is just too complex. I have never been a fan of using USB for power delivery; a dedicated DC power jack is much nicer.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6574
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #38 on: November 17, 2020, 08:19:06 am »
Quote from: nctnico
IMHO you are at a crossroads where you either choose to implement a high update rate with poor analysis features and few people able to work on it (coding HDL), or a lower update rate with lots of analysis features and many people able to work on it (using OpenCL or even Python extensions). <snip>

Quote from: tom66
But it would be good to ask: do people really, genuinely benefit from 100k waves/sec? I regard intensity grading as a "must have", so the product absolutely will have that, but is 30k waves/sec "good enough" for almost all uses, such that potential users would not notice the difference? <snip>

To get the discussion back on track, let me chime in on some of the questions here.

For the purposes of a nice intensity/colour-graded waveform display, a very high update rate is a game of diminishing returns. Basically, if you look at, say, a 10 MHz carrier AM-modulated at 100 Hz, you will need a few thousand wfms/s to make the display smooth, so it shows no moiré effect. Beyond that, if you are watching something interactively, it is already faster than the human eye and appears fully real-time to us.

I consider re-trigger time important, but I could live with a 20-30 us re-trigger time (30-50 kwfms/s) if sequence mode were much faster, on the level of 1-2 us. In that mode no data processing is performed, so that should be reachable. PicoScopes are like that: they capture the full data in a buffer, but send fast screen updates of decimated data for display, with the full data following later.

There are many scopes, even cheap ones, that do a great job as an interactive instrument. What would be groundbreaking is an open source analytical scope.
 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #39 on: November 17, 2020, 08:22:40 am »
Why would I use USB-C?

- Power adapters are more expensive and less common
- The connector is more fragile and expensive
- I don't need the data connection back (who needs their widget to talk to their power supply?)
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design  (present supported range is 7V - 15V)

The plan for the power supply of the next-generation product was to have everything sitting at VBAT (3.4 V ~ 4.2 V), with all DC-DC converters running off that. That's within the range where a buck/LDO stage can give a 3.2 V rail (good enough for 3.3 V rated devices) and a boost stage can provide 5 V.

Now, I was going to design it so that if you connected a 5 V source it could charge the battery, meaning a simple USB type A to barrel jack cable could be supplied. That would be inexpensive enough, because we still have a buck input stage for single-cell Li-Ion charging (I'm keen to avoid multi-cell designs), but at a maximum 'safe' limit of 5 W from such a source I doubt the scope could run without slowly discharging its battery.

When charging the battery this device could pull up to 45 W (36 W charging + 9 W application) - that's roughly a 1C charge rate for a 10000 mAh cell, as the quick check below shows.
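The arithmetic, assuming a 3.7 V nominal cell voltage and ignoring converter losses:

Code: [Select]
def charge_c_rate(charge_power_w, capacity_mah, v_nominal=3.7):
    """Back-of-envelope C-rate for pushing charge_power_w into a single
    Li-Ion cell (nominal voltage assumed, losses ignored)."""
    amps = charge_power_w / v_nominal
    return amps / (capacity_mah / 1000.0)

print(charge_c_rate(36, 10000))   # ~0.97 -> roughly the 1C quoted above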
« Last Edit: November 17, 2020, 08:24:37 am by tom66 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26873
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #40 on: November 17, 2020, 10:56:11 am »
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. Looking at the Analog Devices DSO front-end parts, it seems these make life a lot easier.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #41 on: November 17, 2020, 11:21:36 am »
Quote from: nctnico
Just wondering... has any work been done on an analog front-end? I have done some work on this in the past; I can dig it up if there is interest. <snip>

I've got a concept and LTSpice simulation of the attenuator and pre-amp side, but nothing has been tested for real or laid out.  It would be useful to have an experienced analog engineer look at this - I know enough to be dangerous but that's about it.

At the time I was looking at a relay-based attenuator for the -40 dB step, and then a gain/attenuator block covering +6 dB to -38 dB (I think it was a TI part, I'll dig it out), which would get you from +6 dB of gain down to -78 dB of attenuation - enough to cope with the typical demands of a scope (1 mV/div to 10 V/div). The range logic would look something like the sketch below.
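Illustrative Python only - the exact gain steps depend on the parts eventually chosen:

Code: [Select]
def afe_setting(target_db, vga_max=6.0, vga_min=-38.0, relay_db=-40.0):
    """Map a requested overall gain (dB) onto the scheme above: an optional
    -40 dB relay step plus a +6..-38 dB gain/attenuator block, for a total
    span of +6 dB down to -78 dB. Returns (relay_engaged, vga_gain_db)."""
    if not (vga_min + relay_db) <= target_db <= vga_max:
        raise ValueError("outside the +6 .. -78 dB span")
    if target_db >= vga_min:
        return (False, target_db)          # the VGA alone covers it
    return (True, target_db - relay_db)    # fold the -40 dB relay step in

print(afe_setting(0))     # (False, 0)
print(afe_setting(-60))   # (True, -20): relay in, VGA at -20 dB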

I was also looking into how to do the 20 MHz bandwidth limit, and whether it would be practical to vary the varicap voltage with some PWM channels on an MCU to fine-tune the bandwidth limits.
« Last Edit: November 17, 2020, 11:23:37 am by tom66 »
 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #42 on: November 17, 2020, 11:33:25 am »
The existing AFE is purely AC-coupled - schematic attached. The ADC needs about 1 Vp-p of input to reach full-scale code.

Presently the ADC diff pairs go over SATA cables, which are cheap and (usually) shielded.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 4306
  • Country: it
  • EE meid in Itali
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #43 on: November 17, 2020, 11:39:02 am »
Quote from: nctnico
Personally I don't have a real need for high waveform update rates. Deep memory is useful. <snip>

Ditto. We normally already have a high waveform-rate scope on our benches.
I believe many of us have (or would buy) a USB/PC scope to cover applications where deep memory is needed.

For a project like this I would put all my poker chips on getting as much memory as possible. All in.
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 
The following users thanked this post: 2N3055

Offline Circlotron

  • Super Contributor
  • ***
  • Posts: 3176
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #44 on: November 17, 2020, 12:04:20 pm »
Quote from: tom66
Over the past year and a half I have been working on a little hobby project to develop a decent high performance oscilloscope, with the intention for this to be an open source project. By 'decent' I class this as something that could compete with the likes of the lower-end digital phosphor/intensity graded scopes e.g. Rigol DS1000Z, Siglent SDS1104X-E, Keysight DSOX1000, and so on. <snip> I'll welcome any suggestions.
Sounds reminiscent of a newsgroup posting by a certain fellow from Finland some years ago... Let's hope it becomes as big.  :-+
 

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #45 on: November 17, 2020, 12:12:48 pm »
Quote from: tom66
Shifting the dots is computationally simple, even with sin(x)/x (which is not yet implemented). <snip>
Quote from: Someone
Note that triggers in modern scopes are aligned more finely than the sample rate (interpolation), with the reconstruction and interpolation methods also dependent on the front-end characteristics. Expect the rendering speed to collapse in a software/GPU approach once you put in that phase alignment and sinc interpolation.

In better news, if you're going down an all-digital trigger route (probably a good idea), the vast majority of "trigger" types are simply combinations of two thresholds and a one-shot timer, which are easy enough. <snip>

As I understand it - and DSP gurus, please correct me if I am wrong - if the front-end has a fixed impulse response (which it should, if designed correctly), and you get a trigger at sample value X but intended the trigger to be at value Y, then you can calculate the real time offset from the difference between these samples, which can be looked up in a trivial 8-bit LUT (for an 8-bit ADC). It's reasonably likely the LUT would be device-dependent for the best accuracy (as the filters vary slightly in bandwidth), but this could be part of the calibration process, with the data burned into the 1-Wire EEPROM or MCU. The sketch below shows the general idea.
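A NumPy sketch of that idea, under the assumption that calibration gives us a finely sampled, monotonic step response of the front end (the names here are illustrative):

Code: [Select]
import numpy as np

def build_trigger_lut(edge_t_ns, edge_v_codes):
    """Calibration time: for each of the 256 ADC codes, store the time at
    which the (monotonic) edge crosses that code, by inverting v(t)."""
    codes = np.arange(256)
    return np.interp(codes, edge_v_codes, edge_t_ns)

def trigger_time_offset(lut, got_code, wanted_level):
    """Run time: sub-sample correction between the sample we actually
    caught (value X) and the level we wanted to trigger at (value Y)."""
    return lut[got_code] - lut[wanted_level]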

In any case there is a nice trade-off as the timebase drops: you are processing fewer and fewer samples. So, while you might have to do sin(x)/x interpolation on that data and more complex reconstruction at trigger points to reduce jitter, a sin(x)/x interpolator will have most of its input data zeroed when doing 8x interpolation, so the read memory bandwidth falls (see the sketch below). I've still yet to decide whether the sin(x)/x is best done on the FPGA side or on the RasPi - if it's done on the FPGA then you're piping extra samples over the CSI bus, which is bandwidth-constrained, although not particularly so at the faster timebases, so it may not be an issue. The FPGA has a really nice DSP fabric we might use for this purpose.
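For reference, the zero-stuffed interpolator looks like this in NumPy (a model of the maths only, not of a DSP-fabric implementation):

Code: [Select]
import numpy as np

def sinc_upsample(x, L=8, half_taps=32):
    """Zero-stuff by L, then filter with a windowed-sinc FIR. The kernel is
    1 at n=0 and 0 at every other multiple of L, so the original samples
    pass through unchanged; with 7 of every 8 FIR inputs being zero, a
    polyphase form only ever touches the non-zero taps - the memory
    bandwidth saving mentioned above."""
    n = np.arange(-half_taps, half_taps + 1)
    h = np.sinc(n / L) * np.hamming(len(n))   # interpolating kernel
    up = np.zeros(len(x) * L)
    up[::L] = x                               # zero-stuffing
    return np.convolve(up, h, mode="same")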

I don't think it will be computationally practical to do filtering or phase correction on the raw samples in the digital domain. While there are DSP blocks in the Zynq, they are limited to an Fmax of around 300 MHz, which would require a considerably complex multiplexing system to run a filter at the full 1 GSa/s - and that would only give you ~60 taps, which isn't hugely useful except for a very gentle rolloff.

I think you could do more if the filters are run on post-processed, triggered data. Total numeric 'capacity' is approximately 300 MHz * 210 DSPs = 63 GMAC/s (see the arithmetic below). But at that point it comes down to how fast you can get data through the DSP blocks, and they are spread across the fabric, which requires very careful design when crossing columns as that's where the routing resource is more constrained. I'd also be curious what the power consumption of the Zynq looks like with 63 GMAC/s of number crunching going on - it can't be low. I hate fans with a passion. This scope will be completely fanless; it will heatsink everything into the extruded aluminum case.
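Spelling that arithmetic out (the DSP count and Fmax are the figures quoted above):

Code: [Select]
dsp_fmax = 300e6    # per-DSP clock ceiling, Hz
n_dsp    = 210      # DSP slices available in this class of part
fs       = 1e9      # sample rate, Sa/s

total_macs = dsp_fmax * n_dsp    # 6.3e10 -> the 63 GMAC/s figure above
print(total_macs / fs)           # ~63 taps sustainable at the full rate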

Regarding digital (serial) triggers, my thinking was along the lines of a small configurable FSM that can use the digital comparator outputs from any channel. The FSM would have a number of programmable states and would generate a trigger pulse when it reaches the correct end state. This is itself a big project - it would need to be designed, simulated and tested - hence why I have stuck with a fairly simple edge trigger (the pulse width, slope, runt and timeout triggers are fairly trivial, and the core technically supports them, although they are unimplemented in software for now). The FSM for complex triggers could have a fairly large 'program', and the program could be computed dynamically: e.g. for an I2C address trigger, it would start with a match for a start condition, then look for the relevant rising edges on each clock and compare SDA at that cycle. The Python application would be able to customise the sequence of states the FSM must pass through to generate a trigger, in a very basic assembly-like language - see the sketch below.
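How an I2C address trigger might compile down, as a purely hypothetical encoding of that 'program':

Code: [Select]
def i2c_address_trigger_program(addr7):
    """Compile an I2C address trigger into a linear list of FSM states,
    one condition per state; in hardware, a mismatch at any state would
    reset the FSM to state 0 rather than advance it."""
    program = ["match START: SDA falls while SCL is high"]
    for i in range(6, -1, -1):                 # MSB-first address bits
        program.append(f"on SCL rising edge, require SDA == {(addr7 >> i) & 1}")
    program.append("emit trigger pulse")
    return program

for state in i2c_address_trigger_program(0x50):   # 0x50 = 1010000b
    print(state)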

Serial decode itself would likely use sigrok, though its pure-Python implementation may cause performance issues, in which case a compiled RPython variant may be usable instead. There is some advantage to doing this on the Zynq in spare cycles if using e.g. a 7020, with the FPGA accelerating the level-comparison stage so the ARM just needs to shift bits out of a register and decide what to do with each data bit.
« Last Edit: November 17, 2020, 12:17:32 pm by tom66 »
 

Offline capt bullshot

  • Super Contributor
  • ***
  • Posts: 3033
  • Country: de
    • Mostly useless stuff, but nice to have: wunderkis.de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #46 on: November 17, 2020, 02:06:42 pm »
Nothing to say yet, but joining this quite interesting thread by leaving a post.
BTW, to OP: great work.
Safety devices hinder evolution
 
The following users thanked this post: tom66

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2729
  • Country: ca
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #47 on: November 17, 2020, 02:52:47 pm »
Quote from: tom66
Why would I use USB-C?
Because it's super convenient. You have a single power supply that can provide any voltage *you* (as the designer) want, rather than whatever the supply happens to offer, and it's fairly easy to implement. Cypress has a fully standalone controller chip which handles everything: you use a few resistors to tell it which voltage you need, and it gives you two outputs - one will be in the range you've set up with the resistors, and the other is a fallback that will be 5 V if the supply can't provide what you asked for, so you can indicate to the user that the wrong supply is connected. Or you can use an STM32G0 MCU, which has integrated USB-C PD PHY peripherals. USB-C PD is specifically designed to follow a "waterfall" model: if a supply supports a higher voltage, it must also support all the standard lower voltages. This is why you can request, say, 9 V at 3 A, and any PSU rated for more than 27 W will be guaranteed to work with your device and provide that 9 V, regardless of what higher voltages it supports.

Quote from: tom66
- Power adapters are more expensive and less common
Really? Everybody's got one by now with any smartphone purchased in the last 2-3 years. They are also used with many laptops - those are even better.

Quote from: tom66
- The connector is more fragile and expensive
No, it's not more fragile. And not expensive either, if you know where to look. Besides - did I just see someone complaining about a $1 part in a $200+ BOM?

Quote from: tom66
- I don't need the data connection back (who needs their widget to talk to their power supply?)
That's fine - you can use a power-only connector.

Quote from: tom66
- I need to support a wider range of voltages e.g. 5V to 20V input which complicates the power converter design (present supported range is 7V - 15V)
No you don't - see my explanation above.
« Last Edit: November 17, 2020, 03:19:07 pm by asmi »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26873
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #48 on: November 17, 2020, 03:31:46 pm »
Still, isn't USB-C adding more complexity to an already complex project? I recall Dave2 having quite a bit of difficulty implementing USB-C power for Dave's new power supply.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: ogden

Online tom66 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6686
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #49 on: November 17, 2020, 03:34:47 pm »
asmi, could you link to that Cypress solution?  I will give it a look, but it does (as nctnico says) seem like added complexity for little or no benefit.

In my fairly modern home, with a mix of Android and iOS devices, I have one USB-C cable and zero USB-C power supplies. My laptop (a few years old, not ultrabook format) still uses a barrel jack connector; my girlfriend's laptop is the same and only a year old. I've no doubt that people have power supplies with Type-C, but barrel-jack connectors are more common, and assuming this device ships with a power adapter, it won't be too expensive to source a 36 W/48 W 12 V AC adapter, whereas a USB Type-C adapter will almost certainly cost more.

And there will be that not-insignificant group of people who wonder "why does it not work with my cheap 5 W smartphone charger?" When you have to qualify it with things like "only use a 45 W or higher rated adapter", the search space of usable adapters drops considerably.
 

