Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas  (Read 68899 times)


Offline Marco

  • Super Contributor
  • ***
  • Posts: 6694
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #175 on: December 05, 2020, 04:33:43 pm »
All the FPGA should be doing is digital phosphor accumulation.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #176 on: December 05, 2020, 04:38:52 pm »
All the FPGA should be doing is digital phosphor accumulation.
No, not at this point in the project. This can be done in software just fine.

If you look at Siglent's history you'll notice they have rewritten their oscilloscope firmware at least three times from scratch before getting where they are now. Creating oscilloscope firmware is hard and it is super easy to paint yourself into a corner. The right approach is to get the basic framework set up first (going through several iterations for sure) and then optimise. IMHO the value of this project is going to be in the flexibility to make changes / add new features. If people want crazy high update rates they can buy an existing scope and be done with it.

For example: if the open source platform allows adding a Python or C/C++ based protocol decoder in a couple of hours, then that is a killer feature. Especially if the development environment already runs on the oscilloscope, so no software installation for cross compiling or whatever is needed. If OTOH you'd need to get a Vivado license first and spend a couple of days understanding the FPGA code, then nobody will want to do this.
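To illustrate how low that barrier could be, here is a sketch of what a drop-in decoder might look like. All names here (UartDecoder, decode) are hypothetical - this is not the project's actual API, just a plausible shape for a software-side decoder that works on captured logic samples:

```python
# Hypothetical protocol decoder plugin -- illustrative only, not the
# project's real API. Decodes 8N1 UART from a stream of logic levels.

class UartDecoder:
    """Decode a stream of 0/1 logic samples into UART bytes (8N1, idle-high).

    Deliberately simplified: no framing-error or stop-bit checking.
    """

    def __init__(self, sample_rate_hz, baud_rate):
        self.samples_per_bit = sample_rate_hz / baud_rate

    def decode(self, samples):
        """samples: sequence of 0/1 logic levels. Returns decoded bytes."""
        out = []
        i, n = 0, len(samples)
        while i < n - 1:
            # Falling edge marks a start bit.
            if samples[i] == 1 and samples[i + 1] == 0:
                byte = 0
                for bit in range(8):
                    # Sample each data bit at its centre, LSB first.
                    pos = int(i + 1 + (1.5 + bit) * self.samples_per_bit)
                    if pos >= n:
                        return out
                    byte |= samples[pos] << bit
                out.append(byte)
                # Skip to the middle of the stop bit before rescanning.
                i = int(i + 1 + 9.5 * self.samples_per_bit)
            else:
                i += 1
        return out
```

Something of this size, written and tested on the scope itself, is the "couple of hours" scenario being argued for.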

A good example is how the Tektronix logic analyser software can be extended by decoders: https://xdevs.com/guide/tla_spi/
« Last Edit: December 05, 2020, 04:43:28 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: nuno, JPortici

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #177 on: December 05, 2020, 04:39:45 pm »
It would of course be very interesting to see what you come up with nctnico.  In the meantime I am focused on the digital systems engineering parts of this project.  I am presently designing the render-acquisition engine which would replace the existing render engine in software on the Pi.
I'd advise against that. With the rendering engine fixed inside the FPGA you'll lose a lot of freedom in this part. Lecroy scopes do all their rendering in software to give them maximum flexibility for analysis. A better way would be to finalise the rendering in software first and then see what can be optimised, with the FPGA as the very last resort. IMHO it would be a mistake to put the rendering inside the FPGA because it will fix a lot of functionality in place and lock many people out of being able to help improve this design.

The problem with software rendering is that you can't do as much with software as you can with dedicated hardware blocks. The present rendering engine achieves ~23k wfm/s and is about as optimised as you can get on a Raspberry Pi ARM processor, taking maximum advantage of cache design and hardware hacks.  And that is without vector rendering, which currently roughly halves performance.

An FPGA rendering engine should easily be able to achieve over 200k wfm/s, and while raw waveforms rendered per second is a case of diminishing returns (there is probably not much benefit to the 1 million waves/sec scopes from Keysight - feel free to disagree with me here), there is still some advantage to achieving e.g. 100k wfm/s, which is where many US$900 - 1500 oscilloscopes seem to be benchmarking.

This also frees the ARM on the Pi for more useful things - while 100k wfm/s might theoretically be possible if all four ARM cores were busy, would that be a good thing? The UI would become sluggish, and features like serial decode would in all likelihood depend on the ARM processor too, and therefore suffer in performance.

As for maintainability, that shouldn't be as much of a concern. Sure, it is true that the raw waveform engine may not be maintained as much (it is a 'get it right and ship' thing in my mind), but the rest of the UI and application will be in userspace, including cursors, graticule, that sort of thing.  In fact, it is likely that all the FPGA renderer will do is pass out a rendered image of the waveform for a given channel which the Pi or other applications processor can plot at any desired location.  Essentially, as Marco states, the FPGA is doing the digital phosphor part which is the thing that needs to be fast.  The applications software will always have access to the waveform data too.
« Last Edit: December 05, 2020, 04:41:38 pm by tom66 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #178 on: December 05, 2020, 04:44:57 pm »
Trust me, nobody cares about waveforms per second! It is not a good idea to pursue a crazy high number just for the sake of achieving it. There are enough ready-made products out there for sale for the waveforms/s aficionado. IIRC the Lecroy Wavepro 7k series tops out at a couple of thousand without any analysis functions enabled.

You have to define the target audience. What if someone has a great idea on how to do color grading differently, but that is 'fixed' inside the FPGA so there is no way to change it? Also, with rendering fixed inside the FPGA you basically end up with math traces for anything else, and you can't easily build a waveform processing pipeline (like GStreamer does, for example).

I'm 100% sure that the software and GPU approach offers the best flexibility and is the way of the future (also for oscilloscope manufacturers in general). A high end version could have a PCI Express slot which accepts a high end video card to do the display processing. The waveforms/s rate goes up immediately and doesn't take any extra development effort. Again, look at the people upgrading their Lecroy Wavepro 7k series with high end video cards.
« Last Edit: December 05, 2020, 04:55:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Zucca

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6694
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #179 on: December 05, 2020, 04:52:58 pm »
No, not at this point in the project.
That makes sense for a minimum functional prototype to get some hype going; high capture rate digital phosphor and event detection are high end features. Budgeting some room/memory for them in the FPGA costs very little time though.
« Last Edit: December 05, 2020, 04:56:02 pm by Marco »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #180 on: December 05, 2020, 04:57:27 pm »
No, not at this point in the project.
For a minimum functional prototype to get some hype going that makes sense, high capture rate digital phosphor is a high end feature. Budgeting some room/memory for it in the FPGA costs very little time though.
But it is just one trick you don't really need, and it seriously hampers the rest of the functionality. Look at how limited the Keysight oscilloscopes are; basically one-trick ponies. If you use a trigger then the chance of capturing a specific event is 100% and you don't need to stare at the screen without blinking. At this moment time is better spent on extending the trigger system so it can trigger on specific features and sequences of a signal.
« Last Edit: December 05, 2020, 05:14:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6694
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #181 on: December 05, 2020, 05:44:59 pm »
Digital phosphor is more for getting a general idea of how the circuit is behaving; using it to detect by eye whether a signal goes beyond bounds seems kinda silly. High capture rates are also valuable for fault detection and also benefit from being implemented in the FPGA. The two features are orthogonal ... but for a minimum prototype the FPGA implementation of both could be delayed, even if the latter has higher priority.
« Last Edit: December 05, 2020, 05:48:34 pm by Marco »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #182 on: December 05, 2020, 06:23:23 pm »
The point is that you can do 'digital phosphor' just fine in software; doing it in the FPGA right now just hampers progress of the project and doesn't add much in terms of usefulness. Look at high end signal analysis oscilloscopes; none of them have high waveform update rates. It is just that Keysight has been hyping this as a useful feature on their lower end gear while it isn't. Also realise that the highest waveform update rate happens at a very specific time/div setting only. A high update rate has never helped me solve a problem. Deep memory and versatile trigger methods are much more useful.
« Last Edit: December 05, 2020, 06:26:59 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #183 on: December 05, 2020, 06:38:57 pm »
Well, there's always the option for both.  The present Python application architecture has support for different rendering engines - ArmWave is just the only one presently implemented but FPGAWave would also be an option.  In that case, the user would have an option to select their preference, and the Zynq SoC would select the required data stream and mode for the CSI transfer engine.

Personally, one of the benefits I find from high waveform render rates is that jitter and ringing are more clearly understandable - I know how frequent an event is.

Also - the peak wfm/s rate is one measure of performance, but another is how many intensity-graded levels the display achieves.  To achieve at least 256 you need a minimum of 256 * 60 = 15.3k wfm/s, but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer for digital phosphor to avoid too much stairstepping, implying a necessarily higher capture/render rate.  More so, potentially, at higher zoom levels where there is much more than one wave point per displayed X column.
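That 256 * 60 figure is easy to sanity-check; here's the arithmetic as a throwaway snippet (assuming a 60 Hz display refresh and the worst case of one hit per waveform per pixel column):

```python
def min_wfm_rate(levels, refresh_hz=60):
    """Waveforms/s needed so each displayed frame can accumulate up to
    `levels` distinct hit counts per pixel (worst case: one hit per
    waveform)."""
    return levels * refresh_hz

print(min_wfm_rate(256))    # 15360, i.e. the ~15.3k wfm/s quoted above
print(min_wfm_rate(4096))   # 245760 if you want a full 12-bit range per frame
```

The 12-bit case shows why a deeper accumulation buffer implies a higher capture/render rate if it is to be filled within one refresh.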
« Last Edit: December 05, 2020, 06:41:39 pm by tom66 »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6694
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #184 on: December 05, 2020, 06:49:17 pm »
When you're initiating the capture of a waveform based on stuff happening hundreds/thousands of samples after a simple/protocol trigger, I'm not sure calling it flexible triggering does that justice.

It's high capture rate pass/fail testing. A feature which can really still wait for a minimum viable prototype, just stick to simple triggers for the moment.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #185 on: December 05, 2020, 07:30:53 pm »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all, a TFT panel can use 8 bits at most, and a portion of those bits are lost to gamma and intensity correction. Secondly, you can't see very dark colors, so the intensity has to start somewhere halfway. So at the hardware level you are limited to about 100 levels. And then there is the limit of what the human eye can distinguish. If you have 32 or maybe 64 different levels you have more than enough to draw a meaningful picture. However, intensity grading just mimics analog oscilloscope behaviour; it doesn't add much in terms of usefulness. Color grading or reverse intensity (see my RTM3000 review) are far more useful for looking at a signal compared to 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative; with just 8 levels there will be a clear binning effect of how often a trace hits a spot.
« Last Edit: December 05, 2020, 08:17:24 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, JamesLynton, JPortici

Offline JamesLynton

  • Contributor
  • Posts: 35
  • Country: gb
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #186 on: December 05, 2020, 08:53:41 pm »
Very sensible on the intensity binning idea, I like that idea *a lot* :)
It may also help to have an adjustable +/- exponential tracking curve to assign the binning transition spread on the fly, when you are trying to tease out 'data' that frequently isn't quite statistically linear in its repetition rate.

Also, awesome project. After being rather disappointed by the UI, features & performance of all the commercial PC based dongle scopes I've seen so far, this immediately looks really nice.
« Last Edit: December 05, 2020, 08:56:24 pm by JamesLynton »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #187 on: December 06, 2020, 09:20:42 am »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all a TFT panel can use 8 bits at most however a portion those bits are lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC, which is supplied with gamma reference levels. The DACs for each pixel column interpolate (linearly, but it's a close approximation) between these levels.  The resulting effect is that all 256 codes have a useful and distinct output and the output is linear.   This is the slight absurdity with VGA feeding digital LCD panels: the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.

Secondly you can't see very dark colors so the intensity has to start somewhere half way. So at the hardware level you are limited to 100 levels. And then there is the limit of what the human eye can distinguish.
If you have 32 or maybe 64 different levels you have more than enough to draw a meaningfull picture. However, intensity grading is just mimicing analog oscilloscope behaviour; it doesn't add much in terms of usefullness. Color grading or reverse intensity (see my RTM3000 review) are far more usefull to look at a signal compared to 'simple' intensity grading. Having 8 levels of intensity grading is likely to be more informative in terms of providing meaningfull information; with just 8 levels there will be a clear binning effect of how often a trace hits a spot.

Many people would say the human eye can distinguish at least 10 bits of resolution, possibly more.   Obviously not all that useful on an 8 bit panel, but it is a bit of a fallacy to say the human eye is the limit here.   It is true that totally dark colours are not as useful, but this is what the intensity control on most oscilloscopes does - it adjusts the minimum displayed brightness.  It is still probably fair to say that at least 200 of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values then saturate.  But you need the intensity bins to be deep enough to store this data in order to make use of this function.

I would agree that colour grading is really useful, and perhaps more useful than regular intensity grading, though it depends on the application.  Really what we're looking at here is having enough resolution in the internal buffers to then use this data, either with simple intensity grading or with arbitrary colour grading. The present ArmWave renderer supports regular intensity grading, inverted, and rainbow/palette modes.

Edit: fixed typo
« Last Edit: December 06, 2020, 10:46:21 am by tom66 »
 
The following users thanked this post: rf-loop

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #188 on: December 06, 2020, 10:58:42 am »
Also - the peak wfm/s rate is one measure of performance but the other is how many intensity-graded levels the display achieves.  To achieve at least 256 then you need a minimum of 256*60 = 15.3kwfms/s but you might want to apply gamma correction and use a 10-bit or 12-bit accumulation buffer
256 intensity levels is another nice but otherwise utterly meaningless marketing number. First of all a TFT panel can use 8 bits at most however a portion those bits are lost to gamma and intensity correction.

That's not really true - not on any modern TFT LCD at least.  Gamma correction is done in the analogue DAC that supplied with gamma reference levels. The DACs for each pixel column interpolate (linear, but it's a close approximation) between these channels.  The resulting effect is that all 256 codes have a useful and distinct output and the output is linear.   This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.   

A typical big LCD panel has 16 gamma channels, 8 for each drive polarity.  Cheaper panels use 6 or 8 channels, with dithering used to interpolate further between these levels.
Well, I'm doing a lot with TFT panels in all shapes and sizes, but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller that does gamma correction. But what goes into the panel is still 8 bit.

And there is also a difference between being able to see different shades and how many different shades you can actually interpret. Sometimes less is more. If you look at the Agilent 54835A for example, you'll see that the color grading uses binning. Every color is assigned a specific bin which says how many waveforms have been captured in that bin. IMHO you have to be very careful not to hunt for eye candy (or worse: analog scope emulation, which hides part of the signal by definition) but think about ways to show a signal on screen that provide meaningful information about the signal.
« Last Edit: December 06, 2020, 11:48:44 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #189 on: December 06, 2020, 02:28:50 pm »
Many inexpensive panels generate the gamma voltages internally in the source drivers to avoid the cost of external references, but this absolutely is a thing:

https://www.ti.com/lit/ds/symlink/buf12800.pdf  as an example.  When I was a student I made fair bank replacing AS15-F gamma reference ICs on T-con boards for LCD televisions. They would commonly fail, causing a badly distorted or inverted image.

The voltages steer the output codes of the DAC.  The panel data is indeed 8-bit input and the DAC has only 256 valid output codes, but the output is nonlinear.  An additional signal from the T-con flips the output from the 7.5V - 15V range to 7.5V - 0V for pixel inversion (maintaining zero net bias). This is common amongst most LCD panels, although there are some older/cheaper panels that use 6-bit DACs with looser gamma correction and dithering.

You could do an experiment:  put a 256-level gradient on a display of choice; provided it is wide enough, you should be able to see distinct stair-stepped bands.  If the gradient has nonlinear steps, then the gamma correction is done before the DACs.  If it has linear bands, then no gamma correction is applied to the digital output.
« Last Edit: December 06, 2020, 02:33:16 pm by tom66 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #190 on: December 06, 2020, 03:11:53 pm »
It is still probably fair to say at least 200 codes of the displayed codes are useful.  You could always turn up the intensity control to see those darker values, even if the brighter values now saturate.
The problem with this approach is that you basically are displaying something which is not quantifiable. When testing oscilloscopes people often use AM modulated signals to create a pretty picture. But that picture doesn't say anything about the signal. OTOH if you use fixed level binning then the number of visible levels actually says something about the AM modulation depth.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1132
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #191 on: December 06, 2020, 04:05:47 pm »
Quote
Well, I'm doing a lot with TFT panels in all shapes and sizes but I have never seen one which has gamma correction inside the panel. The panel typically uses 8 bit LVDS data which comes from a controller which does gamma correction. But what goes into the panel is still 8 bit.

If a panel takes an input signal with a color depth of only 8 bits per channel, then it is necessary that the signal is gamma-encoded (i.e. not linear), otherwise a single quantization step would be clearly visible in dark regions and one could not display smooth gradients. Human vision is not linear. Uniformly spaced luminance is not perceptually uniform either; human vision can distinguish smaller luminance steps in dark regions than in bright regions.

Regarding discernible shades of gray: human vision can adapt to several decades of luminance (e.g. outdoor bright sunlight vs. indoor candle light), but at a particular adaptation state it cannot distinguish more than about 100 gray levels (with perceptually uniform spacing from black to white). If I wanted to be able to distinguish adjacent bins clearly, then I'd not use more than 32 bins.

Quote
This is the slight absurdity with VGA feeding digital LCD panels:  the VGA signal is gamma corrected, which is reversed by the LCD controller, and then a different, opposite gamma correction curve is applied.

The aim is for the display to output linear luminance. So the LCD column driver needs to undo the gamma encoding of the input signal, and additionally compensate for any non-linearity of the LC cell's voltage-to-optical-transmittance transfer function.

Instead of using a non-linear DAC, this could also be done with a LUT in the digital domain. Then the DAC could be linear, but it would need more bits (and most of the levels would be unused).
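To make that concrete, a sketch of the digital-domain LUT variant: an 8-bit gamma-encoded input expanded onto a wider linear DAC. The 2.2 exponent and 12-bit DAC width here are assumptions for illustration; real panels calibrate this per design.

```python
# LUT-in-the-digital-domain sketch: decode 8-bit gamma-encoded codes
# onto a linear 12-bit DAC. Gamma 2.2 and 12 bits are assumed values.

GAMMA = 2.2
DAC_BITS = 12

# Precomputed decode table: input code 0..255 -> linear DAC code 0..4095.
LUT = [round(((code / 255.0) ** GAMMA) * (2 ** DAC_BITS - 1))
       for code in range(256)]

def to_linear_dac(code8):
    """Decode an 8-bit gamma-encoded value to a linear 12-bit DAC code."""
    return LUT[code8]
```

Note that only 256 of the 4096 DAC codes are ever produced, and they cluster at the dark end - exactly the "most of the levels were unused" point above.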
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #192 on: December 06, 2020, 05:50:45 pm »
FWIW, in grad school I created 256 step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780. The steps were not visible.

There is a lot of folklore about the sensitivity of the human eye which may be readily disproved by simple experiment. While the eye is very sensitive to color, that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6454
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #193 on: December 06, 2020, 06:52:22 pm »
FWIW In grad school I created  256 step color and gray scale plots on an $80K Gould-Dianza graphics system attached to a VAX 11/780.  The steps were not visible.

There is a lot of folk lore about the sensitivity of the human eye which may be readily disproved by simple experiment.  While the eye is very sensitive to color,  that sensitivity does not extend to the intensity of arbitrary color scales.

Have Fun!
Reg

You are very correct on this. That is why all kinds of colour grading displays were invented.

Nico is right: if you're displaying pixel retrace frequency/distribution and encoding it in pixel intensity, there has to be compression of all values, from a minimum that is clearly visible (but obviously dimmed) up to full pixel intensity. So there is obvious nothing, something clearly visible meant to represent just one repetition, and maximum brightness for pixels that get lit up all the time.  You cannot start from 0, and it probably has to be nonlinear. What people are used to is simply the response characteristic of phosphor, which compresses on the high side: once you get bright enough it won't get brighter, the dot just starts to bloom.

I also agree with Nico about colour grading. I cannot comprehend why more manufacturers don't use reverse grading (to highlight rare events, not frequent ones - you want to see the outliers).

Regards,
 
The following users thanked this post: tom66

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #194 on: December 06, 2020, 09:20:16 pm »
Reverse grading seemed obvious to me.  Hence the present code supports it, although it's not exposed in the UI.

The rendering engine presently has a 16-bit accumulator, as 8-bit was insufficient without saturating arithmetic. In reality I think something like a 12-bit buffer would be sufficient.    The resulting 16-bit values are taken through a palette lookup to produce the final pixel value. So inverting the palette is pretty simple: just flip the table (excluding the zeroth value so you don't write pixels everywhere).
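A minimal sketch of that accumulator-to-palette stage (illustrative only - ArmWave's real data layout and palette shape will differ):

```python
# Map 16-bit hit counts through a 256-entry palette; "reverse grading"
# is the flipped table, keeping entry 0 black so empty pixels stay dark.

ACC_MAX = 0xFFFF            # 16-bit hit accumulator

def make_palette(inverted=False):
    """256-entry intensity table; entry 0 stays 0 for empty pixels."""
    pal = list(range(256))              # plain linear ramp
    if inverted:
        pal = [0] + pal[1:][::-1]       # flip nonzero entries: rare = bright
    return pal

def shade(acc_value, palette):
    """Map a 16-bit hit count to an 8-bit display intensity."""
    if acc_value == 0:
        return 0
    idx = 1 + (acc_value * 255) // (ACC_MAX + 1)   # nonzero counts -> bins 1..255
    return palette[min(idx, 255)]
```

The same lookup supports rainbow or custom colour palettes by storing RGB triples instead of single intensities.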

It really depends on what you want to achieve from intensity grading.  I think there's a mix of uses:

- Some users just want more detail than 'hit' or 'not hit', and want the approximate intensity of a pixel to indicate the energy in that area (I suspect this is the primary category of user).  These users expect their DSO to behave roughly the same as every other DSO, although obviously there are opportunities to improve on this behaviour.

- Some users are doing things like eye diagram or jitter analysis, where setting a threshold such that you can say '<10% of events hit this bin' could be useful.  In this case I suspect these users benefit from either reverse intensity grading or rainbow/custom palette grading.

- Others just expect a DSO to behave like an analog scope, especially in XY mode.  I suspect this is a relatively small category of user, and it drives the inclusion of 'variable persistence' modes in most modern oscilloscopes.

« Last Edit: December 06, 2020, 09:25:07 pm by tom66 »
 

Offline tom66Topic starter

  • Super Contributor
  • ***
  • Posts: 6680
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #195 on: December 06, 2020, 09:29:59 pm »
One other thing:-

The present prototype, when in 'Auto' memory depth (currently the only memory depth exposed to the user; it otherwise behaves similarly to the Rigol DS1000Z 'Auto' function), uses all available RAM as a history buffer. With 256MB of RAM at 50ns/div (~23k wfm/s, 610 pts), this gives approximately 17 seconds of history buffer recorded in real time.  In my mind, this is far more useful than any infinite or variable persistence feature, and as far as I can tell only Siglent expose this in normal use - which led to Dave complaining about it because it was turned on by default.    As far as I can see, there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, the instrument will have less record time, but it should always have the amount of memory available that the user requests.

Most people know this function as segmented memory.  The only difference is that it's a continuously active segmented memory function, which adapts to the current settings to make the most of the memory available.  It avoids the headache of pressing the 'STOP' button and missing the trigger by a few milliseconds.

This is one time the user might want to turn down the waveform rate, as e.g. reducing the update rate to 1k wfm/s would increase the recorded time to over 6 minutes.  Giving the user that trade-off is valuable (it is pretty much always found on scopes with segmented memory).  Depending on the future platform choice, I expect a later version of the scope to support at least 1GB of RAM, which would give around 900 Mpts of usable waveform memory.  So at 23k wfm/s, the instrument could record ~1 minute of waveform history and select any one of those timestamped frames or analyse any single capture.
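These figures can be sanity-checked with a one-liner (assuming one byte per sample and ignoring per-segment bookkeeping overhead):

```python
# Back-of-envelope check of the history-buffer durations quoted above.

def history_seconds(mem_bytes, pts_per_wfm, wfm_per_s):
    """Seconds of waveform history a ring buffer of mem_bytes can hold,
    at one byte per sample."""
    return mem_bytes / (pts_per_wfm * wfm_per_s)

print(history_seconds(256e6, 610, 23_000))   # ~18 s, close to the 17 s quoted
print(history_seconds(256e6, 610, 1_000))    # ~420 s, i.e. over 6 minutes
print(history_seconds(900e6, 610, 23_000))   # ~64 s: about a minute from 900 Mpts
```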
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #196 on: December 06, 2020, 09:47:36 pm »
One other thing:-

The present prototype when in 'Auto' memory depth (which is currently the only memory depth exposed to the user and otherwise behaves similarly to the Rigol DS1000Z 'Auto' function) uses all available RAM as a history buffer. With 256MB of RAM and at 50ns/div (~23k wfm/s, 610 pts), this gives approximately 17 seconds of history buffer that is recorded in real time.  In my mind, this is far more useful than any infinite or variable persistence feature, and as far as I can tell, only Siglent expose this in normal use - which led to Dave complaining about it as it was turned on by default.    As far as I can see, there is no reason not to enable this function by default, as it is just a case of walking through memory pointers.  If the user selects a larger memory size, then the instrument will have less record time, but should always have the amount of memory available that the user requests.
There are a few remarks to be made here:

1) Siglent and LeCroy scopes only capture enough data to fill the screen, regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope: it simply doesn't suit all use cases.

2) Having a history buffer running in the background is standard on Yokogawa and R&S oscilloscopes as well. The memory left over after the user's memory depth selection (which can be set to auto, meaning just enough memory to fill the screen) is used as a history buffer.

3) Segmented recording is close to history mode, but the user selects a specific record length and number of records instead of the oscilloscope doing this automatically. The distinction is between the oscilloscope determining something automatically versus the user being very specific in order to tailor the oscilloscope configuration to a particular measurement. Having a history buffer with 100k segments while the user is only interested in 5 is counterproductive.

4) Variable and infinite persistence are required on a DSO. I regularly use infinite persistence for tests which take hours to weeks. I just want to see the extents of where a signal goes (and that doesn't need crazy high update rates).
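The memory split described in point 2 can be sketched in a few lines (my own illustration; the total memory figure and fixed-size segments are assumptions for the example):

```python
# Sketch of a background history buffer carved from leftover memory:
# whatever the user's depth selection does not claim becomes history segments.

TOTAL_MEM_PTS = 256e6   # assumed total acquisition memory

def history_segments(user_depth_pts: float) -> int:
    """History-buffer segments (same length as the user's record) that fit
    in the memory left over after the user's depth selection."""
    leftover = TOTAL_MEM_PTS - user_depth_pts
    return int(leftover // user_depth_pts)

print(history_segments(1e6))   # 1 Mpt records -> 255 history segments
```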

Another nice feature to have is detailed mask testing. Again, it seems oscilloscope makers aim for high update speeds, but in doing so they throw the baby out with the bathwater. To give an example: I have a product which outputs a low and a high frequency signal over several seconds. A 10Mpts oscilloscope can sample this signal with enough detail; however, it turns out that mask testing seems to use peak-detect and decimates the data to a couple of hundred points. It would be nice to be able to compare traces with a length of 10Mpts (or more). It doesn't matter if it is slow; it will always be faster and more accurate than checking a signal visually.
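The full-resolution mask test being asked for here amounts to a per-sample limit comparison over the whole record. A minimal sketch (all names are illustrative; a real implementation would vectorise this over millions of points):

```python
# Full-resolution mask testing: compare every sample of a capture against
# upper/lower limit traces instead of decimating to a few hundred points.

def mask_test(samples, upper, lower):
    """Return the indices of all samples that violate the mask."""
    return [i for i, (s, u, l) in enumerate(zip(samples, upper, lower))
            if not (l <= s <= u)]

# Toy example: a 10-sample 'capture' with one glitch outside a +/-0.5 mask.
capture = [0.0, 0.1, 0.2, 0.1, 0.9, 0.1, 0.0, -0.1, 0.0, 0.1]
upper   = [0.5] * len(capture)
lower   = [-0.5] * len(capture)
print(mask_test(capture, upper, lower))   # [4]
```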
« Last Edit: December 06, 2020, 10:46:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 6454
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #197 on: December 06, 2020, 11:16:00 pm »
1) Siglent and LeCroy scopes only capture enough data to fill the screen, regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours, I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).

I think the best way to explain it is that LeCroy is sample-rate defined: the sample buffer length is calculated in time (not samples) and matches the displayed time base, up to a defined maximum.
That means it will keep the sample rate and retrigger rate as high as possible at all times until it reaches the maximum memory allowed, and only then will it start dropping the sample rate.

That is a very good strategy for a general purpose scope because it maximises the retrigger rate and captures only the data needed for the time span we are interested in. It is simple to think about from the operator's standpoint: I have 120ns of data. It was taken at 5GS/s, so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over sample buffer size and sample rate...
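That time-defined strategy can be sketched as a simple rate-selection rule (the maximum-rate and maximum-memory limits below are illustrative, not actual LeCroy figures):

```python
# Sketch of a 'time-defined buffer' acquisition strategy: hold the sample
# rate at maximum until timespan * rate would exceed the memory limit,
# and only then start dropping the rate.

MAX_RATE_SPS = 5e9     # assumed maximum sample rate (5 GS/s)
MAX_MEM_PTS  = 10e6    # assumed maximum record length (10 Mpts)

def choose_sample_rate(timebase_s_per_div: float, divisions: int = 10):
    """Return (sample_rate, record_length_pts) for a given timebase."""
    timespan = timebase_s_per_div * divisions
    rate = min(MAX_RATE_SPS, MAX_MEM_PTS / timespan)
    return rate, rate * timespan

# Fast timebase: memory is not the constraint, so the full 5 GS/s is kept.
print(choose_sample_rate(12e-9))   # 120 ns span -> 5 GS/s, 600 pts
# Slow timebase: memory-limited, so the rate drops.
print(choose_sample_rate(1e-3))    # 10 ms span -> ~1 GS/s, ~10 Mpts
```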
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26757
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #198 on: December 06, 2020, 11:29:30 pm »
1) Siglent and LeCroy scopes only capture enough data to fill the screen, regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours, I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).
Let's keep it at me not being able to explain it.  8) I know perfectly well how it works and why it is bad in which situations. This is based on my own hands-on experience: I have owned a Siglent oscilloscope in the past and also own a LeCroy oscilloscope (I don't think there is any DSO brand left that I have not used or owned myself; yes, including Picoscope).
« Last Edit: December 06, 2020, 11:32:43 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Offline tautech

  • Super Contributor
  • ***
  • Posts: 28142
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #199 on: December 06, 2020, 11:34:28 pm »
1) Siglent and LeCroy scopes only capture enough data to fill the screen, regardless of the memory depth the user selects. This is wrong for a general purpose oscilloscope. It simply doesn't suit all use cases.
Nico,
we keep getting back to this, and every time I read this definition of yours, I don't know whether you have a problem explaining it or a misunderstanding of how it works (which, honestly, I think you don't).

I think the best way to explain it is that LeCroy is sample-rate defined: the sample buffer length is calculated in time (not samples) and matches the displayed time base, up to a defined maximum.
That means it will keep the sample rate and retrigger rate as high as possible at all times until it reaches the maximum memory allowed, and only then will it start dropping the sample rate.

That is a very good strategy for a general purpose scope because it maximises the retrigger rate and captures only the data needed for the time span we are interested in. It is simple to think about from the operator's standpoint: I have 120ns of data. It was taken at 5GS/s, so I know there is no aliasing on my 200 MHz signal...

It is not so good for FFT, where we want exact control over sample buffer size and sample rate...
Maybe, just maybe, he will one day understand why these different strategies are used; but maybe not, as wfm/s has never been of high concern for him... no guesses as to why.  ::)

Three choices: an ASIC, an ADC allowing for large captures, or an ADC with optimised wfm/s... pick your poison and understand its limitations.
« Last Edit: December 07, 2020, 12:12:40 am by tautech »
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

