Author Topic: DSO sampling rate at slow time bases  (Read 10098 times)


Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
DSO sampling rate at slow time bases
« on: May 15, 2013, 09:00:22 am »
This post is prompted by some discussion on another thread where the question was asked: what is your scope's sample rate at 50msec/div timebase?

The straightforward answer in my case, where my scope has 500k memory per channel, is 1 MS/sec, giving a nominal bandwidth of 500 kHz.

But then it struck me that my scope only displays 500 points on the screen (if persistence mode is turned off), and I realised that I don't know how it selects those 500 points from the 500k it has sampled. If, as I suspect, it just picks a subset of the samples, then the effective sampling rate is 1000 times worse at only 1kS/sec, with an effective bandwidth of only 500Hz! (I'm assuming that peak detect is not being used either.)

If this is the case, then unless you use persistence mode or peak detect, the fact that deep memory allows fast sampling will not protect you from seeing aliasing effects on the screen.

Given the drive for high waveforms-per-second rates I would have thought that only one point per horizontal pixel is used; so are the stored samples simply subsampled for display?
Or do scopes average every 1000 stored samples in some way to produce one displayed sample? I suspect they don't, because it would introduce an overhead and slow down the display rate.


EDIT : I think my post above is based on the false premise that my scope only plots 500 points because there are only 500 pixels. :-[
« Last Edit: May 15, 2013, 11:10:17 am by jpb »
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 502
  • Country: be
Re: DSO sampling rate at slow time bases
« Reply #1 on: May 15, 2013, 10:22:19 am »
Maybe you first have to ask yourself what you would ideally like to see on a 0.5 sec wide screen of 10 cm (assume infinite display pixels for a moment).
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline cyr

  • Frequent Contributor
  • **
  • Posts: 251
  • Country: se
Re: DSO sampling rate at slow time bases
« Reply #2 on: May 15, 2013, 10:49:10 am »
A good scope will show you all the information it has captured; that means using all points from every waveform it has captured since it last refreshed the screen. Intensity grading will show how many of the captured samples fall on/near a particular pixel on screen.

If your scope is a "good" scope according to that definition I have no idea :)
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #3 on: May 15, 2013, 11:09:00 am »
A good scope will show you all the information it has captured; that means using all points from every waveform it has captured since it last refreshed the screen. Intensity grading will show how many of the captured samples fall on/near a particular pixel on screen.

If your scope is a "good" scope according to that definition I have no idea :)

Yes, I think that you are right and my scope plots many more than 500 points (it has a gradient display).

In fact I decided I'd been slightly silly and tried to delete my original post but as I was the OP I wasn't allowed to!
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 502
  • Country: be
Re: DSO sampling rate at slow time bases
« Reply #4 on: May 15, 2013, 08:13:06 pm »
My point was that your eyes can't see that high a resolution on your screen.
Assume you have a screen with an extremely high resolution of 2400 DPI (Apple's 'Retina display' is about 8 times lower); beyond that your eyes will just see grey.

Example: 2400 DPI is about 4 × 2400 = 9600 pixels across 10 cm / 0.5 sec.
0.5 sec / 9600 is about 0.05 ms per pixel; for a square wave you need 2 pixels, so period = 0.1 ms or 10 kHz, still far below your sample rate.

In other words, your eyes are the limiting factor; infinite DPI can't help. That's why you have to stretch the signal out and trigger on the part you want to see.

But that doesn't really answer your question "which samples do you see".
Ideally, if a spike is present you want to see it even if it is much narrower than 1 pixel (say, 200 times narrower),
knowing that 1 pixel will be much too wide to represent the spike.
So will it?

I wasn't 100% sure what my scope would do, so I did a test to see if it would show such a spike.
On the 50 ms/div scale, i.e. 1 pixel/ms, at 500 kS/s (a sample every 2 µs),
I provided a 5 µs wide pulse every 50 ms.
As you see below, the scope shows it without problem.
(I also tried 2 short pulses right after each other: the brightness of the spike went up, 3 spikes a little brighter still, etc.)

So no, you don't see all samples; no, they are not averaged; and you also don't miss anything important on the screen. The 500 samples for each pixel are processed and something 'appropriate' is displayed. But obviously you need to zoom in to really see it.

On the next screenshot you see the same, but I reduced the pulse width to 1 µs. This is below the sample interval, and you can see it starting to show wrong measurements, as expected.
Edit: the interruption in the noise line is the update position of the scope.
« Last Edit: May 15, 2013, 08:31:20 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #5 on: May 16, 2013, 01:06:11 am »
This post is prompted by some discussion on another thread where the question was asked: what is your scope's sample rate at 50msec/div timebase?

The straightforward answer in my case, where my scope has 500k memory per channel, is 1 MS/sec, giving a nominal bandwidth of 500 kHz.

But then it struck me that my scope only displays 500 points on the screen (if persistence mode is turned off), and I realised that I don't know how it selects those 500 points from the 500k it has sampled. If, as I suspect, it just picks a subset of the samples, then the effective sampling rate is 1000 times worse at only 1kS/sec, with an effective bandwidth of only 500Hz! (I'm assuming that peak detect is not being used either.)

If this is the case, then unless you use persistence mode or peak detect, the fact that deep memory allows fast sampling will not protect you from seeing aliasing effects on the screen.

Given the drive for high waveforms-per-second rates I would have thought that only one point per horizontal pixel is used; so are the stored samples simply subsampled for display?
Or do scopes average every 1000 stored samples in some way to produce one displayed sample? I suspect they don't, because it would introduce an overhead and slow down the display rate.

It's true that on some scopes, only every n-th data point is displayed: the DSO just uses decimation (throwing out) of the unneeded sample points. This technique allows the fastest waveform update rates, but as the display data is decimated, important signal details can be lost, because (as you pointed out) the sample rate is effectively reduced.

A much better way is to do a peak-to-peak assessment: if a number of samples occur within one pixel column of the display, then the extreme values are displayed and joined by a vertical line, so the full vertical range of the samples in each pixel column is visible. This, of course, requires more processing - so slower wfrm/s rates.
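To make the difference concrete, here is a minimal sketch of both reduction schemes (illustration only, not any vendor's actual pipeline), showing how 1-of-n decimation can drop a narrow spike that a peak-to-peak reduction keeps:

Code:
# Minimal sketch: 1-of-n decimation vs. peak-to-peak (min/max) reduction.
# Illustration only - not any vendor's actual display pipeline.
samples = [0.0] * 10000
samples[4321] = 5.0                 # one narrow spike, a single sample wide

PIXELS = 500
n = len(samples) // PIXELS          # 20 samples per pixel column

# 1-of-n decimation: keep every n-th sample, ignore the rest.
decimated = [samples[i * n] for i in range(PIXELS)]

# Peak-to-peak: keep the min and max of each pixel column's samples.
peak2peak = [(min(samples[i * n:(i + 1) * n]), max(samples[i * n:(i + 1) * n]))
             for i in range(PIXELS)]

print(max(decimated))                    # 0.0 -> spike lost from the display
print(max(hi for lo, hi in peak2peak))   # 5.0 -> spike survives the reduction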

One other thing I might point out in this discussion that is relevant - but not understood by some DSO users: deep memory is not just important in SINGLE SHOT mode; it has a direct and important impact on running the scope in NORMAL mode at slower time base settings, because the memory length determines the sampling rate.

Look at the two attached images showing the exact same signal at 50ms/div: the first with sample length set to 14k, so the rate is 20kSa/s (thus missing some of the 20us spikes); the second with sample length set to 56Mpts, so the rate is 50MSa/s.
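The arithmetic is worth spelling out; a back-of-the-envelope sketch (assuming a 14-division screen as on the DS2000, and a made-up 1 GSa/s ADC ceiling):

Code:
# Sample rate at slow timebases is capped by memory depth / capture window.
# Assumptions: 14 horizontal divisions (DS2000-style), 1 GSa/s ADC ceiling.
MAX_RATE = 1e9
DIVISIONS = 14

def sample_rate(mem_depth, s_per_div):
    window = DIVISIONS * s_per_div    # seconds captured across the screen
    return min(MAX_RATE, mem_depth / window)

print(sample_rate(14e3, 50e-3))   # 20000.0 Sa/s -> misses the 20 us spikes
print(sample_rate(56e6, 50e-3))   # 80000000.0 Sa/s; a real scope rounds this
                                  # down to a standard step (50 MSa/s here)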
« Last Edit: May 16, 2013, 02:09:48 pm by marmad »
 

Online Hydrawerk

  • Super Contributor
  • ***
  • Posts: 2380
  • Country: 00
Re: DSO sampling rate at slow time bases
« Reply #6 on: May 16, 2013, 01:18:04 am »
Here it's useful to have a scope with large memory. Agilent with 100kpoints per channel is not ideal.  :-/O
Amazing machines. https://www.youtube.com/user/denha (It is not me...)
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #7 on: May 16, 2013, 01:38:55 am »
Given the drive for high waveforms-per-second rates I would have thought that only one point per horizontal pixel is used; so are the stored samples simply subsampled for display?
Or do scopes average every 1000 stored samples in some way to produce one displayed sample? I suspect they don't, because it would introduce an overhead and slow down the display rate.
A diagram showing the two different methods adopted by DSO manufacturers for reducing/displaying extra samples per pixel point:

 

Online Someone

  • Super Contributor
  • ***
  • Posts: 2103
  • Country: au
Re: DSO sampling rate at slow time bases
« Reply #8 on: May 16, 2013, 06:18:06 am »
Use a tool wrong and of course it gets the wrong answers. Here are the pulse widths at which the signal stops being reliably visible for a range of scopes set to 40-50 ms/division.

Scope                Mode              Pulse width  Sample rate
Tektronix 7834       -                 200 ns       -
Tektronix 2430A      Normal            1 ms         -
Tektronix 2430A      Envelope          <10 ns       -
Tektronix TDS3014B   Normal            40 us        25 kSa/s
Tektronix TDS3014B   Peak              <10 ns       25 kSa/s
LeCroy 324           Normal            1 us         1 MSa/s
LeCroy 324           Peak              <10 ns       1 MSa/s
Agilent MSOX3104A    Normal            800 ns       2 MSa/s
Agilent MSOX3104A    Peak              <10 ns       500 kSa/s
Tektronix MSO4054    Normal            40 ns        25 MSa/s
Tektronix MSO4054    Envelope or Peak  <10 ns       25 MSa/s

All scopes were set to their maximum memory depth. The pulse width they will catch is a function of the achievable sample rate (which is dominated by the memory capacity), but even the historic Tektronix scope will still catch it at any setting when you enable the peak/envelope acquisition mode.

As to the change in point memory when switching to peak mode, only the Agilent X series dropped its acquisition rate. They use 4 times the memory to store min/max data rather than the expected 2 times (some peculiarity of the ASIC? http://www.home.agilent.com/owc_discussions/thread.jspa?threadID=17257&tstart=825), which raises the question of how other scopes maintain the memory depth between sampling and peak/envelope modes.
« Last Edit: May 16, 2013, 06:24:24 am by Someone »
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #9 on: May 16, 2013, 08:40:07 am »
As to the change in point memory when switching to peak mode, only the Agilent X series dropped its acquisition rate. They use 4 times the memory to store min/max data rather than the expected 2 times (some peculiarity of the ASIC? http://www.home.agilent.com/owc_discussions/thread.jspa?threadID=17257&tstart=825), which raises the question of how other scopes maintain the memory depth between sampling and peak/envelope modes.
On my scope, the LeCroy (Iwatsu) WaveJet, peak mode is defined as taking the maximum and minimum values over twice the sampling interval, so the number of points per sampling interval remains at 1; no extra memory is required, but of course more processing. I think this is a fairly standard way of doing it.

The question this doesn't answer is what time values are assigned to each. In normal mode the samples fall at fixed, whole time steps; in peak mode the maximum and minimum will occur at arbitrary points within the two time steps. Are they simply ordered in time, with the earlier one assigned to the first time step and the later one to the second, or is some attempt made to assign them to their correct times (e.g. if both actually occur in the second time step)?
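To make the question concrete, here is a toy model of the scheme described above (one reading of it, not the WaveJet's actual firmware), with the time-assignment ambiguity marked in the comments:

Code:
# Toy model of peak-detect acquisition: over every two output time steps,
# keep the min and max of the full-rate ADC stream (one point per step).
# An assumed implementation, not the WaveJet's actual firmware.
def peak_detect(adc_stream, ratio):
    """adc_stream: full-rate samples; ratio: ADC samples per output step."""
    out = []
    for i in range(0, len(adc_stream), 2 * ratio):
        chunk = adc_stream[i:i + 2 * ratio]
        # The open question: which output slot gets the min and which the
        # max? A fixed max-first order (what the Rigol screenshots later in
        # the thread suggest) discards the true ordering within the chunk.
        out.extend([max(chunk), min(chunk)])
    return out

stream = [0] * 2000                 # 2000 full-rate points, ratio 1000:
stream[700], stream[900] = -2, 3    # the minimum occurs BEFORE the maximum...
print(peak_detect(stream, 1000))    # [3, -2] -> ...but is reported after it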
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #10 on: May 16, 2013, 08:42:23 am »
I wasn't 100% sure what my scope would do, so I did a test to see if it would show such a spike.
On the 50 ms/div scale, i.e. 1 pixel/ms, at 500 kS/s (a sample every 2 µs),
I provided a 5 µs wide pulse every 50 ms.
As you see below, the scope shows it without problem.
(I also tried 2 short pulses right after each other: the brightness of the spike went up, 3 spikes a little brighter still, etc.)

So no, you don't see all samples; no, they are not averaged; and you also don't miss anything important on the screen. The 500 samples for each pixel are processed and something 'appropriate' is displayed. But obviously you need to zoom in to really see it.

On the next screenshot you see the same, but I reduced the pulse width to 1 µs. This is below the sample interval, and you can see it starting to show wrong measurements, as expected.
Edit: the interruption in the noise line is the update position of the scope.
Presumably this was in normal sampling mode rather than peak mode? In peak mode I presume it would show all the spikes even at the lower memory depth?
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #11 on: May 16, 2013, 11:27:37 am »
Use a tool wrong and of course it gets the wrong answers. Here are the pulse widths at which the signal stops being reliably visible for a range of scopes set to 40-50 ms/division.

Who exactly are you responding to? I thought the thread was about what DSOs do with extra samples per pixel point - not about what pulse widths are visible at any particular time base setting.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #12 on: May 16, 2013, 12:08:28 pm »
A diagram showing the two different methods adopted by DSO manufacturers for reducing/displaying extra samples per pixel point:

The right diagram is not fully correct. It says all data points are displayed, but that is strictly true only for the example waveform, not necessarily for all waveforms. Peak-to-Peak (as the name implies) is quite similar to Peak Detect mode: it takes the extrema of n samples and uses them as display points. This generally produces a much better replication of the original waveform, but it still means that data points are thrown away.

There's also a third method (Binning) which really ensures that every data point ends up on the display and no information is lost. It's much slower, though, and thus doesn't provide the high waveform rates that people nowadays want to see.

Most low-end scopes use Decimation, and most better scopes (midrange/high-end) use Peak-to-Peak. Some newer high-end scopes, and even older LeCroy scopes (9300/LC/WaveRunner LT/WavePro 900), use Binning.

Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #13 on: May 16, 2013, 12:18:12 pm »
The right diagram is not fully correct. It says all data points are displayed, but that is strictly true only for the example waveform, not necessarily for all waveforms. Peak-to-Peak (as the name implies) is quite similar to Peak Detect mode: it takes the extrema of n samples and uses them as display points. This generally produces a much better replication of the original waveform, but it still means that data points are thrown away.

But if all data points are being mapped to the same vertical line of pixels (which they must be) - and if the minimum and maximum data points are connected by a vertical line - how are the possible data points in between visibly lost? Where would they be shown otherwise?

BTW, the diagram is from some Yokogawa literature.
« Last Edit: May 16, 2013, 12:32:15 pm by marmad »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #14 on: May 16, 2013, 01:32:40 pm »
But if all data points are being mapped to the same vertical line of pixels (which they must be) - and if the minimum and maximum data points are connected by a vertical line - how are possible data points in-between visibly lost? Where would they be shown otherwise?

The thing is that Peak-to-Peak only considers the Extrema, and throws away the points in between. They are lost.

To give you a (very simple!) example:

Say we have the following data points:

+2|+2.3|+2.6|-4|-2.8|-1.3|-0.5|+3

Now we take the Extrema, which are '-4' and '+3'. These two data points are now used to build the next display point (there are several methods to do that, which don't matter for this example).

Now let's look at a second sample set:

+2.8|+2.7|+1.8|-2.4|-4|-0.5|+1.5|+3

Again we take the Extrema, which are still '-4' and '+3', resulting in the same display point. But if you actually plot both data sets you'll see that the underlying waveforms are completely different; with Peak-to-Peak this information is lost.
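In code form the point is immediate (the two sets are the ones above):

Code:
# Two different sample sets with identical extrema.
set1 = [+2, +2.3, +2.6, -4, -2.8, -1.3, -0.5, +3]
set2 = [+2.8, +2.7, +1.8, -2.4, -4, -0.5, +1.5, +3]

print(min(set1), max(set1))   # -4 3
print(min(set2), max(set2))   # -4 3 -> same display point, shape lost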

Quote
BTW, the diagram is from some Yokogawa literature.

It's a good diagram (despite this small error) and shows the general difference between the two most-used methods of creating display points. But it's probably a bit biased toward the method I guess Yokogawa wants to promote (probably similar to looking for information on the relevance of waveform rates in Agilent literature).
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline Yaksaredabomb

  • Regular Contributor
  • *
  • Posts: 124
  • Country: us
Re: DSO sampling rate at slow time bases
« Reply #15 on: May 16, 2013, 01:45:30 pm »
....Where would they be shown otherwise?
....
Again we take the Extrema which are still '-4' and '+3', which would result in the same display point. But if you actually plot both data sets you'll see that the underlying waveform is completely different. But with Peak-to-Peak this information is lost....
I'm not sure you really answered Marmad's question.  I understand what information is lost with peak detect, but not how "binning" would show anything additional.  There is only 1 pixel column with which to represent all the samples.
 
I suppose intensity grading could be used.  For instance, for the data (-4,-4,-2,-2,4,4,4) peak detect would show a plain vertical pixel line from -4 to 4.  But for peak detect with grading, the line would be brighter at -4, -2, and 4 than at the points in between (with the line being brightest at 4).
 
That's just something I came up with off the top of my head - no clue if any scopes really do this or if it would be of any real benefit.  Just trying to brainstorm how you can display more information in a single pixel width.
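Something along these lines (purely hypothetical - no claim that any scope actually does it this way):

Code:
# Hypothetical intensity-graded pixel column: brightness follows how many
# samples hit each vertical level, instead of a uniform min-to-max line.
from collections import Counter

def column_brightness(samples):
    counts = Counter(samples)             # samples per vertical level
    peak = max(counts.values())
    return {level: counts[level] / peak for level in sorted(counts)}

print(column_brightness([-4, -4, -2, -2, 4, 4, 4]))
# {-4: 0.67, -2: 0.67, 4: 1.0} (rounded) -> brightest at 4, dark in between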
« Last Edit: May 16, 2013, 01:47:07 pm by jneumann »
My display name changed June 6th from "jneumann" to "Yaksaredabomb"
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #16 on: May 16, 2013, 01:54:11 pm »
Peak-to-Peak (as the name implies) is quite similar to Peak Detect mode: it takes the extrema of n samples and uses them as display points. This generally produces a much better replication of the original waveform, but it still means that data points are thrown away.

I don't think this is correct.

Peak-to-Peak is just compression from the sample data to the display data in Normal mode: all samples gathered within the period of time represented by one screen pixel are reduced to a single vertical line of pixels. Nothing is lost from what is sampled - if I stop the DSO and 'zoom' in, all of the original sampled data should still be visible at its correct position in time (but the DSO will have missed any pulses shorter than the original sample interval).

Peak Detect mode is an optimization at slower sampling rates, allowing the capture and display of pulses shorter than the 'given' sample interval (determined by the initial time base and sample length). But if I stop the DSO and 'zoom' in, the sampled points/positions are not necessarily 'true'.
« Last Edit: May 16, 2013, 03:02:21 pm by marmad »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #17 on: May 16, 2013, 01:59:31 pm »
I'm not sure you really answered Marmad's question.  I understand what information is lost with peak detect, but not how "binning" would show anything additional.  There is only 1 pixel column with which to represent all the samples.
 
I suppose intensity grading could be used.  For instance, for the data (-4,-4,-2,-2,4,4,4) peak detect would show a plain vertical pixel line from -4 to 4.  But for peak detect with grading, the line would be brighter at -4, -2, and 4 than at the points in between (with the line being brightest at 4).
 
That's just something I came up with off the top of my head - no clue if any scopes really do this or if it would be of any real benefit.  Just trying to brainstorm how you can display more information in a single pixel width.

Yes, I think you're correct - the only way to show more information in a single pixel line is with an added 'dimension' - in this case, with a gradient.
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #18 on: May 16, 2013, 02:03:13 pm »
Peak-to-Peak (as the name implies) is quite similar to Peak Detect mode: it takes the extrema of n samples and uses them as display points. This generally produces a much better replication of the original waveform, but it still means that data points are thrown away.

I don't think this is correct.

Peak-to-Peak is just compression from the sample data to the display data: all samples gathered within the period of time represented by one screen pixel are reduced to a single vertical line of pixels. Nothing is lost from what is sampled - if I stop the DSO and 'zoom' in, all of the original sampled data should still be visible at its correct position in time (but the DSO will have missed any pulses shorter than the original sample interval).

Peak Detect mode is an optimization at slower sampling rates, allowing the capture and display of pulses shorter than the 'given' sample interval (determined by the initial time base and sample length). But if I stop the DSO and 'zoom' in, the sampled points/positions are not necessarily 'true'.

My understanding from what I've read is that data is lost in the sense that only the minimum and maximum points in twice the sample interval are retained. But the sample interval itself may be a lot smaller than one pixel width.

For example, on my WaveJet the sample memory is 500k points and the screen width is 500 pixels. At a time base of 50 msec/div, or 500 msec per screen, even if I set the memory to the maximum 500k, the sample interval will be 1 microsecond. With peak detect it will sample at 1 nsec intervals for 2 microseconds and pick the maximum and minimum points, so it retains only 2 out of 2000 points. Each pixel width is 1 msec, i.e. 1000 of the 1 microsec sample steps, so you could still do a lot of zooming.
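Running the numbers (straight from the figures above):

Code:
# Checking the WaveJet figures: 50 ms/div, 500k memory, 500 pixel screen.
screen_s   = 10 * 50e-3         # 0.5 s across the screen (10 divisions)
interval_s = screen_s / 500e3   # stored-sample interval
adc_s      = 1e-9               # peak detect samples at 1 ns (per the above)

print(interval_s)                     # 1e-06 -> 1 us per stored sample
print(2 / (2 * interval_s / adc_s))   # 0.001 -> 2 points kept out of 2000
print(1e-3 / interval_s)              # 1000.0 -> sample steps per 1 ms pixel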
« Last Edit: May 16, 2013, 02:04:55 pm by jpb »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #19 on: May 16, 2013, 02:07:44 pm »
My understanding from what I've read is that data is lost in the sense that only the minimum and maximum points in twice the sample interval are retained. But the sample interval itself may be a lot smaller than one pixel width.

Yes, but again, there is a definite distinction between using Peak-to-Peak to just compress sample data to display data in Normal mode - and using Peak Detect mode to optimize the display.

As per my other post, two images showing the DSO zoomed into the same signal in Normal and Peak Detect mode:
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #20 on: May 16, 2013, 02:09:30 pm »
I don't think this is correct.

I'm pretty sure this is correct. Peak-to-Peak was never about retaining all data points.

Quote
Peak-to-Peak is just compression from the sample data to the display data: all samples gathered within the period of time represented by one screen pixel are reduced to a single vertical line of pixels.

This is not what peak-to-peak does (unless Yokogawa came up with a variant of their own under this term, which is not impossible of course). Peak-to-Peak is about data reduction, i.e. one method (which I guess is what Yokogawa is using) is taking the Extrema and drawing a vertical line between them. This, however, only describes the range in which the thrown-away sampling points must be located (they can't be bigger or smaller than the Extrema); the method does no weighting (i.e. in which area are most of the data points), and in the end loses a lot of information.

Quote
Nothing is lost from what is sampled - if I stop the DSO and 'zoom' in, all of the original sampled data should still be visible at its correct position in time.

Try it, you will find that it's gone. And even if Peak-to-Peak worked as you believe it does, you would still lose the information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Quote
Peak Detect mode is an optimization at slower sampling rates, allowing the capture and display of pulses shorter than the 'given' sample interval (determined by the initial time base and sample length). But if I stop the DSO and 'zoom' in, the sampled points/positions are not necessarily 'true'.

Right, but the same principle is used by Peak-to-Peak.
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #21 on: May 16, 2013, 02:23:41 pm »
My understanding from what I've read is that data is lost in the sense that only the minimum and maximum points in twice the sample interval are retained. But the sample interval itself may be a lot smaller than one pixel width.

Yes, but again, there is a definite distinction between using Peak-to-Peak to just compress sample data to display data in Normal mode - and using Peak Detect mode to optimize the display.

As per my other post, two images showing the DSO zoomed into the same signal in Normal and Peak Detect mode:

Your screen grabs are instructive. I assume that it is meant to be a single spike (or narrow pulse), and it can be seen that peak-detect has inserted a minimum point in the middle of it, which shows that it has shifted the minimum point in time. This implies that, for the Rigol at least, the minimum and maximum points are saved in a fixed order regardless of which actually occurred first.

But I don't think peak-to-peak is any different (I don't know because it is not a mode that I have on my scope).

I unfortunately don't yet have a function generator so can't repeat your experiment on the WaveJet to see if it orders the points better.

The lesson, I guess, is that peak-detect is designed to do just that: detect glitches. If you need to zoom in on them, you probably need to re-trigger at a faster timebase.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #22 on: May 16, 2013, 02:35:58 pm »
Try it, you will find that it's gone. And even if Peak-to-Peak worked as you believe it does, you would still lose the information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Of course it's there! We seem to be having a misunderstanding here: a mix-up between how a DSO converts its sample memory to its display memory in Normal mode - and other acquisition modes; these are two distinct and different things.

If my sample length is 14k @ 50ms/div, the sample rate of my DSO is 20kSa/s - or 50us per sample. OTOH, my waveform display area is 700 pixels wide - which means that each pixel is equivalent to 20 samples - which can be reduced by either decimation OR peak-to-peak - binning makes no sense in this regard (except perhaps if taking a third dimension [grading] into consideration). But if I stop my DSO, I can zoom and SEE the samples at 50us spacing - so nothing has been lost.

Now I have no knowledge of which technique my Rigol uses for reducing sample data to display data - but I suspect, due to its low cost, that it's likely decimation. But I'll have to think up an experiment to try to capture it.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #23 on: May 16, 2013, 02:43:10 pm »
Your screen grabs are instructive. I assume that it is meant to be a single spike (or narrow pulse), and it can be seen that peak-detect has inserted a minimum point in the middle of it, which shows that it has shifted the minimum point in time. This implies that, for the Rigol at least, the minimum and maximum points are saved in a fixed order regardless of which actually occurred first.

Yes - exactly.

Quote
But I don't think peak-to-peak is any different (I don't know because it is not a mode that I have on my scope).

Again, this is NOT a switchable mode - it's just a technique your DSO might use to solve the problem you posed in your original post: how to reduce many samples to a single vertical line of pixels for display while in Normal acquisition mode.
« Last Edit: May 16, 2013, 02:59:02 pm by marmad »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #24 on: May 16, 2013, 05:10:26 pm »
Ok, I did some experimentation using an AWG file of three consecutive short pulses with increasingly higher voltages - with the DSO triggering on the second of the three pulses.

It appears the Rigol DS2000 does Peak-to-Peak conversion from sample to display memory in Normal mode, since if it only displayed every 20th point (using decimation), you wouldn't see the amplitude of the third pulse when looking at the display at the 100ms/div setting.

The attached images show 100ms/div and 200us/div displays of Normal mode (@ both 14k and 56Mpts sample lengths) and 100ms/div and 200us/div of Peak Detect mode @ 14kpts.

As mentioned above, Peak Detect is a different mode of acquisition - I've included it just to show that it can affect the contents of sample memory (as shown at 200us/div) - while Peak-to-Peak in Normal mode does not.

Edit: I've attached the 4000pt. ARB Express .CSV file (zipped), if anyone else wants to try this.
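For anyone without the attachment, here is a sketch that generates a waveform along the same lines (reconstructed from the description, not the attached file itself; the 4000-point length is from the post, while the pulse widths, spacing and one-value-per-line CSV layout are assumptions):

Code:
# Reconstruction of the described test waveform: 4000 points, three
# consecutive short pulses with increasing amplitude. Pulse width, spacing
# and the one-value-per-line CSV layout are assumed, not the attached file.
points = [0.0] * 4000
for i, amp in enumerate([1.0, 2.0, 3.0]):   # increasingly higher pulses
    start = 1000 + i * 200                  # arbitrary spacing
    for j in range(start, start + 20):      # 20-point-wide pulse
        points[j] = amp

with open("three_pulses.csv", "w") as f:
    f.write("\n".join(f"{v:.3f}" for v in points))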
« Last Edit: May 16, 2013, 05:25:16 pm by marmad »
 

Online jpb

  • Super Contributor
  • ***
  • Posts: 1620
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #25 on: May 16, 2013, 05:40:59 pm »
Ok, I did some experimentation using an AWG file of three consecutive short pulses with increasingly higher voltages - with the DSO triggering on the second of the three pulses.

It appears the Rigol DS2000 does Peak-to-Peak conversion from sample to display memory in Normal mode, since if it only displayed every 20th point (using decimation), you wouldn't see the amplitude of the third pulse when looking at the display at the 100ms/div setting.
That is interesting, thank you.

Of course, you can't really differentiate between what you refer to as peak-to-peak and simply plotting all the points (with many of them ending up on the same pixel), as the two would look the same given that no intensity gradient is displayed. Given that the coding is much simpler, I suspect it is just direct plotting of every sample point. One way to tell might be whether the waveforms-per-second rate dropped with the higher memory - but then again it would do the same even if it was using a more intelligent peak-to-peak calculation.
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 502
  • Country: be
Re: DSO sampling rate at slow time bases
« Reply #26 on: May 16, 2013, 05:43:30 pm »
...
On the next screenshot you see the same, but I reduced the pulse width to 1 µs. This is below the sample interval, and you can see it starting to show wrong measurements, as expected.
Edit: the interruption in the noise line is the update position of the scope.
Presumably this was in normal sampling mode rather than peak mode? In peak mode I presume it would show all the spikes even at the lower memory depth?

Correct, this was without peak-detect.
Below you see, with peak-detect ('PD: Refresh') on, a more extreme screenshot: a 1 µs pulse at 500 ms/div and 40 kS/s (a sample every 25 µs); all the 1 µs peaks are shown. (You can also see the maximum zoom of 50 µs/div.)
Even zoomed in, it shows one 1 µs pulse as 50 µs wide.
« Last Edit: May 16, 2013, 06:48:04 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #27 on: May 16, 2013, 06:03:54 pm »
Of course, you can't really differentiate between what you refer to as peak-to-peak and simply plotting all the points (with many of them ending up on the same pixel), as the two would look the same given that no intensity gradient is displayed. Given that the coding is much simpler, I suspect it is just direct plotting of every sample point.

Yes, the outcome would be the same with either peak-to-peak plotting or plotting all points, but I don't think it's a coding issue - it's a time issue. With a 14k sample length, there are only 20 sample points per pixel - but with a 14M sample length, there are 20,000 sample points per pixel. So I think, with certain sample lengths, plotting every point to display memory would be slower than just using a sorting algorithm.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #28 on: May 16, 2013, 06:20:57 pm »
I'm not sure you really answered Marmad's question.


I believe I did. It shows that with what Yokogawa calls 'Peak-to-Peak' you lose the time resolution (so even if the line did contain all data points in the specific sample set, the weighting would still be lost).

Quote
I understand what information is lost with peak detect, but not how "binning" would show anything additional.  There is only 1 pixel column with which to represent all the samples.

I suppose intensity grading could be used.  For instance, for the data (-4,-4,-2,-2,4,4,4) peak detect would show a plain vertical pixel line from -4 to 4.  But for peak detect with grading, the line would be brighter at -4, -2, and 4 than at the points in between (with the line being brightest at 4).

That's just something I came up with off the top of my head - no clue if any scopes really do this or if it would be of any real benefit.  Just trying to brainstorm how you can display more information in a single pixel width.

Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as for example used by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #29 on: May 16, 2013, 06:45:00 pm »
Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as for example used by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.

The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps?
« Last Edit: May 16, 2013, 06:50:12 pm by marmad »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #30 on: May 16, 2013, 06:50:26 pm »
Try it, you will find that it's gone. And even if Peak-to-Peak worked as you believe it does, you would still lose the information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Of course it's there! We seem to be having a misunderstanding here: a mix-up between how a DSO converts its sample memory to its display memory in Normal mode - and other acquisition modes; these are two distinct and different things.

No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

Quote
If my sample length is 14k @ 50ms/div, the sample rate of my DSO is 20kSa/s - or 50us per sample. OTOH, my waveform display area is 700 pixels wide - which means that each pixel is equivalent to 20 samples - which can be reduced by either decimation OR peak-to-peak

BTW: all these methods are called 'Decimation', because that is what they do. What Yokogawa labels 'Decimation' on their marketing sheet is actually '1-n Decimation'; then there is Peak-to-Peak Decimation (also called 'Peak Detect Decimation', again not the same as Peak Detect Acquisition Mode), Decimation by Binning, Decimation by Resampling, root-mean-square (RMS) Decimation, and a few others. They all have in common that they reduce the amount of data.

Quote
- binning makes no sense in this regard (except perhaps if taking a third dimension [grading] into consideration). But if I stop my DSO, I can zoom and SEE the samples at 50us spacing - so nothing has been lost.

It's indeed possible that you see the data points when zooming. Most scopes do Decimation not only during the acquisition cycle but every time the screen content changes, i.e. when you zoom. Decimation can happen during any phase, and when and what is done depends on the specific design of the scope. Unfortunately it's very hard to find any details about this even for scopes from the big names, and I guess it's even worse for the Chinese brands.

Quote
Now I have no knowledge of which technique my Rigol uses for reducing sample data to display data - but I suspect, due to its low cost, that it's likely decimation.

It's probably either 1-n Decimation or Peak-to-Peak Decimation, as these are the methods that require the least processing power.

Quote
But I'll have to think up an experiment to try to capture it.

Taking your above example of a 20 kSa/s sample rate (50 us sample period), I'd feed the scope a series (e.g. 10) of 50-100 us pulses with varying amplitude (e.g. stairs or alternating) in single-shot mode, and then try to find every pulse in the non-zoomed image. Depending on the Decimation method used in your scope, a few or most of them will be lost.
« Last Edit: May 17, 2013, 09:22:21 am by Wuerstchenhund »
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #31 on: May 16, 2013, 07:24:23 pm »
Try it, you will find that it's gone.

Sorry, man, but you were absolutely wrong with what you wrote here.  ;)

Quote
And even if Peak-to-Peak worked as you believe it does, you would still lose the information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Yes, peak-to-peak works exactly as I believed it did - and as I wrote about it in previous posts. I never claimed no information was lost - information can ALWAYS be lost (that is, with regard to how it's currently displayed - not the actual samples) - there is only so much voltage/time information that can be conveyed by a single pixel, even if you include intensity levels. And the initial discussion here involved, as far as I was concerned, the idea of compressing a single waveform capture of N samples to X pixels without using intensity levels.

Quote
No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

But then why would you have assumed my DSO would throw out captured samples? I.e. the first quoted sentence above.

Quote
It's indeed possible that you see the data points when zooming. Most scopes do Decimation not only during the acquisition cycle but every time the screen content changes, i.e. when you zoom. Decimation can happen during any phase, and when and what is done depends on the specific design of the scope. Unfortunately it's very hard to find any details about this even for scopes from the big names, and I guess it's even worse for the Chinese brands.

Sure - but again: there's a big difference between decimating for sample memory - and decimating for display memory. The latter only affects what I see - not the actual, captured data.
« Last Edit: May 16, 2013, 08:27:00 pm by marmad »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #32 on: May 17, 2013, 08:54:45 am »
The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps?

No, unfortunately not. There is very little written info (not only from LeCroy) about this topic.
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Banned!
  • Posts: 2766
  • Country: gb
  • Occasionally active on the forum, available via PM
Re: DSO sampling rate at slow time bases
« Reply #33 on: May 17, 2013, 09:15:46 am »
No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

But then why would you have assumed my DSO would throw out captured samples? I.e. the first quoted sentence above.

I guess there's a major misunderstanding, as it seems you believe that "throwing out" refers to the sample memory. It doesn't. Decimation (when used for reducing the number of data points to a display with fewer pixels, which is what we are discussing) does not affect the sample memory; it does this for the display buffer only. Even 1-n Decimation does not remove any captured data from the sample memory - it just ignores them ('throws them away') when creating the screen map.

And this is also the reason why on many better scopes you will still see all data points when zooming in: because the zoomed waveform is created from the sample memory data, it goes through a new Decimation process and then ends up on your screen. And as you zoom into a waveform (i.e. enlarge a smaller section), fewer sample points go into each Decimation round, which means more data will find its way to your screen.

Quote
Sure - but again: there's a big difference between decimating for sample memory - and decimating for display memory. The latter only affects what I see - not the actual, captured data.

Decimation (for the purpose we're discussing here) never decimates data in the sample memory, only the display buffer.

This is also the reason why the method of Decimation does not affect your scope's general ability to capture small glitches. However, it does affect your ability to see the glitches in normal operation (without zooming). With 1-n Decimation, for example, chances are good that you will miss many or most short glitches unless you trigger/zoom in on the affected section (but this would require you to know that there actually is a glitch, which you won't, as the waveform looked normal on your scope). Peak-to-Peak Decimation will increase your chances of seeing many glitches you wouldn't with 1-n Decimation. LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates that are the result of the amount of processing this technique requires.

But no matter what Decimation technique is used, your sample memory retains all sampled data points.
« Last Edit: May 17, 2013, 09:20:30 am by Wuerstchenhund »
Brexit n - The undefined being negotiated by the unprepared in order to get the unspecified for the uninformed.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #34 on: May 17, 2013, 10:33:12 am »
I guess there's a major misunderstanding, as it seems you believe that "throwing out" refers to the sample memory.

Huh??!??!?? I never thought or wrote any such thing - but you did. Wuerstchenhund, I appreciate your posts, but either you don't remember what you've written - or you just can't seem to admit when you've obviously made a mistake.

Once again, I wrote:
Peak-to-Peak is just compression from the sample data to the display data in Normal mode: all samples gathered within the period of time represented by one screen pixel are reduced to a single vertical line of pixels. Nothing is lost from what is sampled - if I stop the DSO and 'zoom' in, all of the original sampled data should still be visible at its correct position in time.

To which you responded:
Try it, you will find that it's gone.

This is both wrong AND clearly shows that you (at least initially) thought Peak-to-Peak resulted in data loss, despite my earlier attempts to explain otherwise. Anyway, this is now just silly; I'll leave it to other readers here to work out who thought what and when. I made the initial post about the different types of display decimation, and I've known from the start that neither one affected sample memory.

Quote
LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates that are the result of the amount of processing this technique requires.

Well, I'm afraid without ANY information available anywhere to suggest that this real-time 'binning' decimation to the display either exists - or - that LeCroy is using something like it - I will have to assume it's a "belief" of yours about their DSOs.  ;)
« Last Edit: May 17, 2013, 11:40:23 am by marmad »
 

Offline Yaksaredabomb

  • Regular Contributor
  • *
  • Posts: 124
  • Country: us
Re: DSO sampling rate at slow time bases
« Reply #35 on: May 17, 2013, 12:21:55 pm »
But if all data points are being mapped to the same vertical line of pixels (which they must be) - and if the minimum and maximum data points are connected by a vertical line - how are possible data points in-between visibly lost? Where would they be shown otherwise[emphasis added]?
The thing is that Peak-to-Peak only considers the Extrema, and throws away the points in between. They are lost. ....
I'm not sure you really answered Marmad's question [emphasis added].  I understand what information is lost with peak detect, but not how "binning" would show anything additional [emphasis added].  There is only 1 pixel column with which to represent all the samples.
I suppose intensity grading could be used. ....
I believe I did [emphasis added]. It shows that with what Yokogawa calls 'Peak-to-Peak' you lose the time resolution (so even if the line did contain all data points in the specific sample set, the weighting would still be lost).
It appears to me that you have not answered Marmad's question, or at least, not entirely.  He asked where the in-between data points would be shown, if not in that "minimum and maximum data points...connected by a [single] vertical line".  You only responded talking about the points Peak-to-Peak loses, not how binning displays more data.

Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as for example used by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated [emphasis added]: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.
The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps [emphasis added]?
No, unfortunately not. There is very little written info (not only from LeCroy) about this topic [emphasis added].

Here you acknowledged intensity grading is one way to show more information, but said binning decimation is "more complicated".  I take this to mean binning doesn't use intensity grading, or at least, doesn't only use intensity grading.  Marmad asked for more info - still not understanding how binning "ensures that any data point ends up on the display and no information is lost" - and you admitted you could not explain it (saying there is "very little written info" and providing no links).

Interestingly, you went on to reiterate your claims about binning.  It "will let you see all glitches" but causes "dreadfully slow waveform update rates".  With still no links, though, and only your unsupported assertions, of course Marmad can't help but continue to be skeptical:

LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates that are the result of the amount of processing this technique requires.
Well, I'm afraid without ANY information available anywhere to suggest that this real-time 'binning' decimation to the display either exists - or - that LeCroy is using something like it - I will have to assume it's a "belief" of yours about their DSOs.  ;)
You could be 100% correct that this "third method" "really ensures that any data point ends up on the display and no information is lost."  I admit I'm pretty curious about what it is and how it works.  It sure is odd though that references to such an advantageous feature don't appear to be more readily available.
 
Edit: Added emphasis for clarity
« Last Edit: May 17, 2013, 12:25:28 pm by jneumann »
My display name changed June 6th from "jneumann" to "Yaksaredabomb"
 

