Author Topic: DSO sampling rate at slow time bases  (Read 14590 times)


Offline jpbTopic starter

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Re: DSO sampling rate at slow time bases
« Reply #25 on: May 16, 2013, 05:40:59 pm »
Quote
OK, I did some experimentation using an AWG file of three consecutive short pulses of successively higher amplitude, with the DSO triggering on the second of the three pulses.

It appears the Rigol DS2000 does peak-to-peak conversion from sample to display memory in Normal mode, since if it only displayed every 20th point (using decimation), you wouldn't see the amplitude of the third pulse when looking at the display at the 100ms/div setting.

That is interesting, thank you.

Of course, you can't really differentiate between what you refer to as peak-to-peak and simply plotting all the points (with many of them ending up in the same pixel column), since the two would look the same given that no intensity gradient is displayed. Given that the coding is much simpler, I suspect it is just a direct plotting of every sample point. One way to tell might be whether the waveforms per second drop with the higher memory depth - but then again, that would happen even if it were using a more intelligent peak-to-peak calculation.
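To make that concrete, here is a minimal Python sketch (purely illustrative - not the DS2000's actual firmware) of the two reduction schemes. Assuming the display joins consecutive dots in a column with vertical vector segments, both light exactly the same pixels once there is no intensity grading:

Code:
def column_plot_all(samples):
    """Plot every sample, joining consecutive dots with vertical segments."""
    lit = set(samples)
    for a, b in zip(samples, samples[1:]):
        lo, hi = sorted((a, b))
        lit.update(range(lo, hi + 1))
    return lit

def column_peak_to_peak(samples):
    """Reduce the whole chunk to one solid min..max line."""
    return set(range(min(samples), max(samples) + 1))

chunk = [3, 3, 4, 9, 4, 3, 3]          # one pixel column's worth of samples
assert column_plot_all(chunk) == column_peak_to_peak(chunk)
print(sorted(column_plot_all(chunk)))  # [3, 4, 5, 6, 7, 8, 9]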
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 646
  • Country: be
Re: DSO sampling rate at slow time bases
« Reply #26 on: May 16, 2013, 05:43:30 pm »
...
On the next screenshot you see the same, but I reduced the pulse width to 1µs; this is below the sample period, and you can see it starting to show wrong measurements, as expected.
Edit: the interruption in the noise line is the update position of the scope.
Presumably this was in normal sampling mode rather than peak mode? In peak mode, I'd expect it to show all the spikes even at the lower memory depth?

Correct, this was without peak-detect.
Below you see a more extreme screenshot with peak-detect ('PD: Refresh') on: a 1µs pulse at 500ms/div and 40kS/s (i.e. one sample every 25µs), and all the 1µs peaks are shown. (You also see the maximum zoom of 50µs/div.)
Even in zoom, it shows a single 1µs pulse as 50µs wide.
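For clarity, here is a hedged sketch (illustrative only, not Rigol's actual implementation) of what Peak Detect Acquisition does: the ADC still runs fast, but only the min and max of each detect interval are written to sample memory, so a 1µs pulse inside a 25µs interval survives, while its apparent width grows to the interval length - consistent with the zoomed pulse looking far wider than 1µs:

Code:
def peak_detect_acquire(fast_samples, per_interval):
    """Keep a (min, max) pair per detect interval of the fast ADC stream."""
    kept = []
    for i in range(0, len(fast_samples), per_interval):
        chunk = fast_samples[i:i + per_interval]
        kept.extend((min(chunk), max(chunk)))
    return kept

stream = [0] * 100
stream[42] = 5                          # one narrow pulse
print(peak_detect_acquire(stream, 25))  # [0, 0, 0, 5, 0, 0, 0, 0]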
« Last Edit: May 16, 2013, 06:48:04 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #27 on: May 16, 2013, 06:03:54 pm »
Of course, you can't really differentiate between what you refer to as peak-to-peak and simply plotting all the points (with many of them ending up in the same pixel column), since the two would look the same given that no intensity gradient is displayed. Given that the coding is much simpler, I suspect it is just a direct plotting of every sample point.

Yes, the outcome would be the same with either peak-to-peak plotting or plotting all points, but I don't think it's a coding issue - it's a time issue. With a 14k sample length, it's only 20 sample points per pixel - but with a 14M sample length, it's 20,000 sample points per pixel. So I think, with certain sample lengths, plotting every point to display memory would be slower than just scanning each chunk for its extrema.
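The arithmetic, assuming the DS2000's roughly 700-pixel-wide waveform area:

Code:
for depth in (14_000, 14_000_000):
    print(f"{depth:>10,} samples -> {depth // 700:>6,} samples per pixel column")
#     14,000 samples ->     20 samples per pixel column
# 14,000,000 samples -> 20,000 samples per pixel column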
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: DSO sampling rate at slow time bases
« Reply #28 on: May 16, 2013, 06:20:57 pm »
Quote
I'm not sure you really answered Marmad's question.


I believe I did. It shows that with what Yokogawa calls 'Peak-to-Peak' you lose the time resolution (so even if the line contained all data points in the specific sample set, the weighting would still be lost).

Quote
I understand what information is lost with peak detect, but not how "binning" would show anything additional. There is only 1 pixel column with which to represent all the samples.

I suppose intensity grading could be used. For instance, for the data (-4,-4,-2,-2,4,4,4), peak detect would show a plain vertical pixel line from -4 to 4. But for peak detect with grading, the line would be brighter at -4, -2, and 4 than at the points in between (with the line being brightest at 4).

That's just something I came up with off the top of my head - no clue if any scopes really do this or if it would be of any real benefit. Just trying to brainstorm how you can display more information in a single pixel width.

Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as used for example by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.
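As a sketch of that brainstormed grading scheme (hypothetical - no claim that any scope, LeCroy's included, does exactly this), brightness per pixel row could be made proportional to the number of samples that landed on it, with rows only crossed by the min-to-max line given a dim baseline:

Code:
from collections import Counter

def graded_column(samples, dim=1, gain=2):
    """Brightness per pixel row: dim baseline plus a boost per sample hit."""
    counts = Counter(samples)
    lo, hi = min(samples), max(samples)
    return {row: dim + gain * counts.get(row, 0) for row in range(lo, hi + 1)}

col = graded_column([-4, -4, -2, -2, 4, 4, 4])
print(col)  # {-4: 5, -3: 1, -2: 5, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 7}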
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #29 on: May 16, 2013, 06:45:00 pm »
Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as used for example by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.

The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps?
« Last Edit: May 16, 2013, 06:50:12 pm by marmad »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: DSO sampling rate at slow time bases
« Reply #30 on: May 16, 2013, 06:50:26 pm »
Try it, you will find that it's gone. And even if Peak-to-Peak worked as you believe it does, you would still lose the time information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Of course it's there! We seem to be having a misunderstanding here: a mix-up between how a DSO converts its sample memory to its display memory in Normal mode - and the other acquisition modes; these are two distinct things.

No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

Quote
If my sample length is 14k @ 50ms/div, the sample rate of my DSO is 20kSa/s - or 50us per sample. OTOH, my waveform display area is 700 pixels wide - which means that each pixel is equivalent to 20 samples - which can be reduced by either decimation OR peak-to-peak

BTW: all these methods are called 'Decimation', because that is what they do. What Yokogawa labels 'Decimation' on their marketing sheet is actually '1-n Decimation'; then there is Peak-to-Peak Decimation (also called 'Peak Detect Decimation', again not the same as Peak Detect Acquisition Mode), Decimation by Binning, Decimation by Resampling, root mean square (RMS) Decimation, and a few others. What they all have in common is that they reduce the amount of data.
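For the record, here are toy versions (terminology as used in this thread; real firmware will differ) of three of those flavours, each reducing one chunk of samples to what a single pixel column will show:

Code:
import math

def one_n_decimate(chunk):           # '1-n Decimation': keep every n-th point
    return chunk[0]                  # here, simply the first sample of the chunk

def peak_to_peak_decimate(chunk):    # 'Peak-to-Peak Decimation': keep the extrema
    return (min(chunk), max(chunk))

def rms_decimate(chunk):             # 'root mean square Decimation'
    return math.sqrt(sum(s * s for s in chunk) / len(chunk))

chunk = [0, 0, 7, 0, -1, 0]          # a narrow spike hiding inside the chunk
print(one_n_decimate(chunk))         # 0       - the spike is invisible
print(peak_to_peak_decimate(chunk))  # (-1, 7) - the spike survives
print(round(rms_decimate(chunk), 2)) # 2.89    - energy is kept, shape is not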

Quote
- binning makes no sense in this regard (except perhaps if taking a third dimension [grading] into consideration). But if I stop my DSO, I can zoom and SEE the samples at 50us spacing - so nothing has been lost.

It's indeed possible that you see the data points when zooming. Most scopes do Decimation not only during the acquisition cycle but every time the screen content changes, i.e. when you zoom. Decimation can happen during any phase, and when and what is done depends on the specific design of the scope. Unfortunately it's very hard to find any details about this even for scopes from the big names, and I guess it's even worse for the Chinese brands.

Quote
Now I have no knowledge of which technique my Rigol uses for reducing sample data to display data - but I suspect, due to its low cost, that it's likely decimation.

It's probably either 1-n Decimation or Peak-to-Peak Decimation, as these are the methods that require the least processing power.

Quote
But I'll have to think up an experiment to try to capture it.

Taking your above example of a 20kSa/s sample rate (50us sample period), I'd feed the scope a series (e.g. ten) of 50-100us pulses with varying amplitude (e.g. a staircase or alternating) in single-shot mode, and then try to find every pulse in your non-zoomed image. Depending on the method your scope uses for Decimation, a few or most of them will be lost.
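A quick simulation of that experiment (parameters illustrative, not taken from any real scope) shows why the Decimation method matters:

Code:
import random

FS_SAMPLES = 14_000          # sample memory at 20kSa/s (50us per sample)
PER_PIXEL = 20               # 700 pixel columns on screen

samples = [0.0] * FS_SAMPLES
for k in range(10):                        # ten ~50us pulses, rising amplitude
    samples[random.randrange(FS_SAMPLES)] = 1.0 + k

chunks = [samples[i:i + PER_PIXEL] for i in range(0, FS_SAMPLES, PER_PIXEL)]
seen_1n = sum(1 for c in chunks if c[0] > 0)    # 1-n: first sample only
seen_pp = sum(1 for c in chunks if max(c) > 0)  # peak-to-peak: extrema
print(f"1-n Decimation shows {seen_1n} of 10 pulses")  # typically 0 or 1
print(f"Peak-to-Peak shows   {seen_pp} of 10 pulses")  # all 10, barring two
                                                       # pulses sharing a column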
« Last Edit: May 17, 2013, 09:22:21 am by Wuerstchenhund »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #31 on: May 16, 2013, 07:24:23 pm »
Try it, you will find that it's gone.

Sorry, man, but you were absolutely wrong with what you wrote here.  ;)

Quote
And even if Peak-to-Peak worked as you believe it does, you would still lose the time information as to where the individual data points had been located in time. But this information would be required to 'restore' the full set of information.

Yes, peak-to-peak works exactly as I believed it did - and how I wrote about it in previous posts. I never claimed no information was lost - information can ALWAYS be lost (that is, with regard to how it's currently displayed - not the actual samples) - there is only so much voltage/time information that can be conveyed by a single pixel, even if you include intensity levels. And the initial discussion here involved, as far as I was concerned, the idea of compressing a single waveform capture of N samples to X pixels without using intensity levels.

Quote
No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

But then why would you have assumed my DSO would throw out captured samples? I.e. the first quoted sentence above.

Quote
It's indeed possible that you see the data points when zooming. Most scopes do Decimation not only during the acquisition cycle but every time the screen content changes, i.e. when you zoom. Decimation can happen during any phase, and when and what is done depends on the specific design of the scope. Unfortunately it's very hard to find any details about this even for scopes from the big names, and I guess it's even worse for the Chinese brands.

Sure - but again: there's a big difference between decimating for sample memory - and decimating for display memory. The latter only affects what I see - not the actual, captured data.
« Last Edit: May 16, 2013, 08:27:00 pm by marmad »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: DSO sampling rate at slow time bases
« Reply #32 on: May 17, 2013, 08:54:45 am »
The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps?

No, unfortunately not. There is very little written info (not only from LeCroy) about this topic.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: DSO sampling rate at slow time bases
« Reply #33 on: May 17, 2013, 09:15:46 am »
No kiddin'  ;) I assume that we're talking about normal acquisition modes and not about Peak Detect Acquisition Mode (which as you correctly say is something different).

But then why would you have assumed my DSO would throw out captured samples? I.e. the first quoted sentence above.

I guess there's a major misunderstanding, as it seems you believe that "throwing out" refers to the sample memory. It doesn't. Decimation (when used for reducing the number of data points for a display with fewer pixels, which is what we're discussing) does not affect the sample memory; it works on the display buffer only. Even 1-n Decimation does not remove any captured data from the sample memory - it just ignores ('throws away') samples when creating the screen map.

And this is also the reason why, on many better scopes, you will still see all data points when zooming in: the zoomed waveform is created from the sample memory data, goes through a new Decimation process, and then ends up on your screen. And as you zoom into a waveform (i.e. enlarge a smaller section), fewer sample points have to go into each Decimation round, which means more data will find its way to your screen.
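In other words (assumed behaviour, following the description above), each zoom step re-runs Decimation over fewer samples per pixel column:

Code:
def samples_per_column(total_samples, zoom_factor, columns=700):
    """Samples each pixel column must swallow for a given zoom window."""
    visible = total_samples // zoom_factor   # part of memory shown on screen
    return max(1, visible // columns)

for zoom in (1, 10, 100, 1000):
    print(f"zoom x{zoom:>4}: {samples_per_column(14_000_000, zoom):>6} samples per column")
# zoom x1: 20000, x10: 2000, x100: 200, x1000: 20 samples per column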

Quote
Sure - but again: there's a big difference between decimating for sample memory - and decimating for display memory. The latter only affects what I see - not the actual, captured data.

Decimation (for the purpose we're discussing here) never decimates data in the sample memory, only the display buffer.

This is also the reason why the method of Decimation does not affect your scope's general ability to capture small glitches. However, it does affect your ability to see those glitches in normal operation (without zooming). With 1-n Decimation, for example, chances are good that you will miss many or most short glitches unless you trigger or zoom in on the affected section (but this would require you to know that there actually is a glitch, which you won't, as the waveform looks normal on your scope). Peak-to-Peak Decimation increases your chances of seeing glitches you would miss with 1-n Decimation. LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates, which are the result of the amount of processing this technique requires.

But no matter what Decimation technique is used, your sample memory retains all sampled data points.
« Last Edit: May 17, 2013, 09:20:30 am by Wuerstchenhund »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: DSO sampling rate at slow time bases
« Reply #34 on: May 17, 2013, 10:33:12 am »
I guess there's a major misunderstanding, as it seems you believe that "throwing out" refers to the sample memory.

Huh??!??!?? I never thought or wrote any such thing - but you did. Wuerstchenhund, I appreciate your posts, but either you don't remember what you've written - or you just can't seem to admit when you've obviously made a mistake.

Once again, I wrote:
Peak-to-Peak is just compression from the sample data to the display data in Normal mode: all samples gathered within the period of time represented by one screen pixel are reduced to a single vertical line of pixels. Nothing is lost from what is sampled - if I stop the DSO and 'zoom' in, all of the original sampled data should still be visible at its correct position in time.

To which you responded:
Try it, you will find that it's gone.

This is both wrong AND clearly shows that you (at least initially) thought Peak-to-Peak resulted in data loss, despite my earlier attempts to explain otherwise. Anyway, this is now just silly; I'll leave it to other readers here to work out who thought what and when. I made the initial post about the different types of display decimation, and I've known from the start that neither one affected sample memory.

Quote
LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates, which are the result of the amount of processing this technique requires.

Well, I'm afraid that without ANY information available anywhere suggesting that this real-time 'binning' decimation to the display exists - or that LeCroy uses something like it - I will have to assume it's a "belief" of yours about their DSOs.  ;)
« Last Edit: May 17, 2013, 11:40:23 am by marmad »
 

Offline Yaksaredabomb

  • Regular Contributor
  • *
  • Posts: 124
  • Country: us
Re: DSO sampling rate at slow time bases
« Reply #35 on: May 17, 2013, 12:21:55 pm »
But if all data points are being mapped to the same vertical line of pixels (which they must be) - and if the minimum and maximum data points are connected by a vertical line - how are possible data points in-between visibly lost? Where would they be shown otherwise [emphasis added]?
The thing is that Peak-to-Peak only considers the extrema and throws away the points in between. They are lost. ....
I'm not sure you really answered Marmad's question [emphasis added].  I understand what information is lost with peak detect, but not how "binning" would show anything additional [emphasis added].  There is only 1 pixel column with which to represent all the samples.
I suppose intensity grading could be used. ....
I believe I did [emphasis added]. It shows that with what Yokogawa calls 'Peak-to-Peak' you lose the time resolution (so even if the line contained all data points in the specific sample set, the weighting would still be lost).
It appears to me that you have not answered Marmad's question, or at least, not entirely.  He asked where the in-between data points would be shown, if not in that "minimum and maximum data points...connected by a [single] vertical line".  You only responded talking about the points Peak-to-Peak loses, not how binning displays more data.

Exactly, intensity grading is one way (and the predominant method) to store the weighting. But Binning Decimation, as used for example by the larger LeCroy scopes (WaveRunner LT/WavePro/9300/LC), is a little more complicated [emphasis added]: it doesn't just bin data points, it uses a set of algorithms to modify the binning process.
The only LeCroy documentation I can find describing 'binning' is in reference to their Math function of creating histograms - not quite what we've been discussing here. Do you have a link, perhaps [emphasis added]?
No, unfortunately not. There is very little written info (not only from LeCroy) about this topic [emphasis added].

Here you acknowledged intensity grading is one way to show more information, but said binning decimation is "more complicated".  I take this to mean binning doesn't use intensity grading, or at least, doesn't only use intensity grading.  Marmad asked for more info - still not understanding how binning "ensures that any data point ends up on the display and no information is lost" - and you admitted you could not explain it (saying there is "very little written info" and providing no links).

Interestingly, you went on to reiterate your claims about binning: it "will let you see all glitches" but causes "dreadfully slow waveform update rates". Still with no links, though, and only your unsupported assertions, so of course Marmad can't help but continue to be skeptical:

LeCroy's Binning will let you see all glitches (within the limits set by sample rate etc., of course), albeit at dreadfully slow waveform update rates, which are the result of the amount of processing this technique requires.
Well, I'm afraid that without ANY information available anywhere suggesting that this real-time 'binning' decimation to the display exists - or that LeCroy uses something like it - I will have to assume it's a "belief" of yours about their DSOs.  ;)
You could be 100% correct that this "third method" "really ensures that any data point ends up on the display and no information is lost."  I admit I'm pretty curious about what it is and how it works.  It sure is odd though that references to such an advantageous feature don't appear to be more readily available.
 
Edit: Added emphasis for clarity
« Last Edit: May 17, 2013, 12:25:28 pm by jneumann »
My display name changed June 6th from "jneumann" to "Yaksaredabomb"
 

