Author Topic: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)  (Read 9366 times)


Offline CarlG

  • Regular Contributor
  • *
  • Posts: 153
  • Country: se
I agree with Dave (in video @6m45s) that the third most important (basic) parameter for DSOs is the memory depth, but as always there's a catch (or more). Don't just look at the memory depth specification!

Large memory depth makes it possible to sustain the effective sample rate at slow time base settings. Ideally, a 1GSa/s DSO with 1Mpts memory (per channel) would retain its 1GSa/s from the minimum time base setting up to 100us/div, given a 10 div screen width (10x 100us x 1GSa/s = 1 MSa), and would thus be reduced to 1MSa/s at 100 ms/div, and so on.
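The arithmetic above can be sketched as a small calculation (the 1 GSa/s digitizer, 1 Mpts memory, and 10-div screen are the hypothetical figures from this post, not any particular scope's spec):

```python
# Effective sample rate is capped by both the digitizer's maximum rate and the
# record length needed to cover the visible time window (assumed 10-div screen).

def effective_sample_rate(timebase_s_per_div, max_rate_sa_s=1e9,
                          memory_pts=1e6, divs=10):
    """Best sustainable sample rate for one full-screen capture."""
    window_s = timebase_s_per_div * divs          # visible time span
    memory_limited_rate = memory_pts / window_s   # rate at which memory just fills
    return min(max_rate_sa_s, memory_limited_rate)

for tb in (100e-9, 100e-6, 1e-3, 100e-3):
    print(f"{tb:>8.0e} s/div -> {effective_sample_rate(tb):.3g} Sa/s")
# The break point lands at 100 us/div; at 100 ms/div only 1 MSa/s remains.
```

The `min()` is the whole story: below the break point the digitizer limits the rate, above it the memory does.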

However, this is not necessarily the case. It depends on the design of the scope, and it may begin to reduce the sample rate for instance at 10 us/div (#1). So, don't just look at the memory depth specification to decide which scope you should choose; be sure to test the scope and find out the break point yourself.

Also, note that a scope may not always use all its available memory in real time mode! The specified memory depth may apply to single shot mode only (#2).

I use the term effective sample rate here, since in most DSOs the digitizer probably runs at full speed (as jahonen observed on the Agilent MSO6034A). This is necessary for the glitch capture/trigger function.

I think that an "Effective sample rate" parameter could be an interesting figure in a scope specification. This parameter would reflect the sample rate used for displaying the data (typically Normal mode; not Average, etc, etc). That would also reveal weaknesses e.g. reduced (effective) sample rate in zoom mode. Or the other way round; to emphasize that the zoom mode utilizes the digitizer's max speed, as also shown by jahonen for the MSO6034A in the same post as above.

I think the importance of the memory depth must rest on the assumption that "everything else is equal", which often is not true. Large depth may not be compatible with high capture rate (#3), which also is a very important parameter.

 #1: e.g. Agilent 1000-series (only 10kpts/ch but the principle applies). I had a look at it once and was truly disappointed...but then I didn't know it actually was a Rigol design ;)
 #2: e.g. Agilent 6000/7000-series. It uses only half the memory in real time mode (at least with the 8Meg option; I haven't checked the initial 2M standard depth)
 #3: e.g. Rigol 2072

//C
« Last Edit: December 31, 2012, 01:36:23 am by CarlG »
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 29701
  • Country: au
    • EEVblog
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #1 on: December 31, 2012, 03:58:57 am »
However, this is not necessarily the case. It depends on the design of the scope, and it may begin to reduce the sample rate for instance at 10 us/div (#1). So, don't just look at the memory depth specification to decide which scope you should choose; be sure to test the scope and find out the break point yourself.

Any reputable scope will list any memory depth changes based on timebase in the specs.

Dave.
 

Offline CarlG

  • Regular Contributor
  • *
  • Posts: 153
  • Country: se
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #2 on: December 31, 2012, 11:08:53 am »
However, this is not necessarily the case. It depends on the design of the scope, and it may begin to reduce the sample rate for instance at 10 us/div (#1). So, don't just look at the memory depth specification to decide which scope you should choose; be sure to test the scope and find out the break point yourself.

Any reputable scope will list any memory depth changes based on timebase in the specs.

Dave.

I'm not sure I get your point... granted, your statement is correct, but the decrease in sample rate does not, as you know, come from a changed memory depth; it's due to the changed time base. At timebases where the memory depth sets the sample rate limit, the memory depth is expected to stay the same when the timebase setting is changed.

Still, what's really interesting is at what timebase the sample rate drops to a level where the scope's real time performance is affected. Or, when comparing scopes, what sample rate you actually get at specific timebase settings.

//C
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #3 on: January 15, 2013, 05:37:44 am »
I think the importance of the memory depth must rest on the assumption that "everything else is equal", which often is not true. Large depth may not be compatible with high capture rate (#3), which also is a very important parameter.

 #3: e.g. Rigol 2072
You're not quite getting my table you linked to.

EVERY DSO that has large memory depths must drop its update rate when using it. Why? Well, there is a slight barrier in the way which is known as TIME. At 2 GSa/s (the fastest sample rate of the Rigol) it takes 28ms to fill 56MB of memory. Why does the Rigol do 35 wfrms/s when using 56MB? Guess what 28ms * 35 equals?
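As a sanity check on that arithmetic (using only the figures quoted in the post):

```python
# Time to fill deep memory puts a hard ceiling on the waveform update rate.
depth_pts = 56e6    # 56 Mpts record
rate_sa_s = 2e9     # 2 GSa/s digitizer

fill_time = depth_pts / rate_sa_s   # 0.028 s per acquisition
max_wfms = 1.0 / fill_time          # upper bound, ignoring any processing dead time
print(f"fill time {fill_time*1e3:.0f} ms -> at most {max_wfms:.1f} wfms/s")
# 35 wfms/s * 28 ms ~= 0.98 s: the scope spends nearly all its time acquiring.
```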

Using deep memory in continuous mode is the process of trading detail for depth - when you segment it, you can have a little bit of the best of both worlds.
« Last Edit: January 15, 2013, 06:09:59 am by marmad »
 

Offline CarlG

  • Regular Contributor
  • *
  • Posts: 153
  • Country: se
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #4 on: January 15, 2013, 09:35:27 pm »
I think the importance of the memory depth must rest on the assumption that "everything else is equal", which often is not true. Large depth may not be compatible with high capture rate (#3), which also is a very important parameter.

 #3: e.g. Rigol 2072
You're not quite getting my table you linked to.

EVERY DSO that has large memory depths must drop its update rate when using it. Why? Well, there is a slight barrier in the way which is known as TIME. At 2 GSa/s (the fastest sample rate of the Rigol) it takes 28ms to fill 56MB of memory. Why does the Rigol do 35 wfrms/s when using 56MB? Guess what 28ms * 35 equals?

Using deep memory in continuous mode is the process of trading detail for depth - when you segment it, you can have a little bit of the best of both worlds.
The key here is "when using it", i.e. using large memory depth.

Not EVERY scope is bound to use all the available memory. But if the scope is designed to always fill up all available (possibly user-selected) memory, of course it will affect the update rate.

For continuous mode (which I assume equals what I call real time mode) I don't see any point in acquiring data for any longer than it takes to fill the screen. If I have 2 GSa/s, timebase 1us/div, and 10 div screen width, I want the scope to capture 20kpts, no more, and then retrigger ASAP, regardless of how much memory is available. Sort of "if I can't see it, I'm not interested". (In single shot mode it may very well fill up all available memory, even though the same timebase setting is used.)
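The gap between the two strategies is easy to put a number on (same hypothetical figures as above: 2 GSa/s, 1 us/div, 10 divs, 56 Mpts):

```python
# Capturing only what the screen shows vs. insisting on filling deep memory.
rate = 2e9                       # 2 GSa/s
screen_pts = rate * 1e-6 * 10    # 20,000 points covers the visible 10 us window

screen_time = screen_pts / rate  # 10 us of acquisition per trigger
deep_time = 56e6 / rate          # 28 ms per trigger if full memory is always used
print(f"screen-only: {screen_time*1e6:.0f} us/acq, full memory: {deep_time*1e3:.0f} ms/acq")
# A factor of 2800 in acquisition time per trigger, before any processing overhead.
```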

I'm referring to the DS2072's update rates merely as an example of how the memory depth may affect the update rate (at a certain timebase), if the scope is designed to use all the available memory (or more than needed to fill the screen). It sure looks that way going from 14kpts to 56Mpts on the DS2072.

Just curious: were you tired when writing your comment?
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #5 on: January 15, 2013, 10:20:04 pm »
Not EVERY scope is bound to use all the available memory. But if the scope is designed to always fill up all available (possibly user-selected) memory, of course it will affect the update rate.

You are, in essence, re-writing what you wrote before. You wrote "Large depth may not be compatible with high capture rate". Large depth IS NOT compatible with high capture rate - period - you don't need a table to see that - only an understanding of math and time. If, when you wrote it, you meant something to the effect of, "Large depth may not be compatible with high capture rate [if you use the large depth]", then I'll just chalk it up to imperfect usage of a language which is not your native tongue.

Quote
I don't see any point in acquiring data for any longer than it takes to fill the screen. If I have 2 GSa/s, timebase 1us/div, and 10 div screen width, I want the scope to capture 20kpts, no more, and then retrigger ASAP, regardless of how much memory is available. Sort of "if I can't see it, I'm not interested".

Well, regardless of what you do or don't see the point in, other people sometimes find it useful to capture larger amounts of data than what is visible on the tiny display, for a variety of reasons. In any case, that is why these options are user-selectable.

Just curious: were you tired when writing your comment?

Not at all - quite the contrary. I immediately spotted the bad logic - or as it appears now, in light of your explanation, a poorly written or expressed point.
« Last Edit: January 15, 2013, 10:22:01 pm by marmad »
 

Offline Architect_1077

  • Regular Contributor
  • *
  • Posts: 150
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #6 on: March 22, 2013, 01:35:26 pm »
So, in trying to understand oscilloscope memory depth vs performance... how exactly is the Agilent 2000X series with "only" 100kpts better than the Rigol 2000 series with 14Mpts (or even 56Mpts)? Wouldn't the reduced amount of memory actually hinder it a bit compared to the Rigol?
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #7 on: March 23, 2013, 07:37:08 pm »
So, in trying to understand oscilloscope memory depth vs performance... how exactly is the Agilent 2000X series with "only" 100kpts better than the Rigol 2000 series with 14Mpts (or even 56Mpts)? Wouldn't the reduced amount of memory actually hinder it a bit compared to the Rigol?

It's not necessarily 'better', but much of the time you're using only a small portion of the memory - so having more is irrelevant during those uses. And the Agilent X2000 series now offers memory upgrades as well.
 

Offline Architect_1077

  • Regular Contributor
  • *
  • Posts: 150
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #8 on: March 24, 2013, 09:45:01 pm »
Quote
It's not necessarily 'better', but much of the time you're using only a small portion of the memory - so having more is irrelevant during those uses. And the Agilent X2000 series now offers memory upgrades as well.

OK, gotcha. Also, having more memory but not having the hardware to perform smoothly with that memory is nearly the same as not having it to begin with, I guess?
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Comments on digital oscilloscope memory depth (EEVblog #13 part 1/2)
« Reply #9 on: March 24, 2013, 10:08:29 pm »
OK, gotcha. Also, having more memory but not having the hardware to perform smoothly with that memory is nearly the same as not having it to begin with, I guess?

Well, let's distinguish between the three basic ways of using memory.

1) Normal (Continual trigger) - examining the waveform real-time. This usually involves a smaller portion of the total available memory - although on some DSOs you can be using the entire memory to sample into continuously while using a Delayed Sweep (Zoom) to examine smaller portions of the waveform. Some lower cost DSOs become a bit unresponsive when trying to use all (or large portions) of the memory continually at certain timebases - or don't allow it.

2) Single (Single trigger) - examining the captured waveform non-real time. This usually involves using the total memory since you capture the most time post-trigger. I'm fairly certain that all DSOs handle this usage of large memory fine.

3) Segmented (A combination of single + continual) - examining a group of captured waveforms non-real time. The memory is divided into segments which are then used to store single-trigger captures one by one. This is a reasonably rare feature in lower cost DSOs, so if it has it, you can be fairly certain it performs correctly.
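The segmented case (3) is also just bookkeeping over the same memory; a rough sketch, with an entirely hypothetical segment size of one screen's worth of points:

```python
# Segmented memory: the record is split into fixed-size segments, each holding
# one full-sample-rate single-shot capture (numbers are hypothetical examples).
total_mem_pts = 56e6
segment_pts = 14e3                          # one screen's worth per segment

n_segments = int(total_mem_pts // segment_pts)
print(f"{n_segments} segments of {segment_pts:.0f} pts each")
# Each segment keeps full-rate detail while the set spans many triggers, so the
# dead time between triggers is only the short per-segment fill plus rearm time.
```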
 

