There are certainly applications where a long record length is required but they are in the minority.
Not really. For example, just watch how quickly the sample rate (and thereby the usable BW) drops on scopes with small sample memories when you extend the timebase. A scope with deep memory can sustain a high sample rate even at long timebase settings.
But not at long delay settings, as we discovered with the Rigol DS1000Z. (1) In that case, long record lengths are great as long as what you want to see lies within them; if it does not, the sample rate has to be decreased anyway.
Oscilloscopes with short acquisition memories use features like peak detection and delayed acquisition (sweep) to apply their maximum sample rate exactly where the user wants.
More processing power was also required to allow deep acquisition memories, but both were the result of increased integration, and processing power has fallen behind, making very deep acquisition memories *less* useful in a general sense.
Maybe high-end DSOs avoid this problem, but my experience with the DPO/MSO5000 series is that they do not; using long record lengths results in waiting for the processing of each record, which is fine for single-shot applications (where long record lengths are especially useful) but is aggravatingly slow otherwise.
This processing power problem with long record lengths is not new.
(1) The Rigol DS1000Z series brings up another question: exactly what is the record length of a DS1000Z? Measurements are only made upon the display record, which is 600 or 1200 points long, yet the specifications say 3 Mpoints/channel. Shouldn't they say something like 600 or 1200 points operating in real time and 3 Mpoints/channel when stopped? How many other DSOs which make measurements on the display record are like this?
Quote: More processing power was also required to allow deep acquisition memories, but both were the result of increased integration, and processing power has fallen behind, making very deep acquisition memories *less* useful in a general sense. Maybe high-end DSOs avoid this problem, but my experience with the DPO/MSO5000 series is that they do not; using long record lengths results in waiting for the processing of each record, which is fine for single-shot applications (where long record lengths are especially useful) but is aggravatingly slow otherwise.
This is kind of a typical Tektronix problem which cannot be extrapolated to oscilloscopes in general. Besides that, there are several affordable scopes on the market which have enough processing power to deal with tens of Mpts quickly.
I have played with other DSOs and I have yet to find *one* where "quickly" was quick enough. See above about display record length.
Which ones (list some make/models)?
Well, I have no idea if the DS1000Z, a bottom-of-the-barrel scope whose biggest feature is being cheap, does something different here, but on a decent scope with "deep memory" the sample rate drops a lot later than on a scope with just a few thousand points of memory.
Quote: Which ones (list some make/models)?
I do not keep an itemized list (and do not get enough opportunities to test DSOs); the DPO/MSO5000 series were the only memorable ones, and no LeCroys were among them.
Often you can tell from a review video that something weird and unspecified is going on.
Quote: Well, I have no idea if the DS1000Z, a bottom-of-the-barrel scope whose biggest feature is being cheap, does something different here, but on a decent scope with "deep memory" the sample rate drops a lot later than on a scope with just a few thousand points of memory.
Weren't we talking about affordable DSOs?
After finding out about the display record thing in the DS1000Z, and other people saying that most modern DSOs make display record measurements, I am not sanguine that the statement "the sample rate drops a lot later than on a scope with just a few thousand points of memory" has much meaning.
Let's have a closer look at how both scopes perform at various timebase settings:
Tektronix TDS694C with standard (30k) and "long" (120k) memory:

| Timebase | Sample Rate (std) | fsample/2 limit (std) | Sample Rate ("long") | fsample/2 limit ("long") |
| 10ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 20ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 30ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 50ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 100ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 200ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 300ns/div | 10GS/s | 3GHz (bw limit) | 10GS/s | 3GHz (bw limit) |
| 500ns/div | 5GS/s | 2.5GHz | 10GS/s | 3GHz (bw limit) |
| 1us/div | 2.5GS/s | 1.25GHz | 10GS/s | 3GHz (bw limit) |
| 2us/div | 2.5GS/s | 1.25GHz | 5GS/s | 2.5GHz |
| 3us/div | 1GS/s | 500MHz | 2.5GS/s | 1.25GHz |
| 5us/div | 500MS/s | 250MHz | 2.5GS/s | 1.25GHz |
| 10us/div | 250MS/s | 125MHz | 1GS/s | 500MHz |
| 20us/div | 125MS/s | 62.5MHz | 500MS/s | 250MHz |
| 30us/div | 100MS/s | 50MHz | 250MS/s | 125MHz |
The table clearly shows that the small memory causes the fast 10GS/s sample rate, and with it the usable bandwidth, to drop dramatically beyond 1us/div (long memory) or even 200ns/div (std memory); at 10us/div it is essentially just a 500MHz (long memory) or even just a 125MHz (std memory) scope.
Let's see how the WP960 performs:
LeCroy WavePro 960 quad channel with standard (250k) and long (16M) memory:

| Timebase | Sample Rate (std) | fsample/2 limit (std) | Sample Rate (long) | fsample/2 limit (long) |
| 10ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 20ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 30ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 50ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 100ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 200ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 300ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 500ns/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 1us/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 2us/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 3us/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 5us/div | 4GS/s | 2GHz (bw limit) | 4GS/s | 2GHz (bw limit) |
| 10us/div | 2GS/s | 1GHz | 4GS/s | 2GHz (bw limit) |
| 20us/div | 1GS/s | 500MHz | 4GS/s | 2GHz (bw limit) |
| 30us/div | 1GS/s | 500MHz | 4GS/s | 2GHz (bw limit) |
| 50us/div | 500MS/s | 250MHz | 4GS/s | 2GHz (bw limit) |
| 100us/div | 250MS/s | 125MHz | 4GS/s | 2GHz (bw limit) |
| 200us/div | 125MS/s | 62.5MHz | 4GS/s | 2GHz (bw limit) |
| 300us/div | 50MS/s | 25MHz | 4GS/s | 2GHz (bw limit) |
| 500us/div | 50MS/s | 25MHz | 2GS/s | 1GHz |
| 1ms/div | 25MS/s | 12.5MHz | 1GS/s | 500MHz |
The initial bandwidth of the WP960 is of course lower (2GHz vs 3GHz); however, the WP960 maintains a fast sample rate for much longer than the TDS694C. Even with the reduced sample rate in 4-channel mode, the WP960 with deep memory still captures at full analog bandwidth where a fully spec'd TDS694C captures less than 100MHz. And this performance gap only grows when only two channels or a single channel is needed, as the WP960 can combine its samplers and memory across channels.
This also pretty much shows that a scope's performance can't be judged just by looking at two of the main parameters (analog bandwidth and sample rate). There's a lot more to it.
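The pattern in both tables follows from a simple constraint: the scope cannot sample faster than its memory can hold for the displayed time window. Here is a minimal sketch of that idealized model; the rate ladder and the 10-division screen are assumptions for illustration, not any vendor's exact algorithm:

```python
# Idealized model: a DSO picks the fastest rate from its ladder of available
# sample rates such that rate * capture_window still fits in acquisition memory.
# The ladder and the 10-division screen below are illustrative assumptions.
LADDER = (10e9, 5e9, 2.5e9, 1e9, 500e6, 250e6, 125e6, 100e6)

def effective_sample_rate(depth_pts, time_per_div, divisions=10, ladder=LADDER):
    window = time_per_div * divisions      # displayed capture window in seconds
    for rate in ladder:                    # ladder is sorted fastest first
        if rate * window <= depth_pts:     # does this rate fit in memory?
            return rate
    return ladder[-1]                      # fall back to the slowest rate
```

With a 30k memory this model returns 5GS/s at 500ns/div and 2.5GS/s at 1us/div, matching the TDS694C std-memory column above; real scopes differ in details (discrete rate steps, off-screen acquisition), so not every row reproduces exactly.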
Quote: After finding out about the display record thing in the DS1000Z, and other people saying that most modern DSOs make display record measurements, I am not sanguine that the statement "the sample rate drops a lot later than on a scope with just a few thousand points of memory" has much meaning.
But it does, because with the sample rate your usable BW also drops, and when your sampling BW (fsample/2) drops below the (true) analog BW, any frequency component sitting in between will cause aliasing.
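To make the aliasing point concrete, here is a small sketch of where an undersampled component lands on screen (the function name is mine, not from any scope API):

```python
def aliased_frequency(f_signal, f_sample):
    # An ideal sampler folds every input frequency into the band [0, f_sample/2];
    # a component above Nyquist appears at this (wrong) on-screen frequency.
    f = f_signal % f_sample
    return min(f, f_sample - f)
```

For example, with the TDS694C (std memory) forced down to 250MS/s at 10us/div, a genuine 200MHz signal shows up as a 50MHz alias.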
My point was that the display record processing makes these DSOs operate more like they are limited by the display record length than by the record length given in the specifications, which is only available for saved acquisitions. This is a deliberate tradeoff because they cannot process their full record length in an acceptable time.
The TDS694C (and all of the TDS600 models) is more specialized than the typical DSO of that time and has more in common with transient digitizers than oscilloscopes. It uses CCD sampling to achieve 10GS/s on every channel simultaneously.
Quote: My point was that the display record processing makes these DSOs operate more like they are limited by the display record length than by the record length given in the specifications, which is only available for saved acquisitions. This is a deliberate tradeoff because they cannot process their full record length in an acceptable time.
They can, as could even scopes back then (the same M68k that Tek used in its low memory TDS scopes was used by deep memory scopes like the HP 54645A/D with 1Mpts or the LeCroy 9300 Series with up to 8Mpts, and even the latter had no problems processing the full record length in acceptable time).
Quote: The TDS694C (and all of the TDS600 models) is more specialized than the typical DSO of that time and has more in common with transient digitizers than oscilloscopes. It uses CCD sampling to achieve 10GS/s on every channel simultaneously.
Probably (well, the short memory makes the TDS694C useless for pretty much anything else than short transients), but for this discussion that's completely irrelevant as the same is true for pretty much any low memory scope vs a deep memory scope.
Quote: They can, as could even scopes back then (the same M68k that Tek used in its low memory TDS scopes was used by deep memory scopes like the HP 54645A/D with 1Mpts or the LeCroy 9300 Series with up to 8Mpts, and even the latter had no problems processing the full record length in acceptable time).
HP had their MegaZoom ASIC doing the heavy processing in the HP 54645A, and I assume LeCroy was doing something similar. Had only the 68000 processor been available, the performance with long record lengths would have been unacceptable except for the minority of long-record-length applications.
That is why I gave examples of old DSOs which did not support longer record lengths simply because of processing limitations.
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Quote: Probably (well, the short memory makes the TDS694C useless for pretty much anything else than short transients), but for this discussion that's completely irrelevant as the same is true for pretty much any low memory scope vs a deep memory scope.
That series of oscilloscopes was intended for applications where bandwidth and real-time sample rate were the only considerations. They had a specific market which in earlier times would have been using oscilloscopes like the 519, 7104, and scan-converter based instruments.
I get your point that record length limits sampling rate and I have never disagreed. I just think long record lengths which have been enabled by increasing integration have been seized upon by marketing departments in a quest for specsmanship leading to deceptive practices like the Rigol example I gave.
Quote: They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
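The arithmetic behind this argument can be sketched in a few lines (illustrative numbers, not tied to any particular scope):

```python
# At a fixed sample rate, acquisition (fill) time grows linearly with memory
# depth, which caps the waveform update rate even for a hypothetical scope
# with zero blind time and zero processing cost.
def fill_time(depth_pts, sample_rate):
    return depth_pts / sample_rate             # seconds to fill one record

def max_update_rate(depth_pts, sample_rate):
    return 1.0 / fill_time(depth_pts, sample_rate)  # best-case wfms/s
```

At 1GS/s a 4k record fills in 4us (so at most 250,000 wfms/s), while a 4M record takes 4ms, capping the rate at 250 wfms/s regardless of how much processing power is available.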
Sample memory sizes haven't really been the prime marketing argument for the best part of a decade, and even before then were rarely so.
Maybe, and it shows that Tek didn't really 'get' digital scopes and was too fixated on their analog past, but as I said the TDS694C was only an example, and Tek has produced many more low memory scopes and not all of them have the excuse of being made for niche purposes.
Just do the math and see how far you'd get with the few K you believe are sufficient for a DSO these days.
Why do you think so many DSOs are making measurements on the display record? It is faster and requires less processing power because it limits the record length. It also sometimes produces deceptive results.
Quote: Why do you think so many DSOs are making measurements on the display record? It is faster and requires less processing power because it limits the record length. It also sometimes produces deceptive results.
So many? I know one brand that truly does that.
Then there are some others who use a more complex approach, let's say an "optimized dataset", but (much) larger than the display.
Out of the approximately 10 scopes tested in the auto-measurements thread, there was only 1 using the display record.
Also, the processing power deficit is as of today a complete myth, as the latest entry-level scope tests show.
It's up to the user whether to stick to CRO-like practices or soak in the new possibilities, concentrate on the task at hand, and let the scope's processor do the dirty work.
Interesting that this has created a situation where more conservative top-dollar tech may get beatings from low-end DSOs in low-frequency applications.
Which one is that? We know Rigol is doing it, and the Keysight guy at the end says their InfiniiVision DSOs do it, which is not the first time I have heard that about them, although I did not believe it the first time.
The transition time test is good and I have used it myself but it is not very relevant to practical applications. Someone would naturally zoom in when making this measurement.
The test I have started to use is RMS.
Quote: Also processing power deficit as of today is complete myth, as latest entry level scope tests show.
I do not see where that was measured at all.
If the new possibilities include producing the wrong result, then it is hardly an alternative.
Quote: Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
I explained it right here:
Quote: This processing power problem with long record lengths is not new. The ancient Tektronix 2230/2232 DSOs support 1k and 4k record lengths, which seems laughably short by today's standards, but why did they support a 1k record length at all?
See where it says 4M? Hmm, I don't. See where it says anything close to 4M? Hmm, it does not say that either.
Quote: Why do you think so many DSOs are making measurements on the display record?
Quote: Sample memory sizes haven't really been the prime marketing argument for the best part of a decade, and even before then were rarely so.
For something that is so unimportant for marketing, they sure go out of their way to advertise their long record lengths while avoiding the subject of how those long record lengths do not apply except in specific operating modes.
Quote: Maybe, and it shows that Tek didn't really 'get' digital scopes and was too fixated on their analog past, but as I said the TDS694C was only an example, and Tek has produced many more low memory scopes and not all of them have the excuse of being made for niche purposes.
It shows Tektronix made those oscilloscopes for a specific market where the limit in record length was irrelevant and other considerations like sample rate and bandwidth were more important.
I'm not very familiar with LeCroy's products other than those from companies that they bought. How did the LeCroy DSOs which were contemporaries of the Tektronix TDS600 series compare? Wasn't LeCroy selling a lot of DSOs for high-energy physics applications at the time? Maybe there wasn't much overlap with the market Tektronix was catering to.
When I have used modern DSOs which support long record lengths, I set them low enough for maximum performance unless a long record length is needed just like I do with my 20+ year old DSOs.
Quote: Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
Quote: I explained it right here:
Quote: This processing power problem with long record lengths is not new. The ancient Tektronix 2230/2232 DSOs support 1k and 4k record lengths, which seems laughably short by today's standards, but why did they support a 1k record length at all?
I'm sorry, but you didn't explain. While you think that it's processing which makes deep memory scopes slow, you seem to ignore the basic fact that at a given sample rate it simply takes more time to fill the larger memory. Processing has nothing to do with it.
I've no problem believing that small memory works for you; I guess you're probably used to it (and it seems you didn't really have much contact with any decent modern deep memory scope), and that's fine. It however doesn't invalidate my arguments.
Increasing the memory depth of those Rigol scopes further reduces their acquisition rates; same with the Tektronix examples David is talking about. It's like this with most scopes and well known.
The entire "argument" about needing memory depth controls is that the user needs it for some reason; in the Agilent/Keysight X series that's not needed because there is so rarely a reason for the user to capture a smaller memory depth (even though it could be nice for some applications where the data is being offloaded).
Quote: I've no problem believing that small memory works for you; I guess you're probably used to it (and it seems you didn't really have much contact with any decent modern deep memory scope), and that's fine. It however doesn't invalidate my arguments.
You really need to stop going out of your way to tell everyone that scopes with deep memory and advanced post capture analysis are so far superior for all possible uses than scopes with fast realtime analysis or displays.
The DSO-X, like any HPAK InfiniiVision scope, is cheating, as the only time it acquires a long memory segment in normal acquisition is the last acquisition made after pressing STOP; otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
Quote: The DSO-X, like any HPAK InfiniiVision scope, is cheating, as the only time it acquires a long memory segment in normal acquisition is the last acquisition made after pressing STOP; otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
That isn't entirely true, because what happens if you press stop and there is nothing more to trigger on? You can still use zoom to zoom into the signal. I'm pretty sure HPAK is using a dual acquisition technique which uses a short buffer to draw an intensity-graded trace and switches to a deep memory mode when you change the timebase to zoom in. The giveaway is that the intensity-graded trace disappears once you change the timebase to zoom in/out.
Quote: Increasing the memory depth of those Rigol scopes further reduces their acquisition rates; same with the Tektronix examples David is talking about. It's like this with most scopes and well known.
Thanks Captain Obvious, but the point was not whether scopes get slower with larger sample memory (they do) but why. David seems to believe it's because of processing, but in reality this is simply down to basic math.
Quote: They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Quote: Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
Waveform update rates (wfms/s):

| Scope | Memory | Vector | Dots | Sample Rate |
| Rigol 1054Z | 12k | 363 | 624 | 10MS/s |
| Rigol 1054Z | 120k | 217 | 298 | 125MS/s |
| Rigol 1054Z | 600k | 178 | 192 | 1GS/s |
| Rigol 1054Z | 1200k | 160 | 170 | 1GS/s |
| Rigol 1054Z | 12M | 60 | 61 | 1GS/s |
| Rigol 1054Z | 24M | 35 | 36 | 1GS/s |
| Keysight 1000/2000X | 500k | 1800 | - | 1GS/s |
| Keysight 3000X | 500k | 1746 | - | 1GS/s |
| Keysight 3000X | 2M | 780 | - | 4GS/s |
Quote: The entire "argument" about needing memory depth controls is that the user needs it for some reason; in the Agilent/Keysight X series that's not needed because there is so rarely a reason for the user to capture a smaller memory depth (even though it could be nice for some applications where the data is being offloaded).
Quote: The DSO-X, like any HPAK InfiniiVision scope, is cheating, as the only time it acquires a long memory segment in normal acquisition is the last acquisition made after pressing STOP; otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
That is fine for some tasks but not for others, i.e. sometimes you might want to capture a specific length only. After all, there's a reason why pretty much any other newer scope allows for manual setup of sample memory, and that includes even Keysight's own scopes (Infiniium), which indicates that there's some use for this feature.
Quote: The DSO-X, like any HPAK InfiniiVision scope, is cheating, as the only time it acquires a long memory segment in normal acquisition is the last acquisition made after pressing STOP; otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
Quote: That isn't entirely true, because what happens if you press stop and there is nothing more to trigger on? You can still use zoom to zoom into the signal. I'm pretty sure HPAK is using a dual acquisition technique which uses a short buffer to draw an intensity-graded trace and switches to a deep memory mode when you change the timebase to zoom in. The giveaway is that the intensity-graded trace disappears once you change the timebase to zoom in/out.
What happens if there's no trigger after pressing STOP is an interesting question. I honestly don't know, and I'll have no access to a DSOX for a while so somebody else would need to test that out.
Quote: Increasing the memory depth of those Rigol scopes further reduces their acquisition rates; same with the Tektronix examples David is talking about. It's like this with most scopes and well known.
Quote: Thanks Captain Obvious, but the point was not whether scopes get slower with larger sample memory (they do) but why. David seems to believe it's because of processing, but in reality this is simply down to basic math.
There is no basic maths you can apply to determine how fast a particular scope will update.
Let's go back to your point where this started:
Quote: They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Quote: Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
Ideally you wouldn't have to compromise on memory depth; it would always be as deep as possible for the horizontal window. You are limited by sample rate for short captures and by memory depth for long captures, but in the in-between, where neither is limiting, people still choose a shorter memory depth than they could capture because deeper memory slows down aspects of the scope such as the waveform display rate.
You can measure this, so I took a Rigol DS1054Z and did the comparison, setting both scopes to 50us per division:
[...]
The theoretical zero-blind-time rate is 2000 wfms/s for the Keysight and 1667 wfms/s for the Rigol (an extra 2 divisions of horizontal display). They are all maxing out at 1GS/s for this test, but the Keysight gives you no options to change to other memory depths, while the Rigol, with all its choices, fails to match the real-time performance. It even lets you choose longer depths that are captured outside the display but not shown until you stop and zoom around the capture. At the shorter memory depths the Rigol drops its sample rate and does not put 1GS/s data onto the screen, which is why comparisons need to be made carefully. Processing (and/or memory bandwidth) limits the ability to draw more information to the screen, and that is why many scopes offer the choice of shorter memory depths.
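The zero-blind-time figures quoted here follow directly from the displayed window length; a quick sketch (10 divisions assumed for the Keysight, 12 for the Rigol):

```python
def zero_blind_time_rate(time_per_div, divisions):
    # Best-case waveforms per second if the scope re-armed instantly after
    # each sweep: one acquisition per displayed window, zero blind time.
    return 1.0 / (time_per_div * divisions)
```

At 50us/div this gives 2000 wfms/s for a 10-division display and about 1667 wfms/s for a 12-division display, matching the figures above.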
If you want to capture a specific length of data, sure, it's nice to have the controls available, and I did mention that is one corner case.
But in general what people want to capture is a length of time, and they would like to have as much memory and sample rate as possible, but for most scopes that's balanced against the real-time waveform update rate. Or the user needs to capture elements with a particular frequency, so they are constrained in their lowest possible sample rate; again the tradeoff appears.
Or we can take the Keysight X series scopes, where they provide no choice, but there are so few cases where you would want shorter memory depths on them that it seems reasonable they left the option out.
I find this much easier to work with than memory depth, as I'm generally concerned about the frequencies being captured, not the specific length of memory being used to do this. Again, when you always get as much memory as possible used in the captures, you can forget about that parameter and focus on the ones that matter to your specific situation. Yes, going to a single capture doubles the memory depth in many situations (but not all), but when looking at the signal I can quickly assess if the sample rate is sufficient for the information I want to see and adjust the controls accordingly.
Quote: What happens if there's no trigger after pressing STOP is an interesting question. I honestly don't know, and I'll have no access to a DSOX for a while so somebody else would need to test that out.
Daniel from Keysight engaged on this and linked to a video:
https://www.eevblog.com/forum/testgear/new-keysight-scope-1st-march-2017/msg1125192/#msg1125192
It's not confusing or magic: while in run mode the memory is halved from maximum (generally), when pressing stop the memory is held, and when you press single it uses as much memory as possible for the next trigger.
It's not using the minimum possible to fill the display buffer; the data is aggregated into a 2D histogram (with vectors) at the running sample rate, and there is a larger memory available when stopped to navigate/zoom through.
Quote: The DSO-X, like any HPAK InfiniiVision scope, is cheating, as the only time it acquires a long memory segment in normal acquisition is the last acquisition made after pressing STOP; otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
They don't cheat; it tells you the sample rate for the current mode clearly and plainly on the UI, as do many other scopes:
OK, so on a DSO-X3kT with 4Mpts and 5GSa/s, that would mean a best case (single channel) 2Mpts in normal acquisition mode, which at 5GSa/s takes 400us to fill. Even on a perfect scope with zero blind time, 400us per acquisition translates into only 2,500 acquisitions per second. This means that to reach the very high waveform rates the DSO-X3kT can achieve, it would have to dramatically reduce the amount of memory used: at 500k acquisitions per second that leaves just 2us for acquisition plus blind time, so even a perfect scope with no blind time would have to reduce the sample memory size to 10k.
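The numbers in that paragraph can be checked in a few lines, using the same assumed best case (2Mpts single channel at 5GS/s):

```python
depth = 2_000_000          # points, assumed best-case single channel
fs = 5e9                   # samples per second
acq_time = depth / fs      # time to fill the record: 400 us
best_rate = 1 / acq_time   # 2,500 acquisitions/s even with zero blind time
budget = 1 / 500_000       # time per acquisition at 500k wfms/s: 2 us
max_depth = fs * budget    # only 10k points fit in that budget
```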
Quote: OK, so on a DSO-X3kT with 4Mpts and 5GSa/s, that would mean a best case (single channel) 2Mpts in normal acquisition mode, which at 5GSa/s takes 400us to fill. Even on a perfect scope with zero blind time, 400us per acquisition translates into only 2,500 acquisitions per second. This means that to reach the very high waveform rates the DSO-X3kT can achieve, it would have to dramatically reduce the amount of memory used: at 500k acquisitions per second that leaves just 2us for acquisition plus blind time, so even a perfect scope with no blind time would have to reduce the sample memory size to 10k.
As I wrote before, your math is too simplified.
You don't have to fill the entire acquisition memory if you know the data is not going to be used. This is the case when a new trigger arrives before the acquisition memory is completely filled. After all, at short time/div settings you'll be looking at a fraction of the acquisition memory anyway; the rest is outside the screen.