Author Topic: New Tektronix TBS2000 oscilloscopes  (Read 80973 times)


Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #225 on: August 07, 2017, 03:35:30 pm »
There are certainly applications where a long record length is required but they are in the minority.

Not really.  For example, just watch how quickly the sample rate (and thereby the useable BW) drops on scopes with small sample memories when you extend the timebase.  A scope with deep memory can sustain a high sample rate even at long timebase settings.

But not at long delay settings, as we discovered with the Rigol DS1000Z. (1) In that case, long record lengths are great as long as what you want to see lies within them; if it does not, the sample rate has to be decreased anyway.

Well, I have no idea if the DS1000z, a bottom-of-the-barrel scope whose biggest feature is being cheap, does something different here, but on a decent scope with "deep memory" the sample rate drops a lot later than on a scope with just a few thousand kpts of memory.
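For illustration, the relationship being described can be sketched like this (a generic model with made-up numbers, not any particular scope's spec):

```python
# A generic model with made-up numbers (not any particular scope's spec):
# at a given timebase, the usable sample rate is the ADC's maximum rate,
# capped by how many points fit into acquisition memory across the screen.

def max_sample_rate(memory_pts, time_per_div, divisions=10, adc_rate=1e9):
    """Sustainable sample rate: ADC rate, limited by memory over the window."""
    window = time_per_div * divisions      # capture window in seconds
    return min(adc_rate, memory_pts / window)

# Same 1 GS/s front end at 1 ms/div, shallow vs deep memory:
shallow = max_sample_rate(12e3, 1e-3)   # memory-limited, ~1.2 MS/s
deep    = max_sample_rate(12e6, 1e-3)   # still the full 1 GS/s
```

The deep-memory scope holds its full ADC rate a thousand times further down the timebase range before the same memory cap kicks in.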

Quote
Oscilloscopes with short acquisition memories use features like peak detection and delayed acquisition (sweep) to apply their maximum sample rate exactly where the user wants.

Great. Peak Detect may detect glitches, but it loses all timing correlation and the waveform on the screen doesn't necessarily look like the original signal. In addition, it's absolutely useless for measurements. Delayed acquisition also requires the user to know where to look.

Both methods are crutches to overcome the lack of sample memory.
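To make the tradeoff concrete, here is a minimal sketch (my own illustration, not any vendor's implementation) of why peak detect catches glitches that plain decimation drops, while giving up the glitch's exact position inside each bucket:

```python
# Peak-detect decimation sketch: keep (min, max) per bucket so a narrow
# glitch survives decimation, but its position within the bucket (and
# hence timing correlation) is lost.

def decimate(samples, factor):
    """Plain decimation: keep every factor-th sample."""
    return samples[::factor]

def peak_detect(samples, factor):
    """Keep (min, max) for each bucket of `factor` samples."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        bucket = samples[i:i + factor]
        out.append((min(bucket), max(bucket)))
    return out

signal = [0] * 1000
signal[503] = 5                   # one-sample glitch

plain = decimate(signal, 100)     # the glitch at index 503 is skipped
peaks = peak_detect(signal, 100)  # bucket 5 records max=5, position lost
```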

Quote
More processing power was also required to allow deep acquisition memories

Actually, no, longer memory doesn't require more processing.

Quote
Maybe high end DSOs avoid this problem but my experience with the DPO/MSO5000 series is that they do not; using long record lengths results in waiting for the processing of each record, which is fine for single-shot applications where long record lengths are especially useful, but it is aggravatingly slow otherwise.

Yes, but that is simply because Tek doesn't seem to know how to design a proper DSO, and pretty much all their scopes lock up when the scope is busy doing something. And I do know how aggravating this can be, as I once had an MDO3000.

Other scopes don't lock up the UI while an acquisition completes; they apply any user changes instantly.

Quote
This processing power problem with long record lengths is not new.

Again, this has nothing to do with processing power, it's just idiotic software design by Tek. They must employ engineers that really hate humanity for them to come up with such products.

Quote
(1) The Rigol DS1000Z series brings up another question.  Exactly what is the record length of a DS1000Z?  Measurements are only made upon the display record which is 600 or 1200 points long yet the specifications say 3 Mpoints/channel.  Shouldn't they say something like 600 or 1200 points operating in real time and 3 Mpoints/channel when stopped?  How many other DSOs which make measurements on the display record are like this?

Again, I have no idea how the DS1000z works, but on most decent scopes the memory used is the memory the scope was set up with, which is usually displayed somewhere on the screen. The only exception are the Agilent/Keysight InfiniiVision DSO-X scopes, which don't allow user control of the sample memory and don't show how much memory is actually used.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #226 on: August 07, 2017, 03:45:27 pm »
More processing power was also required to allow deep acquisition memories but both were the result of increased integration, and processing power has fallen behind, making very deep acquisition memories *less* useful in a general sense.  Maybe high end DSOs avoid this problem but my experience with the DPO/MSO5000 series is that they do not; using long record lengths results in waiting for the processing of each record, which is fine for single-shot applications where long record lengths are especially useful, but it is aggravatingly slow otherwise.

This is a typical Tektronix problem which cannot be extrapolated to oscilloscopes in general. Besides that, there are several affordable scopes on the market which have enough processing power to deal with tens of Mpts quickly.

I have played with other DSOs

Which ones (list some make/models)?

Quote
and I have yet to find *one* where "quickly" was quick enough.  See above about display record length.

Pretty much any decent DSO interrupts the acquisition process to apply any user settings that have been made. It may not be as quick as, say, changing the timebase on an analog scope, but it's not far off.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: New Tektronix TBS2000 oscilloscopes
« Reply #227 on: August 07, 2017, 04:19:05 pm »
Which ones (list some make/models)?

I do not keep an itemized list (and do not get enough opportunities to test DSOs); the DPO/MSO5000 series were the only memorable ones, and no LeCroys.  Often you can tell from a review video that something weird and unspecified is going on.

Well, I have no idea if the DS1000z, a bottom-of-the-barrel scope whose biggest feature is being cheap, does something different here, but on a decent scope with "deep memory" the sample rate drops a lot later than on a scope with just a few thousand kpts of memory.

Weren't we talking about affordable DSOs?

After finding out about the display record thing in the DS1000Z, and other people saying that most modern DSOs make display-record measurements, I am not sanguine that the statement "the sample rate drops a lot later than on a scope with just a few thousand kpts of memory" has much meaning.  It is true though when single shot acquisitions are made and I agree that long record lengths are very handy in that case.


 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #228 on: August 07, 2017, 07:38:54 pm »
Which ones (list some make/models)?

I do not keep an itemized list (and do not get enough opportunities to test DSOs) and the DPO/MSO5000 series were the only memorable ones and no LeCroys.

I was asking because, as I remember, you pretty much always list only some Tek scopes and the Rigol DS1000z; but as nctnico and I have said repeatedly, this is a Tek-specific problem and should not be generalized as being common amongst DSOs - it is not!

Quote
Often you can tell from a review video that something weird and unspecified is going on.

I wouldn't rely on a video review to assess the performance of a scope, no matter who does it. There's so much stuff even experienced reviewers can (and often do) miss, and then there are the ones where the presenter doesn't handle the scope correctly.

Quote
Well, I have no idea if the DS1000z, a bottom-of-the-barrel scope whose biggest feature is being cheap, does something different here, but on a decent scope with "deep memory" the sample rate drops a lot later than on a scope with just a few thousand kpts of memory.

Weren't we talking about affordable DSOs?

Yes, but in my opinion that covers quite a bit more than just what pretty much is one of the cheapest scopes on the market. It includes scopes like the Keysight DSO-X1000A/G and DSO-X2000A, the GW Instek GDS-1000B and GDS-2000E, and maybe even the R&S RTB2004.

Quote
After finding out about the display record thing in the DS1000Z, and other people saying that most modern DSOs make display-record measurements, I am not sanguine that the statement "the sample rate drops a lot later than on a scope with just a few thousand kpts of memory" has much meaning.

But it does. Because with the sample rate your useable BW also drops, and when your sampling BW drops below the (true) analog BW, any frequency component sitting in between will cause aliasing. Let me quote from a posting I made 2 years ago where I compared a low-memory Tek TDS694C (3GHz, 10GSa/s, 30k standard, 120k max) with a long-memory scope (LeCroy WavePro 960, 2GHz, 16GSa/s, 250k standard and 16M max), which shows how quickly the BW advantage of the Tek melts away because of its small memory [1]:

Quote
Lets have a closer look at how both scopes perform at various timebase settings:

Tektronix TDS694C with standard (30k) and "long" (120k) memory
Timebase   | Sample rate (std) | fs/2 limit (std) | Sample rate ("long") | fs/2 limit ("long")
-----------+-------------------+------------------+----------------------+--------------------
10ns/div   | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
20ns/div   | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
30ns/div   | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
50ns/div   | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
100ns/div  | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
200ns/div  | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
300ns/div  | 10GS/s            | 3GHz (bw limit)  | 10GS/s               | 3GHz (bw limit)
500ns/div  | 5GS/s             | 2.5GHz           | 10GS/s               | 3GHz (bw limit)
1us/div    | 2.5GS/s           | 1.25GHz          | 10GS/s               | 3GHz (bw limit)
2us/div    | 2.5GS/s           | 1.25GHz          | 5GS/s                | 2.5GHz
3us/div    | 1GS/s             | 500MHz           | 2.5GS/s              | 1.25GHz
5us/div    | 500MS/s           | 250MHz           | 2.5GS/s              | 1.25GHz
10us/div   | 250MS/s           | 125MHz           | 1GS/s                | 500MHz
20us/div   | 125MS/s           | 62.5MHz          | 500MS/s              | 250MHz
30us/div   | 100MS/s           | 50MHz            | 250MS/s              | 125MHz

The table clearly shows that the small memory causes the huge 3GHz bandwidth and fast 10GSa/s sample rate to drop dramatically beyond 1us/div (long memory) or even 200ns/div (std memory), and with it the useful bandwidth; i.e. at 10us/div it's essentially just a 500MHz (long memory) or even just a 125MHz (std memory) scope.

Let's see how the WP960 performs:

LeCroy WavePro 960 quad channel with standard (250k) and long (16M) memory
Timebase   | Sample rate (std) | fs/2 limit (std) | Sample rate (long) | fs/2 limit (long)
-----------+-------------------+------------------+--------------------+------------------
10ns/div   | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
20ns/div   | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
30ns/div   | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
50ns/div   | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
100ns/div  | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
200ns/div  | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
300ns/div  | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
500ns/div  | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
1us/div    | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
2us/div    | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
3us/div    | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
5us/div    | 4GS/s             | 2GHz (bw limit)  | 4GS/s              | 2GHz (bw limit)
10us/div   | 2GS/s             | 1GHz             | 4GS/s              | 2GHz (bw limit)
20us/div   | 1GS/s             | 500MHz           | 4GS/s              | 2GHz (bw limit)
30us/div   | 1GS/s             | 500MHz           | 4GS/s              | 2GHz (bw limit)
50us/div   | 500MS/s           | 250MHz           | 4GS/s              | 2GHz (bw limit)
100us/div  | 250MS/s           | 125MHz           | 4GS/s              | 2GHz (bw limit)
200us/div  | 125MS/s           | 62.5MHz          | 4GS/s              | 2GHz (bw limit)
300us/div  | 50MS/s            | 25MHz            | 4GS/s              | 2GHz (bw limit)
500us/div  | 50MS/s            | 25MHz            | 2GS/s              | 1GHz
1ms/div    | 25MS/s            | 12.5MHz          | 1GS/s              | 500MHz

The initial bandwidth of the WP960 is of course lower (2GHz vs 3GHz); however, the WP960 maintains a fast sample rate for much longer than the TDS694C. Even with the reduced sample rate in 4-channel mode, the WP960 with deep memory still captures at full analog bandwidth where a fully spec'd TDS694C captures less than 100MHz. And this performance gap only gets larger when only two channels or a single channel is needed, as the WP960 can combine samplers and memory.

This also pretty much shows that a scope's performance can't be judged just by looking at two of the main parameters (analog bandwidth and sample rate). There's a lot more to it.


[1] In the posting the usable BW was defined as fmax = 0.5*fs to keep it simple; however, the real usable BW after Nyquist-Shannon would be fmax < 0.5*fs.
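The footnote's point can be checked numerically with the generic folding formula (my own illustration, not from the post): any component above fs/2 appears folded back into the first Nyquist zone, indistinguishable from a real low-frequency signal.

```python
# Generic aliasing (folding) formula: where does a component of
# frequency f_signal appear when sampled at f_sample?

def alias_frequency(f_signal, f_sample):
    """Apparent frequency after sampling: fold into [0, f_sample/2]."""
    f = f_signal % f_sample
    return f if f <= f_sample / 2 else f_sample - f

# A 900 MHz component sampled at 1 GS/s (Nyquist 500 MHz) shows up
# at 100 MHz; a 300 MHz component below Nyquist is unaffected.
folded   = alias_frequency(900e6, 1e9)   # 100 MHz
in_band  = alias_frequency(300e6, 1e9)   # 300 MHz, unchanged
```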
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: New Tektronix TBS2000 oscilloscopes
« Reply #229 on: August 07, 2017, 09:28:14 pm »
After finding about the display record thing in the DS1000Z and other people saying that most modern DSOs make display record measurements, I am not sanguine that the statement "the sample rate drops a lot later than on a scope with just a few thousand kpts of memory" has much meaning.

But it does. Because with the sample rate your useable BW also drops, and when your sampling BW drops below the (true) analog BW then any frequency component sitting in between will cause aliasing.

My point was that the display record processing makes these DSOs operate more like they are limited by the display record length than by the record length given in the specifications, which is only available for saved acquisitions.  This is a deliberate tradeoff because they cannot process their full record length in an acceptable time.

The TDS694C (and all of the TDS600 models) is more specialized than the typical DSO of that time and has more in common with transient digitizers than oscilloscopes.  It uses CCD sampling to achieve 10GS/s on every channel simultaneously.
 
The following users thanked this post: Someone

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #230 on: August 08, 2017, 08:33:59 am »
My point was that the display record processing makes these DSOs operate more like they are limited by the display record length than by the record length given in the specifications, which is only available for saved acquisitions.  This is a deliberate tradeoff because they cannot process their full record length in an acceptable time.

They can, as could even scopes back then (the same M68k that Tek used in its low-memory TDS scopes was used by deep-memory scopes like the HP 54645A/D with 1Mpts or the LeCroy 9300 Series with up to 8Mpts, and even the latter had no problems processing the full record length in acceptable time).

Quote
The TDS694C (and all of the TDS600 models) is more specialized than the typical DSO of that time and has more in common with transient digitizers than oscilloscopes.  It uses CCD sampling to achieve 10GS/s on every channel simultaneously.

Probably (well, the short memory makes the TDS694C useless for pretty much anything else than short transients), but for this discussion that's completely irrelevant as the same is true for pretty much any low memory scope vs a deep memory scope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: New Tektronix TBS2000 oscilloscopes
« Reply #231 on: August 08, 2017, 04:19:01 pm »
My point was that the display record processing makes these DSOs operate more like they are limited by the display record length than by the record length given in the specifications, which is only available for saved acquisitions.  This is a deliberate tradeoff because they cannot process their full record length in an acceptable time.

They can, as could even scopes back then (the same M68k that Tek used in its low-memory TDS scopes was used by deep-memory scopes like the HP 54645A/D with 1Mpts or the LeCroy 9300 Series with up to 8Mpts, and even the latter had no problems processing the full record length in acceptable time).

HP had their MegaZoom ASIC doing the heavy processing in the HP 54645A, and I assume LeCroy was doing something similar.  If only the 68000 processor had been available, then the performance with long record lengths would have been unacceptable except for a minority of long-record-length applications.

That is why I gave examples of old DSOs which did not support longer record lengths simply because of processing limitations.  They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.

Just having more fast acquisition memory is not sufficient.

Quote
Quote
The TDS694C (and all of the TDS600 models) is more specialized than the typical DSO of that time and has more in common with transient digitizers than oscilloscopes.  It uses CCD sampling to achieve 10GS/s on every channel simultaneously.

Probably (well, the short memory makes the TDS694C useless for pretty much anything else than short transients), but for this discussion that's completely irrelevant as the same is true for pretty much any low memory scope vs a deep memory scope.

That series of oscilloscopes was intended for applications where bandwidth and real time sample rate were the only considerations.  They had a specific market which in earlier times would have been using oscilloscopes like the 519, 7104, and scan converter based instruments.

I get your point that record length limits sampling rate and I have never disagreed.  I just think long record lengths which have been enabled by increasing integration have been seized upon by marketing departments in a quest for specsmanship leading to deceptive practices like the Rigol example I gave.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #232 on: August 08, 2017, 07:16:03 pm »
They can, as could even scopes back then (the same M68k that Tek used in its low-memory TDS scopes was used by deep-memory scopes like the HP 54645A/D with 1Mpts or the LeCroy 9300 Series with up to 8Mpts, and even the latter had no problems processing the full record length in acceptable time).

HP had their MegaZoom ASIC doing the heavy processing in the HP 54645A

Yes, but it shows that handling 1M of memory wasn't outside the scope of technology in 1995.

Quote
and I assume LeCroy was doing something similar.  If only the 68000 processor had been available, then the performance with long record lengths would have been unacceptable except for a minority of long record length applications.

No, LeCroy never settled on ASICs for waveform processing; the 9300 Series relies on the CPU for pretty much everything, as do later LeCroy scopes.

Quote
That is why I gave examples of old DSOs which did not support longer record lengths simply because of processing limitations.

Which wasn't a processing limitation at all.

Quote
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.

Please explain how a scope should maintain the same update rate with small memory (say 4k) as with large memory (say 4M), when by the laws of physics and math, at a given sample rate, it takes 1000x as long to fill the large memory as the small one? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and making just the last acquisition a long one.
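The arithmetic behind that question can be scripted directly (illustrative numbers only; real scopes add trigger rearm and processing dead time on top):

```python
# At a fixed sample rate, acquisition time scales with memory depth,
# so the best-case waveform update rate drops by the same factor.

def max_update_rate(memory_pts, sample_rate, dead_time=0.0):
    """Upper bound on waveforms/s: 1 / (fill time + dead time)."""
    fill_time = memory_pts / sample_rate   # seconds to fill memory once
    return 1.0 / (fill_time + dead_time)

# 4k vs 4M points at 1 GS/s:
rate_4k = max_update_rate(4e3, 1e9)   # ~250,000 wfm/s ceiling
rate_4m = max_update_rate(4e6, 1e9)   # ~250 wfm/s ceiling, 1000x lower
```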

Quote
Quote
Probably (well, the short memory makes the TDS694C useless for pretty much anything else than short transients), but for this discussion that's completely irrelevant as the same is true for pretty much any low memory scope vs a deep memory scope.

That series of oscilloscopes was intended for applications where bandwidth and real time sample rate were the only considerations.  They had a specific market which in earlier time would have been using oscilloscopes like the 519, 7104, and scan converter based instruments.

Maybe, and it shows that Tek didn't really 'get' digital scopes and was too fixated on their analog past; but as I said, the TDS694C was only an example, and Tek has produced many more low-memory scopes, and not all of them have the excuse of being made for niche purposes.

Quote
I get your point that record length limits sampling rate and I have never disagreed.  I just think long record lengths which have been enabled by increasing integration have been seized upon by marketing departments in a quest for specsmanship leading to deceptive practices like the Rigol example I gave.

Sample memory sizes haven't really been the prime marketing argument for the best part of a decade, and even before then were rarely so. Sufficient memory has been pretty much standard for any scope since the early 2000s, and rightfully so, similar to other features like color displays. As to Rigol, they simply put large memory in their scopes because A) it's cheap and B) especially on a budget scope like the DS1000z, long memory prevents the measly sample rate in 4ch mode from dropping further at longer timebase settings, which would make the 4ch mode useless. Just do the math and see how far you'd get with the few k you believe are sufficient for a DSO these days.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: New Tektronix TBS2000 oscilloscopes
« Reply #233 on: August 09, 2017, 03:15:37 pm »
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.

Please explain how a scope should maintain the same update rate with small memory (say 4k) as with large memory (say 4M), when by the laws of physics and math, at a given sample rate, it takes 1000x as long to fill the large memory as the small one? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and making just the last acquisition a long one.

I explained it right here:

This processing power problem with long record lengths is not new.  The ancient Tektronix 2230/2232 DSOs support 1k and 4k record lengths which seem laughably short by today's standards but why did they support a 1k record length at all?

See where it says 4M?  Hmm, I don't.  See where it says anything close to 4M?  Hmm, it does not say that either.

Of course the acquisition rate is impacted by long record lengths.  But back then, processing limitations impacted it even on short (but not the shortest) record lengths, and it still does even now, with exceptions for things like being able to process the digitizer output in real time to produce histograms.

Why do you think so many DSOs are making measurements on the display record?  It is faster and requires less processing power because it limits the record length.  It also sometimes produces deceptive results.
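A toy sketch of the deception (my own illustration, not any scope's actual algorithm): measure the 10%-90% rise time of a fast edge once on a full record and once on a ~600-point display record, using linear interpolation between samples.

```python
# Why display-record measurements deceive: a rise time measured on a
# ~600-point display record cannot resolve anything much shorter than
# one decimated sample interval.

def crossing(samples, level):
    """Fractional index where samples first cross `level` upward."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i - 1 + (level - samples[i - 1]) / (samples[i] - samples[i - 1])
    return None

def rise_time(samples, dt):
    """10%-90% rise time of a low-to-high step."""
    lo, hi = min(samples), max(samples)
    t10 = crossing(samples, lo + 0.1 * (hi - lo))
    t90 = crossing(samples, lo + 0.9 * (hi - lo))
    return (t90 - t10) * dt

# Full record: 120 kpts at 500 MS/s with an 8 ns edge in the middle.
N, dt = 120_000, 2e-9
full = [0.0] * N
for k in range(6):                      # edge ramps 0 -> 1 over 5 samples
    full[60_000 + k] = k / 5
full[60_006:] = [1.0] * (N - 60_006)

display = full[::200]                   # 600-point "display record"

rt_full = rise_time(full, dt)           # ~8 ns: the real edge
rt_disp = rise_time(display, dt * 200)  # ~320 ns: 40x too slow
```

The display-record answer is not just imprecise; it is plausible-looking and wrong, which is exactly what makes it deceptive.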

Quote
Sample memory sizes haven't really been the prime marketing argument for the best part of a decade, and even before then were rarely so.

For something that is so unimportant for marketing, they sure go out of their way to advertise their long record lengths while avoiding the subject of how those long record lengths do not apply except in specific operating modes.

Quote
Maybe, and it shows that Tek didn't really 'get' digital scopes and was too fixated on their analog past; but as I said, the TDS694C was only an example, and Tek has produced many more low-memory scopes, and not all of them have the excuse of being made for niche purposes.

It shows Tektronix made those oscilloscopes for a specific market where the limit in record length was irrelevant and other considerations like sample rate and bandwidth were more important.

I'm not very familiar with LeCroy's products other than those from companies that they bought.  How did the LeCroy DSOs which were contemporaries of the Tektronix TDS600 series compare?  Wasn't LeCroy selling a lot of DSOs for high energy physics applications at the time?  Maybe there wasn't much overlap with the market Tektronix was catering to.

Quote
Just do the math and see how far you'd get with the few K you believe are sufficient for a DSO these days.

I do it all the time.  The only applications where it regularly comes up are the same applications where I would use a logic or protocol analyser or a strip chart recorder.

When I have used modern DSOs which support long record lengths, I set them low enough for maximum performance unless a long record length is needed just like I do with my 20+ year old DSOs.

 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: New Tektronix TBS2000 oscilloscopes
« Reply #234 on: August 09, 2017, 03:30:09 pm »
Why do you think so many DSOs are making measurements on the display record?  It is faster and requires less processing power because it limits the record length.  It also sometimes produces deceptive results.

So many? I know of one brand which truly does that. Then there are some others who use a more complex approach, let's say an "optimized dataset", but (much) larger than the display.
Out of the approx. 10 scopes tested in the auto-measurements thread, there was only 1 using the display record. Also, the processing power deficit is as of today a complete myth, as the latest entry-level scope tests show.
It's up to the user whether to stick to CRO-like practices or soak in the new possibilities, concentrate on the task at hand and let the scope's processor do the dirty work.
Interesting that this has created a situation where more conservative top-dollar tech may get beatings from low-end DSOs in low-frequency applications.
« Last Edit: August 09, 2017, 03:34:11 pm by MrW0lf »
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16651
  • Country: us
  • DavidH
Re: New Tektronix TBS2000 oscilloscopes
« Reply #235 on: August 10, 2017, 10:19:48 am »
Why do you think so many DSOs are making measurements on the display record?  It is faster and requires less processing power because it limits the record length.  It also sometimes produces deceptive results.

So many? I know one brand who truly does that.

Which one is that?  We know Rigol is doing it and the Keysight guy at the end says their InfiniiVision DSOs do it which is not the first time I have heard that about them although I did not believe it the first time.

Quote
Then there are some others who use a more complex approach, let's say an "optimized dataset", but (much) larger than the display.
Out of the approx. 10 scopes tested in the auto-measurements thread, there was only 1 using the display record.

The transition time test is good and I have used it myself but it is not very relevant to practical applications.  Someone would naturally zoom in when making this measurement.

The test I have started to use is RMS which *should* work fine on a decimated acquisition record but often does not because of processing.  RMS has the advantage that it is more likely to be used as part of a windowed measurement at slow time/div settings to measure noise on part of a signal.
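The reason RMS *should* survive decimation can be shown in a few lines (my own sketch with synthetic noise, not a scope test): unlike edge timing, the RMS of a stationary signal is essentially preserved by plain every-Nth decimation, so a scope that still gets it wrong at slow timebases is revealing a processing shortcut.

```python
# RMS of a stationary noise signal survives plain every-Nth decimation,
# unlike edge-timing measurements.
import math
import random

def rms(samples):
    return math.sqrt(sum(v * v for v in samples) / len(samples))

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # unit-RMS noise

full_rms = rms(noise)          # ~1.0 on the full record
deci_rms = rms(noise[::100])   # ~1.0 on only 2000 decimated points
```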

Quote
Also processing power deficit as of today is complete myth, as latest entry level scope tests show.

I do not see where that was measured at all.

Quote
Its up to user if stick to CRO-like practices or soak in new possibilities, concentrate on task at hand and let scope processor do the dirty work.
Interesting that this has created situation where more conservative top-dollar tech may get beatings from low-end DSOs in low-freq applications.

If the new possibilities include producing the wrong result, then it is hardly an alternative.
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: New Tektronix TBS2000 oscilloscopes
« Reply #236 on: August 10, 2017, 11:43:29 am »
Which one is that?  We know Rigol is doing it and the Keysight guy at the end says their InfiniiVision DSOs do it which is not the first time I have heard that about them although I did not believe it the first time.

Actually we have a new member in the "screen sampler" family:
the Owon XDS3102A, according to r-loop's test here.
It folds in the specific test at 200ns, just like the DS1000Z. In comparison, the SDS1202X-E folds at 5ms.

About Keysight - I have little interest in what they say. I observe and calculate. According to observations, the *SOX3000T delivers an end result that implies about a 20k record as base, at least in the rise time department. This is way up from a "screen sampler".

The transition time test is good and I have used it myself but it is not very relevant to practical applications.  Someone would naturally zoom in when making this measurement.

The transition time test is just an indication, along with the pulse period test, which is also included. Those indicators extend to all horizontal measurements - there can be tens of those in a modern scope.

Overall, with "screen sampler" you are limited to primitive single screen activities, while full or at least larger-than-screen-record scopes can be used almost like multi-timebase scopes because can "go high and low" at the same time.

For example, here the scope is doing a 2Mpts FFT at the same time on a 24MHz and a 32768Hz signal. It could be any other activity involving substantially different frequencies. One cannot miss functionality like that if it's utterly impossible with the tech at hand - just like one mostly does not miss that a DMM cannot graph and displays only a single number.



The test I have started to use is RMS


Things that spoil RMS can probably be concluded from simpler tests. The original idea to base the tests on rise time and period came about because even an Arduino can be used as a signal source, so more people can investigate what they actually have sitting on their desk.

Quote
Also processing power deficit as of today is complete myth, as latest entry level scope tests show.
I do not see where that was measured at all.

The non-deficit of processing power is again a direct implication of scopes doing full-record calculus on slow timebases. The low-end SDS1202X-E only folds its accurate calculus at a timebase 5 orders of magnitude longer than the DS1000Z; the GDS1054B, 3 orders of magnitude.

If the new possibilities include producing the wrong result, then it is hardly an alternative.

What's wrong with my 2xFFT showcase? Or just go over the auto-measurements thread test reports; you can see that "screen samplers" are not just wrong - when not at exactly the right timebase, they are exponentially wrong:

32768Hz square test signal:
SDS1202X-E, 500MSa/s, 2ms/: rise ~9ns, period 30.52us
DS1000Z, 250MSa/s, 2ms/: rise 80000ns, period 120us

Now if you take a calculator and analyze the Rigol's performance:
2ms/div * 12 div = 24ms = 24000us
24000us / 120us = 200
24000us / 80000ns (80us) = 300
So this is why the effective dataset for the Rigol can be considered ~300 points.
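The same calculator exercise, scripted (numbers taken straight from the post above): divide the on-screen time window by the smallest period/rise time the scope resolved to estimate how many points it appears to be measuring on.

```python
# Effective-dataset estimate for the DS1000Z at 2 ms/div (12 divisions):
# screen window divided by the smallest resolved time quantity.

window = 2e-3 * 12                # 24 ms on screen

period_pts = window / 120e-6      # reported period 120 us  -> 200
rise_pts   = window / 80e-6       # reported rise 80,000 ns -> 300

# Both land in the low hundreds, hence "effective dataset ~300 points".
```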

« Last Edit: August 10, 2017, 12:02:28 pm by MrW0lf »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #237 on: August 11, 2017, 03:55:54 pm »
Please explain how a scope should maintain the same update rate with small memory (say 4k) as with large memory (say 4M), when by the laws of physics and math, at a given sample rate, it takes 1000x as long to fill the large memory as the small one? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and making just the last acquisition a long one.

I explained it right here:

This processing power problem with long record lengths is not new.  The ancient Tektronix 2230/2232 DSOs support 1k and 4k record lengths which seem laughably short by today's standards but why did they support a 1k record length at all?

I'm sorry, but you didn't explain. While you think that it's processing which makes deep-memory scopes slow, you seem to ignore the basic fact that at a given sample rate it simply takes more time to fill the larger memory. Processing has nothing to do with it.

Quote
See where it says 4M?  Hmm, I don't.  See where it says anything close to 4M?  Hmm, it does not say that either.

These were just examples.

Quote
Why do you think so many DSOs are making measurements on the display record?

I think MrW0lf has already covered this, but the fact is that at least as far as the big brands' mid-range and high-end scopes are concerned, measurements are done on the sampled data and not on the display record. The only exceptions I'm aware of are the LeCroy WaveAce 100/200 and 1000/2000 (all rebadged Siglent low-end scopes, SDS1000CL/CML/CFL), which use screen data for measurements, as well as the Rigol-made Agilent scopes (DSO1000?).

Quote
Quote
Sample memory sizes haven't really been the prime marketing argument for the best part of a decade, and even before then were rarely so.

For something that is so unimportant for marketing, they sure go out of their way to advertise their long record lengths while avoiding the subject of how those long record lengths do not apply except in specific operating modes.

Well, it's part of a scope's specs, so it should be obvious why manufacturers list it. The same is true for having color screens, or the physical dimensions.

Pretty much all decent modern scopes, aside from maybe the Keysight DSOX and some ancient Teks, have more than enough sample memory for pretty much any task you can throw at them.  It's hardly a decisive factor when buying a scope, again similar to having a color screen.

Quote
Quote
Maybe, and it shows that Tek didn't really 'get' digital scopes and was too fixated on their analog past, but as I said the TDS694C was only an example, and Tek has produced many more low memory scopes and not all of them have the excuse of being made for niche purposes.

It shows Tektronix made those oscilloscopes for a specific market where the limit in record length was irrelevant and other considerations like sample rate and bandwidth were more important.

Yes, and no. Tek wanted to produce digital scopes that behave as much as possible like analog scopes. They never understood that the power of DSOs is not in being able to come up with better/cheaper 'analog-like' scopes but in the countless ways to squeeze even more information out of a captured signal, way beyond what the squiggly line on the screen tells the user.

Quote
I'm not very familiar with LeCroy's products other than those from companies that they bought.  How did the LeCroy DSOs which were contemporaries to the Tektronix TDS600 series compare?  Wasn't LeCroy selling a lot of DSOs for high energy physics applications at the time?  Maybe there wasn't much overlap with the market Tektronix was catering to.

You're right, LeCroy started with making digitizers and later scopes aimed at high energy physics, which was their core market until the mid-'80s, before also making inroads into electronics engineering. The 9300 Series, which came out I think somewhere around 1993, was pretty much designed for the EE market, with up to 4GSa/s and 8M memory. LeCroy also offered not only advanced math and analysis tools (for example, these scopes could already do 1M FFTs), but also specialist applications for storage companies (hard disks and optical storage).

Quote
When I have used modern DSOs which support long record lengths, I set them low enough for maximum performance unless a long record length is needed just like I do with my 20+ year old DSOs.

The point is that on a 20+ year old Tek with 4k or so you can't because long memory (in a modern sense) doesn't exist on these old scopes.

I've no problem believing that small memory works for you, I guess you're probably used to it (and it seems you didn't really have much contact with any decent modern deep memory scope), and that's fine. It however doesn't invalidate my arguments.
« Last Edit: August 11, 2017, 04:06:59 pm by Wuerstchenhund »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4545
  • Country: au
    • send complaints here
Re: New Tektronix TBS2000 oscilloscopes
« Reply #238 on: August 12, 2017, 12:27:25 am »
Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when by the laws of physics and math at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?

I explained it right here:

This processing power problem with long record lengths is not new.  The ancient Tektronix 2230/2232 DSOs support 1k and 4k record lengths which seems laughably short by today's standards but why did they support a 1k record length at all?

I'm sorry but you didn't explain. While you think that it's processing which makes deep memory scopes slow, you seem to ignore the basic fact that at a given sample rate it simply takes more time to fill the larger memory. Processing has nothing to do with it.
Yes, there is a fundamental limit to how fast the display can be drawn at slower timebases, but even there many scopes fall far below the theoretical maximum rates. But that's ignoring the faster timebases, where there are significant limits to how fast waveforms can be captured and displayed in real time, and where there is a wide diversity of scopes optimised for different purposes, as was shown here:
https://www.eevblog.com/forum/testgear/rigol-mso4000-and-ds4000-tests-bugs-firmware-questions-etc/msg973064/#msg973064


Increasing the memory depth of those Rigol scopes further reduces their acquisition rates, same with the Tektronix examples David is talking about; it's like this with most scopes and well known. The entire "argument" about needing memory depth controls is that the user needs it for some reason; in the Agilent/Keysight X series that's not needed because there is so rarely a reason for the user to capture a smaller memory depth (even though it could be nice for some applications where the data is being offloaded).


I've no problem believing that small memory works for you, I guess you're probably used to it (and it seems you didn't really have much contact with any decent modern deep memory scope), and that's fine. It however doesn't invalidate my arguments.
You really need to stop going out of your way to tell everyone that scopes with deep memory and advanced post-capture analysis are so far superior for all possible uses to scopes with fast realtime analysis or displays. They're both useful for different purposes which the other cannot do, and while they have a lot of overlap where either could solve the same problem, not everyone has the budget for multiple scopes or a scope with useful deep memory analysis.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #239 on: August 14, 2017, 10:10:38 am »
Increasing the memory depth of those Rigol scopes further reduces their acquisition rates, same with the Tektronix examples David is talking about; it's like this with most scopes and well known.

Thanks Captain Obvious, but the point was not if scopes get slower with larger sample memory (they do) but the why. David seems to believe it's because of processing, but in reality this is simply down to basic math.

Quote
The entire "argument" about needing memory depth controls is that the user needs it for some reason; in the Agilent/Keysight X series that's not needed because there is so rarely a reason for the user to capture a smaller memory depth (even though it could be nice for some applications where the data is being offloaded).

The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.

That is fine for some tasks but not for others, i.e. sometimes you might want to capture a specific length only. After all, there's a reason why pretty much any other newer scope allows for manual setup of sample memory, and that includes even Keysight's own scopes (Infiniium), which indicates that there's some use for this feature.

Quote
I've no problem believing that small memory works for you, I guess you're probably used to it (and it seems you didn't really have much contact with any decent modern deep memory scope), and that's fine. It however doesn't invalidate my arguments.

You really need to stop going out of your way to tell everyone that scopes with deep memory and advanced post-capture analysis are so far superior for all possible uses to scopes with fast realtime analysis or displays.

And you really need to start paying attention to and try to understand what has actually been written, because this was mostly about sample memory sizes, not analysis capabilities, and I've already demonstrated how a small memory scope easily gets caught out by its small memory.

Not that this is really a problem these days as, aside from some ancient Tek offerings and some early HP Infiniiums, pretty much any somewhat modern scope (i.e. made since 2000), even those from the Chinese B-brands, offers more than enough sample memory to avoid the pitfalls. And that even includes the Keysight DSO-X Series.

They're both useful for different purposes which the other cannot do, and while they have a lot of overlap where either could solve the same problem, not everyone has the budget for multiple scopes or a scope with useful deep memory analysis.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: New Tektronix TBS2000 oscilloscopes
« Reply #240 on: August 14, 2017, 04:09:14 pm »
The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
That isn't entirely true, because what happens if you press stop and there is nothing more to trigger on? You can still use zoom to zoom into the signal. I'm pretty sure HPAK is using a dual acquisition technique which uses a short buffer to draw an intensity graded trace and switches to a deep memory mode when you change the timebase to zoom in. The giveaway is that the intensity graded trace disappears once you change the timebase to zoom in/out.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #241 on: August 14, 2017, 05:42:49 pm »
The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.

That isn't entirely true, because what happens if you press stop and there is nothing more to trigger on? You can still use zoom to zoom into the signal. I'm pretty sure HPAK is using a dual acquisition technique which uses a short buffer to draw an intensity graded trace and switches to a deep memory mode when you change the timebase to zoom in. The giveaway is that the intensity graded trace disappears once you change the timebase to zoom in/out.

What happens if there's no trigger after pressing STOP is an interesting question. I honestly don't know, and I'll have no access to a DSOX for a while so somebody else would need to test that out.

However, even if HPAK uses some dual acquisition mode as you suggest, the high waveform rate it reaches would make it physically impossible to use all the memory at the given max sample rate of these scopes. Which means if you press STOP and there's no trigger you'd end up with a record of unknown length which very likely is less than the full sample memory.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: New Tektronix TBS2000 oscilloscopes
« Reply #242 on: August 14, 2017, 06:36:19 pm »
Of course there is more to it, but I think an acquisition is terminated if a trigger arrives before it is finished. That way it always has a full record after the last trigger event but it can also achieve a very high update rate. There is no need to record data which isn't going to be used.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4545
  • Country: au
    • send complaints here
Re: New Tektronix TBS2000 oscilloscopes
« Reply #243 on: August 14, 2017, 11:48:39 pm »
Increasing the memory depth of those Rigol scopes further reduces their acquisition rates, same with the Tektronix examples David is talking about; it's like this with most scopes and well known.
Thanks Captain Obvious, but the point was not if scopes get slower with larger sample memory (they do) but the why. David seems to believe it's because of processing, but in reality this is simply down to basic math.
There is no basic maths you can apply to determine how fast a particular scope will update in realtime; none reach the theoretical ideal and all have some limitations which are not disclosed by the manufacturer. Let's go back to your point where this started:
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when by the laws of physics and math at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
Ideally you wouldn't have to compromise on memory depth; it would always be as deep as possible for the horizontal window. You are limited by sample rate for short captures and memory depth for long captures, but in between, where neither is limiting, people still choose a shorter memory depth than they could capture because deep memory slows down aspects of the scope such as the waveform display rate. You can measure this, so I took a Rigol 1054 and did the comparison, setting the scopes to 50us per division:

Scope                 Memory   Vectors    Dots       Sample rate
                               (wfms/s)   (wfms/s)
Rigol 1054Z           12k      363        624        10MS/s
                      120k     217        298        125MS/s
                      600k     178        192        1GS/s
                      1200k    160        170        1GS/s
                      12M      60         61         1GS/s
                      24M      35         36         1GS/s
Keysight 1000/2000X   500k     1800       -          1GS/s
Keysight 3000X        500k     1746       -          1GS/s
Keysight 3000X        2M       780        -          4GS/s

The theoretical zero blind time rate is 2000 wfms/s for the Keysight, and 1667 wfms/s for the Rigol (extra 2 divisions horizontal display). They're all able to run at 1GS/s for this test but the Keysight gives you no direct options to change to other memory depths, while the Rigol with all its choices fails to match the realtime performance. It even lets you choose longer depths that are captured outside the display but not shown until you stop and zoom around the capture, at the shorter memory depths the Rigol is dropping its sample rate and not putting 1GS/s data onto the screen which is why comparisons need to be made carefully. Processing (and/or memory bandwidth) is limiting the ability to draw more information to the screen and the reason why many scopes offer the choice of shorter memory depths.
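The zero-blind-time figures quoted follow directly from the displayed time window: a scope that spends all of its time acquiring can complete at best one acquisition per window. A minimal sketch of that bound (the 10- and 12-division display widths are the ones assumed in the post; `max_wfms_per_s` is just an illustrative helper):

```python
def max_wfms_per_s(time_per_div_s, divisions):
    """Theoretical zero-blind-time update rate: at best one finished
    acquisition per displayed time window."""
    window_s = time_per_div_s * divisions
    return 1.0 / window_s

# 50us/div: 10 horizontal divisions (Keysight) vs 12 (Rigol)
print(max_wfms_per_s(50e-6, 10))  # ~2000 wfms/s
print(max_wfms_per_s(50e-6, 12))  # ~1667 wfms/s
```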

The entire "argument" about needing memory depth controls is that the user needs it for some reason; in the Agilent/Keysight X series that's not needed because there is so rarely a reason for the user to capture a smaller memory depth (even though it could be nice for some applications where the data is being offloaded).

The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.

That is fine for some tasks but not for others, i.e. sometimes you might want to capture a specific length only. After all, there's a reason why pretty much any other newer scope allows for manual setup of sample memory, and that includes even Keysight's own scopes (Infiniium), which indicates that there's some use for this feature.
If you want to capture a specific length of data, sure, it's nice to have the controls available, and I did mention that is one corner case. But in general what people want to capture is a length of time, and they would like to have as much memory and sample rate as possible, but for most scopes that's balancing against realtime waveform update rate. Or the user needs to capture elements with a particular frequency so they are constrained in their lowest possible sample rate; again the tradeoff appears. Or we can take the Keysight X series scopes where they provide no choice, but there are so few cases where you would want a shorter memory depth on them that it seems reasonable they left the option out.

The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
They don't cheat; it tells you the sample rate for the current mode clearly and plainly on the UI, as do many other scopes:

I find this is much easier to work with than memory depth as I'm generally concerned about the frequencies being captured, not the specific length of memory being used to do this. Again, when you always get as much memory as possible used in the captures you can forget about that parameter and focus on the ones that matter to your specific situation. Yes, going to a single capture doubles the memory depth in many situations (but not all), but when looking at the signal I can quickly assess if the sample rate is sufficient for the information I want to see and adjust the controls accordingly.
« Last Edit: August 15, 2017, 10:46:17 pm by Someone »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4545
  • Country: au
    • send complaints here
Re: New Tektronix TBS2000 oscilloscopes
« Reply #244 on: August 15, 2017, 12:05:54 am »
The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.

That isn't entirely true, because what happens if you press stop and there is nothing more to trigger on? You can still use zoom to zoom into the signal. I'm pretty sure HPAK is using a dual acquisition technique which uses a short buffer to draw an intensity graded trace and switches to a deep memory mode when you change the timebase to zoom in. The giveaway is that the intensity graded trace disappears once you change the timebase to zoom in/out.

What happens if there's no trigger after pressing STOP is an interesting question. I honestly don't know, and I'll have no access to a DSOX for a while so somebody else would need to test that out.
Daniel from Keysight engaged on this and linked to a video:
https://www.eevblog.com/forum/testgear/new-keysight-scope-1st-march-2017/msg1125192/#msg1125192

It's not confusing or magic: while in run mode the memory is halved from maximum (generally), when pressing stop the memory is held, and when you press single it uses as much memory as possible for the next trigger. It's not using the minimum possible to fill the display buffer; the data is aggregated into a 2D histogram (with vectors) at the running sample rate, and there is a larger memory available when stopped to navigate/zoom through.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #245 on: August 15, 2017, 01:34:45 pm »
Increasing the memory depth of those Rigol scopes further reduces their acquisition rates, same with the Tektronix examples David is talking about; it's like this with most scopes and well known.
Thanks Captain Obvious, but the point was not if scopes get slower with larger sample memory (they do) but the why. David seems to believe it's because of processing, but in reality this is simply down to basic math.
There is no basic maths you can apply to determine how fast a particular scope will update

There's basic math which tells you how long it takes to capture a specific segment, a time that no scope, no matter how fast, is able to beat:

T_capture [seconds] = size_memory [Samples] / f_sampling [Samples per second]

Knowing this, it should really be no surprise why a long memory scope will take more time to complete an acquisition cycle than a short memory scope.
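The formula can be sketched directly; the 4k vs 4M comparison from earlier in the thread makes the 1000x difference concrete (the function name is just an illustrative helper):

```python
def t_capture(memory_samples, sample_rate_sa_s):
    """Minimum time to fill the acquisition memory once at a given
    sample rate: T_capture = size_memory / f_sampling."""
    return memory_samples / sample_rate_sa_s

# 4k vs 4M points, both at 1GSa/s: the large memory takes 1000x as long
print(t_capture(4_000, 1e9))      # -> 4e-06 s (4us)
print(t_capture(4_000_000, 1e9))  # -> 0.004 s (4ms)
```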

Quote
Lets go back to your point where this started:
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.

Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when by the laws of physics and math at a given sample rate it takes 1000x as long to fill the large memory as to fill the small memory? Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?

Ideally you wouldn't have to compromise on memory depth; it would always be as deep as possible for the horizontal window. You are limited by sample rate for short captures and memory depth for long captures, but in between, where neither is limiting, people still choose a shorter memory depth than they could capture because deep memory slows down aspects of the scope such as the waveform display rate.

Correct, which again should be no surprise to anyone who knows about the mathematical relationship between time per acquisition, sample memory size and sample rate, as demonstrated above.

Quote
You can measure this, so I took a Rigol 1054 and did the comparison, setting the scopes to 50us per division:

[...]

The theoretical zero blind time rate is 2000 wfms/s for the Keysight, and 1667 wfms/s for the Rigol (extra 2 divisions horizontal display). They're all maxing out at 1GS/s for this test but the Keysight gives you no options to change to other memory depths, while the Rigol with all its choices fails to match the realtime performance. It even lets you choose longer depths that are captured outside the display but not shown until you stop and zoom around the capture, at the shorter memory depths the Rigol is dropping its sample rate and not putting 1GS/s data onto the screen which is why comparisons need to be made carefully. Processing (and/or memory bandwidth) is limiting the ability to draw more information to the screen and the reason why many scopes offer the choice of shorter memory depths.

That is all great, but again you miss the point. This was about long memory being useful or, as David Hess claimed, just being a marketing gimmick. No-one argued that a scope must always be run at max memory settings, if you believe this then this is just your interpretation. The discussion was if long memory on a scope is a needed feature, for which I argue that in this day and age, yes it is.

Besides, the point about processing David and I have been discussing earlier was pretty much about what processing was available in comparable scopes back then. Comparing the performance of one of the cheapest bottom-of-the-barrel scopes on the market with a not exactly cheap upper entry-level/lower mid-range DSOX3k with a dedicated waveform ASIC (MegaZoom), and then concluding that the Rigol is limited by processing power (what a surprise!), has nothing to do with what David and I were discussing, and is also a bit silly, really.

Quote
If you want to capture a specific length of data, sure, it's nice to have the controls available, and I did mention that is one corner case.

Considering that memory controls are standard on the majority of scopes I doubt it's just a "corner case". I use it regularly in a wide range of situations, as do my colleagues.

Quote
But in general what people want to capture is a length of time, and they would like to have as much memory and sample rate as possible, but for most scopes that's balancing against realtime waveform update rate. Or the user needs to capture elements with a particular frequency so they are constrained in their lowest possible sample rate; again the tradeoff appears.

Probably right (at least for standard measurement situations), but most newer scopes that offer sample memory controls also offer an automatic mode which uses only as much memory as required to maintain a high waveform rate, so at least with these scopes this isn't really an issue.

Not all do, like the Tek MDO3000, and it can add to this scope's already overall very high frustration factor, but scopes like that are thankfully in the minority.

Quote
Or we can take the Keysight X series scopes where they provide no choice, but there are so few cases where you would want a shorter memory depth on them that it seems reasonable they left the option out.

You're right, but in cases where the auto selection isn't good enough usually you'd want more memory, not less. I guess the reason the DSO-X, like any InfiniVision scope right back to the old HP 54542A/D from 1995, lacks memory controls is probably down to their MegaZoom ASIC, which (like the comparatively small memory) wasn't considered a problem for a scope that was optimized for very high update rates at the cost of pretty much everything else. After all, any other HPAK scope back then and today does have sample memory settings, and newer ones also have an automatic mode.

Quote
I find this is much easier to work with than memory depth as I'm generally concerned about the frequencies being captured, not the specific length of memory being used to do this. Again, when you always get as much memory as possible used in the captures you can forget about that parameter and focus on the ones that matter to your specific situation. Yes, going to a single capture doubles the memory depth in many situations (but not all), but when looking at the signal I can quickly assess if the sample rate is sufficient for the information I want to see and adjust the controls accordingly.

Fair enough, and I can see why not having to care is practical in most standard measurement situations. But again, that is true for most somewhat newer scopes that offer user-controllable memory, as they also have an automatic mode. This isn't an either-or decision, these days you can have both, automatic memory management for everyday and manual controls when it's needed.

Anyways, this discussion wasn't about the DSO-X, which in the context of sample memory still counts as a deep memory scope, it was about if long memory is a worthwhile feature to have or not.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #246 on: August 15, 2017, 02:55:36 pm »
What happens if there's no trigger after pressing STOP is an interesting question. I honestly don't know, and I'll have no access to a DSOX for a while so somebody else would need to test that out.

Daniel from Keysight engaged on this and linked to a video:
https://www.eevblog.com/forum/testgear/new-keysight-scope-1st-march-2017/msg1125192/#msg1125192

I remember the thread; I haven't seen the video (and where I am I can't watch it) but I'll do so when I get the chance.

Quote
It's not confusing or magic, while in run mode the memory is halved from maximum (generally)

Yes, for the double buffering.

Quote
and when pressing stop the memory is held

So when you press STOP you end up with a record half the size of the max sample memory (at best).

Quote
and when you press single it uses as much memory as possible for the next trigger.

OK, so your only way to capture a complete max memory segment is SINGLE.

Quote
It's not using the minimum possible to fill the display buffer, the data is aggregated into a 2D histogram (with vectors) at the running sample rate and there is a larger memory available when stopped to navigate/zoom through.

OK, so on a DSO-X3kT with 4Mpts and 5GSa/s, that would mean a best case (single channel) of 2Mpts in normal acquisition mode, which at 5GSa/s takes 400us to fill. Even on a perfect scope with zero blind time, 400us per acquisition translates into only 2,500 acquisitions per second. Which means that to reach the very high waveform rates the DSO-X3kT can achieve, it would have to dramatically reduce the amount of memory used, i.e. at 500k acquisitions per second that just leaves 2us for acquisition + blind time, so even that perfect scope with no blind time would have to reduce the sample memory size to 10k.
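The two directions of that budget calculation can be written out as a quick sketch; the 2Mpts/5GSa/s/500k-wfms figures are the assumed ones from the paragraph above, and the function names are just illustrative helpers:

```python
def max_acq_rate(memory_pts, sample_rate_sa_s):
    """Best-case acquisitions/s for a zero-blind-time scope that fills
    memory_pts at sample_rate_sa_s on every acquisition."""
    return sample_rate_sa_s / memory_pts

def max_record(acq_per_s, sample_rate_sa_s):
    """Largest record a zero-blind-time scope could fill per acquisition
    while sustaining acq_per_s acquisitions per second."""
    return sample_rate_sa_s / acq_per_s

print(max_acq_rate(2_000_000, 5e9))  # 2Mpts at 5GSa/s -> 2500.0 acq/s
print(max_record(500_000, 5e9))      # 500k acq/s      -> 10000.0 pts
```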

I'll check out the video if it actually explains what happens if you press STOP and no new trigger appears, but I'd guess that there won't be a large sample size left to zoom around widely.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #247 on: August 15, 2017, 03:02:10 pm »
The DSO-X, like any HPAK InfiniVision scope, is cheating as the only time it acquires a long memory segment in normal acquisition is at the last acquisition made after pressing STOP, otherwise it uses just enough memory to fill the display record. Plus it doesn't even tell you how much memory it uses.
They don't cheat, it tells you the sample rate for the current mode clearly and plainly on the UI as do many other scopes:


Fine, but if you read what you quoted again you might notice (I did even highlight it) that I was talking about sample memory, *not* sample rate.

Can you point me where the sample memory in used is shown on your screenshot?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27003
  • Country: nl
    • NCT Developments
Re: New Tektronix TBS2000 oscilloscopes
« Reply #248 on: August 15, 2017, 03:07:50 pm »
OK, so on a DSO-X3kT with 4Mpts and 5GSa/s, that would mean a best case (single channel) of 2Mpts in normal acquisition mode, which at 5GSa/s takes 400us to fill. Even on a perfect scope with zero blind time, 400us per acquisition translates into only 2,500 acquisitions per second. Which means that to reach the very high waveform rates the DSO-X3kT can achieve, it would have to dramatically reduce the amount of memory used, i.e. at 500k acquisitions per second that just leaves 2us for acquisition + blind time, so even that perfect scope with no blind time would have to reduce the sample memory size to 10k.
As I wrote before, your math is too simplified. You don't have to fill the entire acquisition memory if you know the data is not going to be used. This is the case when a new trigger arrives before the acquisition memory is completely filled. After all, at short time/div settings you'll be looking at a fraction of the acquisition memory anyway. The rest is outside the screen.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: New Tektronix TBS2000 oscilloscopes
« Reply #249 on: August 15, 2017, 05:09:54 pm »
OK, so on a DSO-X3kT with 4Mpts and 5GSa/s, that would mean a best case (single channel) of 2Mpts in normal acquisition mode, which at 5GSa/s takes 400us to fill. Even on a perfect scope with zero blind time, 400us per acquisition translates into only 2,500 acquisitions per second. Which means that to reach the very high waveform rates the DSO-X3kT can achieve, it would have to dramatically reduce the amount of memory used, i.e. at 500k acquisitions per second that just leaves 2us for acquisition + blind time, so even that perfect scope with no blind time would have to reduce the sample memory size to 10k.
As I wrote before, your math is too simplified.

No, it's not. It's simply one of the critical limitations in linear sampling systems.

Quote
You don't have to fill the entire acquisition memory if you know the data is not going to be used. This is the case when a new trigger arrives before the acquisition memory is completely filled. After all, at short time/div settings you'll be looking at only a fraction of the acquisition memory anyway; the rest is outside the screen.

Yes, you don't have to fill the entire memory, that much is clear. But if the scope doesn't, then where does it get the data to zoom out from?

You can't have it both ways. Either the scope uses little memory and can achieve a high update rate, or it uses more memory, which means the update rate goes down.

Now as far as the DSO-X3kT is concerned, that means it either uses a big part of the available memory, or it doesn't and thereby achieves its fast update rates.

We know that the DSO-X uses all memory (4Mpts) in SINGLE mode.

We know that the DSO-X uses half memory (2Mpts) in NORMAL/AUTO mode on the *last* acquisition made after pressing STOP.

We know that the DSO-X uses a lot less than the available memory (which would be 2Mpts best case) for all acquisitions in NORMAL/AUTO except the last one, in order to maintain its very high update rates (again, it uses the full available memory only on the last acquisition after pressing STOP).

So what about when the scope is in NORMAL mode and you press STOP and no further trigger appears? The scope will not have made a longer 'last' acquisition, as it hasn't been triggered again. And the acquisitions made before STOP was pressed would only have used a small part of the available memory, as otherwise the update rate would have dropped like a rock (see the formula I stated, around which there is no way). And with the last acquisition only using a small part of the memory, there simply is no data to zoom out into.
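The bound being argued here can be written out as a small Python sketch. The figures are the ones from the quoted post; the zero-blind-time assumption is the idealized best case, so real scopes can only do worse:

```python
# Sketch of the update-rate bound: even with zero blind time, one
# acquisition cannot complete faster than record_length / sample_rate,
# so the waveform rate is at most sample_rate / record_length.
SAMPLE_RATE = 5e9  # Sa/s, figure from the quoted post

def max_waveforms_per_second(record_len: int, sample_rate: float) -> float:
    """Upper bound on acquisitions/s for an idealized zero-blind-time scope."""
    return sample_rate / record_len

print(max_waveforms_per_second(2_000_000, SAMPLE_RATE))  # 2,500/s with 2 Mpts
print(max_waveforms_per_second(10_000, SAMPLE_RATE))     # 500,000/s with 10 kpts
```

Filling 2Mpts per trigger caps the idealized scope at 2,500 waveforms/s, so sustaining 500k waveforms/s forces the record length down to 10kpts, exactly the tradeoff described above.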
 

