New Tektronix 3 Series MDO
2N3055:

--- Quote from: snoopy on April 30, 2020, 12:49:35 pm ---
--- Quote from: 2N3055 on April 30, 2020, 09:59:36 am ---
--- Quote from: snoopy on April 30, 2020, 02:03:22 am ---
--- Quote from: Wuerstchenhund on April 29, 2020, 11:37:57 am ---

--- Quote from: snoopy on April 29, 2020, 02:29:19 am ---
--- Quote from: Wuerstchenhund on April 28, 2020, 03:22:57 pm ---InstaVu was a crutch: high update rates were achieved in a special mode through data reduction, which made it impossible to run measurements or any other analysis on the waveform.

It was only an "industry first" in the sense that no-one else implemented such a mode, very likely because of its limitations. At around the same time, HP came out with its first MegaZoom-equipped scope (the HP 54645A/D, the 'D' also being the "industry first" MSO), which achieved very high update rates in normal operation, with no limitations on measurements.

And when it comes to emulating analog functionality, there simply is nothing which better resembles an analog scope than MegaZoom (if that's what you want). It's as simple as that.

--- End quote ---

That's not why you would use InstaVu. InstaVu was used to show up rarely occurring glitches that other scopes were blind to, or that might otherwise take hours of sitting in front of the scope to capture even once!
--- End quote ---

So in what way is this different from any other high waveform rate technology like MegaZoom?

And while your trust in InstaVu is admirable, the reality is that even at 400k wfms/s your scope is still blind >90% of the time! Even scopes like the Keysight DSO-X3000T, which achieve up to 1'030'000 waveforms/s, are blind 89.70% of the time. Which means there is a 9-out-of-10 chance that your scope will miss any given event.

Which means the *only* way to find rare events (or to make sure there are none!) is to use triggers.

And this is the reason why the only market segment that actually cares about update rates is the low-end/entry-level segment, mostly because it serves people coming from analog scopes who prefer analog-scope-derived methodology. Above that, the update rate is pretty much irrelevant, and most high-end scopes achieve only comparatively low trigger rates. Which, again, doesn't matter, because no-one spends $3k on a scope to search for glitches by staring at a screen.


--- End quote ---

Yes, but you have to know what kind of glitch to trigger on, otherwise you are poking around in the dark, and that's if you even have the ability to trigger on it! But you still didn't answer my question about the original MegaZoom acquisition rate? I'd be interested to know ;) Here is a comparison between an early Tek scope and an apparently still current model Keysight scope! Not bad for a mid-90's Tek scope ;)

https://youtu.be/uUM7UDWifWw?t=1809

--- End quote ---

What "apparently still current model Keysight scope !", Agilent MSO6104A ?
That thing is dead and gone, replaced by MSOX3000 series many moons ago...

And what "magical glitches" are everybody talking about? Runts, too short pulses, dropouts, rise time anomalies ? What?
All of those are well covered by triggers. 
This was discussed ad nauseam many times, like Someone nicely said.
Using on screen persistence to capture signal anomalies can be used but has limited usability.
Only information you get is that you caught something, but not when and in correlation to what.
It can be used only as a proof that there are some anomalies, and hopefully give enough information for operator to devise triggering scenario to reliably capture such anomalies every time. So you can count how many are there, what is distribution and try to correlate with system state and other signals to try to find a source.
Also, if you don't catch anything on screen, it is NOT a proof all is well, because you maybe didn't wait long enough...

I personally use screen persistence, but first I go through a set of well-known triggers (rise time, pulse width, runt), which is a really quick thing to do, and if those don't catch anything, I might let it run in infinite persistence mode for a few hours just to be sure...
You can also set up mask mode and use that too. Nobody mentions this in this context, but it is probably the best way to do it. It is a built-in anomaly detector that will catch any deviation of the signal, and it will give you much more info than display persistence, because it gives you stats and a confidence interval...
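
As a sketch of the kind of numbers a mask test gives you (Python, using the Wilson score interval; the counts are made-up examples, not from any particular scope):

--- Code: ---
# Confidence interval for the anomaly rate estimated from a mask test.
# n = waveforms tested against the mask, k = mask violations (made-up numbers).
import math

def wilson_interval(k, n, z=1.96):               # z = 1.96 -> ~95% confidence
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

n, k = 1_000_000, 3                              # 3 violations in 1M waveforms
lo, hi = wilson_interval(k, n)
print(f"anomaly rate ~ {k/n:.1e}, 95% CI [{lo:.1e}, {hi:.1e}]")
--- End code ---

And if the mask catches nothing, the "rule of three" still only bounds the true rate to about 3/n at 95% confidence, which is exactly why a clean persistence screen is not proof that all is well.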

--- End quote ---

What exactly is your point?? Tek had this functionality in the mid-90's when no other scope vendor had it. I have one of these scopes and have used it for that purpose many times. I don't worry about hunting through all of the triggers and trigger parameters to find a glitch when I can just push a single button, sit back, and watch the sideshow on the screen ;)

cheers

--- End quote ---

I apologize, for, something... :-+
Wuerstchenhund:

--- Quote from: Someone on April 29, 2020, 11:56:12 pm ---
--- Quote from: Wuerstchenhund on April 29, 2020, 10:53:52 am ---
--- Quote from: james_s on April 28, 2020, 05:05:02 pm ---I don't really grasp what you mean by it being slow. I have a TDS3054 and a TDS784C and the only time I've ever noticed any kind of slowness in either one is using deep memory on the TDS784. The TDS3000 feels very snappy to me, what do I need to do to see this "painfully slow" lag you refer to? I'm genuinely curious and don't know what you're talking about.
--- End quote ---

It's not about 'lag' or general controls. There isn't any input lag when operating the scope. But unfortunately the user interface isn't everything.

For example, try mask testing on the TDS3000. Or FFT. The TDS3000 is also slow when it comes to waveform rates: in normal mode its trigger rate is some 450 wfms/sec. This rises to 3k wfms/s or so in Fast Trigger mode, but then the sample memory (at 10kpts not exactly large to begin with) is limited to a measly 500pts. It's not a big problem if you can get by with the available trigger suite (which is quite good if the advanced trigger option is installed), but that doesn't change the fact that the scope *is* slow, and when used in an 'analog scope' manner (like searching for glitches through trace persistence) it will perform poorly.
--- End quote ---

Slow waveform update rates make it bad, got it...
--- End quote ---

You clearly didn't 'get' it. I didn't say that the low update rate in normal operation was a problem; I actually said it isn't (and, just for you, I highlighted above where I did that so you can easily find it ;) ).

The point I was making is that while the scope might feel OK when you twiddle the knobs, it's still a very slow scope. And while the waveform rate isn't really a problem, the slow architecture is for tasks like mask testing, math or FFT.

It should also be remembered that the TDS3000, while looking a lot like the entry-level scopes of today, wasn't an entry-level or even particularly cheap scope (the 500MHz version without any options ran some $18k+, and even the 100MHz 2ch base model was over $7k!). Back then in 1999 its competitors were not common bench scopes like the Agilent 54622A (which was around $4k back then if I remember right) but other expensive scopes like the Agilent Infiniium 54800 Series or the LeCroy WaveRunner LT (and, for the 500MHz models, even the LC Series). Just to put this into some context.


--- Quote ---
--- Quote from: Wuerstchenhund on April 29, 2020, 11:37:57 am ---So in what way is this different from any other high waveform rate technology like MegaZoom?

And while your trust in InstaVu is admirable, the reality is that even at 400k wfms/s your scope is still blind >90% of the time! Even scopes like the Keysight DSO-X3000T, which achieve up to 1'030'000 waveforms/s, are blind 89.70% of the time. Which means there is a 9-out-of-10 chance that your scope will miss any given event.

Which means the *only* way to find rare events (or to make sure there are none!) is to use triggers
--- End quote ---
Wait, waveform update rates are useless? (others will disagree on this point). Wash my fur but don't get me wet?
--- End quote ---

I always said that update rates are pretty meaningless, yes.


--- Quote ---The reality is there is a balance: triggers can find some sorts of problems, and realtime viewing others,
--- End quote ---

Nope. The reality is that glitch finding via persistence mode is a crutch from a time when scopes were so primitive that it literally was the only tool available. Sophisticated triggers as we have them today didn't exist, storage (where it was even available) was utterly poor, and measurement capabilities were nonexistent.

Persistence mode does have its place, but only where it is ensured that the events of interest occur within the actual acquisition window, which means that some basic understanding of the event must have been established first. Eye diagrams are one example.


--- Quote ---it's all application-specific and neither is better than the other for everything.
--- End quote ---

Simple math says otherwise. The only way you can be sure that you captured every event within the time period of observation is by using triggers.

Just to be sure, we're talking about "glitch hunting", i.e. finding rare events. Persistence mode of course has some use for other tasks, e.g. mask tests.
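
To put numbers on that "simple math" (a rough Python sketch; the 100 ns acquisition window is an assumption, chosen because it reproduces the 89.70% figure quoted above):

--- Code: ---
# Back-of-envelope blind-time calculation for a free-running DSO.
# Assumption: each acquisition captures a ~100 ns window (not a vendor spec).
wfms_per_sec = 1_030_000     # e.g. peak update rate of a Keysight DSO-X3000T
acq_window_s = 100e-9        # assumed time captured per waveform

live_fraction = wfms_per_sec * acq_window_s   # 0.103 -> live 10.3% of the time
blind_fraction = 1.0 - live_fraction          # 0.897 -> blind 89.7% of the time

print(f"live {live_fraction:.2%}, blind {blind_fraction:.2%}")
--- End code ---

A random rare event therefore has roughly a 9-in-10 chance of falling into the dead time, no matter how fast the display updates, whereas a trigger armed on the event itself is waiting for it the whole time.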


--- Quote ---You've been consistently coy about highlighting example applications or methods to enlighten us readers as to specific advantages.
--- End quote ---

What is there to highlight? With a good scope I can trigger on *any* kind of event, no matter what. Runt? Missing pulses? Too many pulses? Wrong data bits in a serial transmission? Slew rates outside spec? Malformed pulses? Anything else? Doesn't matter, with a good scope I can trigger on it. How depends on the scope (that's where knowing your instrument comes in), but even a TDS3000, if it has the advanced trigger option installed, can go a long way finding stuff with triggers.

So I'm really curious as to what kind of sporadic events you believe can only be found with persistence modes.


--- Quote ---Ideally a scope would be capable in both areas; luckily those exist too.
--- End quote ---

Sure, for a standard entry-level or low-midrange bench scope (a simple scope), but mostly because these scopes are often limited in what triggers they offer (although that is becoming less and less of an issue, as even many cheap scopes offer a surprisingly versatile range of triggers) and because these scopes often fall into the hands of hobbyists and other people who want to treat them like the analog scopes of old (which will continue as long as outdated methodology is still passed on as 'best practice').

For anything above the lower mid-range the focus has always been on triggers and analysis capabilities, and high-end scopes often come with paltry trigger rates. Which, again, isn't a problem, because no-one pays $20k+ for a scope to start glitch hunting by staring at a persistence screen. Relying on persistence mode to find rare events is also completely useless for qualification, e.g. demonstrating the absence of a specific type of event, or even that the number of events is within a certain range.

Over the years the waveform rates of high-end scopes have improved, but that is mostly a side effect of the need to process ever more data (generated by very fast ADCs, often operating at increased resolution, and by the various analysis and processing tools) as quickly as possible. Technical progress is already having the same effect on entry-level scopes, where newer models achieve respectable update rates without relying on special modes or proprietary ASICs, and this will only continue. At the same time, the trigger capabilities of entry-level scopes are constantly improving, which means persistence-mode glitch hunting is becoming as obsolete in this class as it has been for more expensive scopes.
Wuerstchenhund:

--- Quote from: nctnico on April 29, 2020, 04:47:42 pm ---
--- Quote from: Wuerstchenhund on April 29, 2020, 11:37:57 am ---It's not an issue with *any* DSO! ADC resolution is completely independent of what is shown on the screen. It doesn't matter if you have one, two, three, four or eight traces, the ADC resolution doesn't change. Why should it?

--- End quote ---
It depends on how the traces are shown. If you take one grid and change the v/div so you can fit 4 traces, you'll lose ADC resolution (and thus math precision). An alternative is to have multiple grids (split display) in which each trace can be shown at full height (IOW: using a lower v/div setting); in this case you won't lose ADC resolution. But this isn't a modern feature.

--- End quote ---

You are right of course; I was assuming that the vertical div setting isn't changed when adding traces. Of course, if you change the v/div setting then you change the effective dynamic range which is used for the signal.

You'd only do this to visually separate different traces on a simpler scope which only has a single graticule, as better scopes usually offer two or more graticules, so traces don't have to share: every trace gets its own graticule, and these are automatically scaled by the scope so everything fits on screen.
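
As a rough illustration of the resolution loss (assuming an 8-bit ADC spanning an 8-division graticule; illustrative numbers, not from any particular scope):

--- Code: ---
# Effective ADC resolution when a trace is shrunk to share one graticule.
# Assumption: 8-bit ADC mapped across the full 8-division screen (illustrative).
import math

adc_bits = 8
screen_divs = 8
trace_divs = 2                   # trace scaled down so 4 traces fit one grid

codes_used = 2**adc_bits * trace_divs / screen_divs   # 64 of 256 codes
effective_bits = math.log2(codes_used)                # 6.0 bits

print(f"{codes_used:.0f} codes used -> {effective_bits:.1f} effective bits")
# log2(screen_divs / trace_divs) = 2 bits lost; a split display avoids this
--- End code ---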

0culus:
I got an email from Tektronix this afternoon offering discounts on the 3 and 4 Series. Apparently they are offering 16 digital channels for free and 75% off the software bundle for the 3 Series.

https://www.tek.com/promotions/Spring-Summer-2020-Sale
Someone:

--- Quote from: Wuerstchenhund on April 30, 2020, 01:12:11 pm ---
--- Quote from: Someone on April 29, 2020, 11:56:12 pm ---You've been consistently coy about highlighting example applications or methods to enlighten us readers as to specific advantages.
--- End quote ---

What is there to highlight? With a good scope I can trigger on *any* kind of event, no matter what. Runt? Missing pulses? Too many pulses? Wrong data bits in a serial transmission? Slew rates outside spec? Malformed pulses? Anything else? Doesn't matter, with a good scope I can trigger on it. How depends on the scope (that's where knowing your instrument comes in), but even a TDS3000, if it has the advanced trigger option installed, can go a long way finding stuff with triggers.

So I'm really curious as to what kind of sporadic events you believe can only be found with persistence modes.
--- End quote ---
Still being coy; some things never change. If you're trying to find an unknown problem, you're searching through some number of triggers that are each an abstraction of what might be causing the problem: for instance, when exactly does a runt qualify if it's only partial height on an edge? There are automated tools to step through multiple triggers and check them with realtime sampling, but crucially they don't run in parallel, so your argument about blind time applies equally (often worse) to them as soon as you have a non-trivial number of different trigger conditions to check. If it's offline analysis, again, the speed at which that occurs is important to compare against your same blind-time example. Deep memory is great for one-off events, but extracting maximum information from realtime streams can be done more effectively with waveform accumulation in many real-world applications (note the lack of any absolute statement such as "all").

And you assume that triggers can describe the problem, which may be true for some digital signal analysis, but there is a world of other signals out there, such as power and analog, whose faults/problems are hard to describe. Your statements are not as absolute and universal as you try to present them, yet you endlessly argue that they are irrefutable truths. We're not cherry picking out-of-context points; you are the one actively highlighting how universal your "truths" are:

--- Quote from: Wuerstchenhund on April 29, 2020, 11:37:57 am ---Which means the *only* way to find rare events (or to make sure there are none!) is to use triggers.
--- End quote ---

If you narrowly frame where/why such approaches are superior you might have a point, but you never do. And as soon as the alternatives are presented factually and in context you have to discount them as not applicable to your imagined and non-specific application.


--- Quote from: Wuerstchenhund on April 30, 2020, 01:12:11 pm ---
--- Quote from: Someone on April 29, 2020, 11:56:12 pm ---it's all application-specific and neither is better than the other for everything.
--- End quote ---
Simple math says otherwise. The only way you can be sure that you captured every event within the time period of observation is by using triggers.

Just to be sure, we're talking about "glitch hunting", i.e. finding rare events. Persistence mode of course has some use for other tasks, e.g. mask tests.
--- End quote ---
Mask testing and building eye diagrams are adjacent to, and often tightly coupled to, the realtime waveform update rate; they could be considered almost synonymous given what they present. But you try to separate them to drive your narrative.

"simple math" is what you don't use to compare the alternatives, you're quick to point out that many waveforms per second /= 100% visibility. But then fail to show that alternatives are superior. Checking through 10 trigger configurations in realtime (not counting the configuration overhead) would be at least 90% blind, the same figure as is often settled on for high waveform capture rates. The question then is can the problem be surely found with only 10 triggers? or, does a 2D histogram of the signal (itself triggered to already narrow what your are looking at) contain more information?

Also note that dead time can approach 0% for slower signals (the update rate may be lower but so is the blind time), yet you always drive the discussion exclusively to high-speed digital with short acquisition windows, where the (effectively fixed) re-arm period makes the blind time look as bad as possible. More subtle and unspoken framing that you don't justify against the applications. This is the sort of sly and misleading sales tactic that salespeople resort to: slowly moving someone away from what they actually wanted by convincing them that something else is shiny and impressively higher performance, except none of those points actually apply to the customer's application, or are compared only in corner cases to make everything seem rosy to their advantage.
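
The dead-time arithmetic for slower signals (sketch; the fixed 1 µs re-arm period is an assumed figure, not a spec):

--- Code: ---
# Blind fraction = re-arm time / (acquisition window + re-arm time).
# Assumption: a fixed ~1 us re-arm period between acquisitions.
rearm_s = 1e-6
for acq_s in (100e-9, 10e-6, 10e-3):          # fast digital ... slow analog
    blind = rearm_s / (acq_s + rearm_s)
    print(f"window {acq_s:.0e} s -> blind {blind:.2%}")
# window 1e-07 s -> blind 90.91%
# window 1e-05 s -> blind 9.09%
# window 1e-02 s -> blind 0.01%
--- End code ---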

If you only need to look for a single trigger, sure, you can claim the 100%, but that misses any other characteristics that are unknown beforehand, and it doesn't build a statistical measure. In reality both methods can be comparable, with one or the other more effective depending on the specific application.