Keysight said...
Who cares what some Keysight marketer said?
I would think that rise and fall time might also be a factor that determines what you call faster or slower
No, that's called "bandwidth".
"Fastest" means "highest waveform update rate" (number of waveforms per second that can be acquired and displayed). By that measure, the MXO4 is an order of magnitude faster than the UXR.
A lot of people think that sampling rate = speed but don't realize that most scopes actually discard a very large percentage of the samples they acquire and thus are "blind" most of the time.
AFAIK the MXO4 is the only scope to actually display the blind time on screen.
If I was them I'd be bragging too.
13 years now since the megazoom IV ASIC...
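To put rough numbers on that "blind most of the time" point, here is a minimal back-of-the-envelope sketch (Python; the figures are purely illustrative and not the specs of the MXO4, the UXR, or any other scope) of how waveform update rate, record length, and sample rate translate into the fraction of real time a scope actually captures:

```python
# Illustrative only: how much of real time does a scope actually "see"?
# None of these numbers are taken from a datasheet.

def live_fraction(waveforms_per_s: float, samples_per_waveform: int, sample_rate: float) -> float:
    """Fraction of wall-clock time covered by acquisitions; the remainder is blind time."""
    capture_window = samples_per_waveform / sample_rate   # seconds of signal per acquisition
    return min(1.0, waveforms_per_s * capture_window)

frac = live_fraction(waveforms_per_s=1_000_000, samples_per_waveform=1_000, sample_rate=5e9)
print(f"live {frac:.1%}, blind {1 - frac:.1%}")           # live 20.0%, blind 80.0%
```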
Yes, you gotta wonder who is ever going to buy these at this price, so why bother displaying them in the catalog? 
(If you're going to make prototypes you're probably still not going to buy them through this channel.)
Military or government. They need to fix a bit of kit and need a spare chip from stock and an already approved vendor.
13 years now since the megazoom IV ASIC...
Pretty impressive it took 13 years for someone to beat the Megazoom IV ASIC.
Mainly because high waveform update rates are more of a marketing gimmick than actually useful. Features like reverse brightness are much more handy for spotting deviations in a signal. But other than that, using triggers is the way to go, together with segmented recording, so you can capture each individual event (with several other signals if you like) and study cause & effect in detail afterwards.
"Fastest" means "highest waveform update rate" (number of waveforms per second that can be acquired and displayed). By that measure, the MXO4 is an order of magnitude faster than the UXR.
AFAIK the MXO4 is the only scope to actually display the blind time on screen.
Blind time (didn't know it was called that) must be pretty long for our 8-channel PicoScope.
Mainly because high waveform update rates are more of a marketing gimmick than actually useful. Features like reverse brightness are much more handy for spotting deviations in a signal. But other than that, using triggers is the way to go, together with segmented recording, so you can capture each individual event (with several other signals if you like) and study cause & effect in detail afterwards.
It's hard to trigger on something you never saw in the first place.
Back when Tektronix came out with their Digital Phosphor Oscilloscopes, which accumulate a histogram directly in the frame buffer, the idea was to use DPO mode to find the glitch, and then advanced triggering and segmented memory to capture it.
Mask testing should be able to do it all; however, at least back then, mask testing was much slower than advanced triggering, and I suspect that is still the case.
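For what it's worth, the "histogram into the frame buffer" idea is easy to sketch in software. This is a toy version of the concept only; real DPOs do this in dedicated acquisition hardware, and the array sizes and test signal here are arbitrary:

```python
# Toy persistence/DPO-style display: accumulate many acquisitions into a 2-D
# hit-count array (voltage bin x time bin). Frequent trajectories pile up bright;
# a rare glitch shows as a faint pixel you can then build a trigger around.
import numpy as np

def accumulate_persistence(acquisitions, v_min, v_max, n_levels=256):
    acquisitions = list(acquisitions)
    n_samples = len(acquisitions[0])
    hist = np.zeros((n_levels, n_samples), dtype=np.uint32)
    cols = np.arange(n_samples)
    for wfm in acquisitions:
        rows = np.clip(((wfm - v_min) / (v_max - v_min) * (n_levels - 1)).astype(int),
                       0, n_levels - 1)
        hist[rows, cols] += 1          # one hit per time bin per acquisition
    return hist

# 10,000 sine acquisitions, one of which carries a single-sample spike:
t = np.linspace(0, 1, 500)
wfms = [np.sin(2 * np.pi * 5 * t) for _ in range(10_000)]
wfms[1234][250] = 1.8
hist = accumulate_persistence(wfms, v_min=-2.0, v_max=2.0)
print("brightest pixel hits:", hist.max(),
      "| distinct levels at the glitch column:", np.count_nonzero(hist[:, 250]))
```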
Yes, you can also use infinite persistence mode on any scope.
The problem of blind time still remains, though, in DPO or infinite persistence mode. The longer the blind time, the longer you theoretically have to wait for your glitch to become visible.
And that's only IF you suspect an infrequent glitch in the first place and enable those modes.
The whole point of the push towards fast updating, started by Keysight's MegaZoom and now bested by the R&S MXO-EP ASIC, is that you stand a better chance of seeing something infrequent while just regularly using the scope.
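To make the waiting-time point concrete, here is a small sketch under an assumption of my own (glitches landing at random, independent instants): if the scope is live only a fraction f of the time, each occurrence is caught with probability roughly f, so you expect to wait about (mean glitch interval)/f before one finally lands inside an acquisition window.

```python
# Rough model, my assumption: glitches occur at random instants, so the chance of
# catching any one occurrence equals the scope's live (non-blind) fraction of time.

def expected_wait_s(mean_glitch_interval_s: float, live_fraction: float) -> float:
    """Expected wall-clock wait until a glitch falls inside an acquisition window."""
    return mean_glitch_interval_s / live_fraction

# A glitch roughly every 10 s:
print(expected_wait_s(10, live_fraction=0.01))   # 99% blind -> ~1000 s on average
print(expected_wait_s(10, live_fraction=0.20))   # 80% blind -> ~50 s on average
```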
If it's still not obvious to them... the other term is... dead time... the time during which the trigger circuit is shut off and the acquisition memory is not enough to capture the rest...
https://www.ni.com/en-my/shop/electronic-test-instrumentation/oscilloscopes/what-are-oscilloscopes/every-measurement-starts-with-a-trigger.html But instead of the marketing confusion of the "fastest" scope, it's more practical and "educational" to talk about the scope with the least blind time, or 10% or 5% blind, or even better if it can achieve 0% blind/dead time... 0% blind time is not a gimmick; it's a 100% guarantee you'll capture the glitch on its first occurrence (if the infinite persistence feature is turned on, a feature that is common even in cheap DSOs today).
If it's still not obvious to them... the other term is... dead time
When I made my video and whitepaper explaining these concepts (in anticipation of the MXO4 launch), I struggled with what terminology to use: "blind time" vs. "dead time" and then all of the possible permutations of the words "acquisition," "update," "capture," "waveform," and "rate", e.g.
- acquisition rate
- waveform acquisition rate
- update rate
- waveform update rate
- capture rate
- etc. etc.
So much easier making content on spectrum analyzers and VNAs, where everyone uses (more or less) the same terminology for everything ...
The whole point of the push towards fast updating, started by Keysight's MegaZoom and now bested by the R&S MXO-EP ASIC, is that you stand a better chance of seeing something infrequent while just regularly using the scope.
When did HP release their first MegaZoom DSO? Tektronix released their first DPO-style DSO in 1995 (the TDS500B and TDS700A with InstaVu), but they did not call it DPO at the time. There is a gap in the HP catalogs available online from 1994 to 1996: the 1997 HP catalog discusses HP MegaZoom, but the 1993 catalog does not.
It's hard to trigger on something you never saw in the first place.
I can't keep my eyes open and stare at a screen for that long. A glitch is gone in a blink of an eye. If I really want to know whether a signal is not going out of bounds, I set a scope to infinite persistence and let it run overnight as a last resort. Otherwise I set a trigger to a limit (pulse width, height, runt, etc.). The more time / periods you can fit on the screen, the better the ratio of blind time to capture time becomes. So a lower waveform update rate can actually be better in terms of the blind-time-to-captured-time ratio! I demonstrated that in the RTM3004 review I did a couple of years ago (combined with the rather unique reverse brightness feature, which I suspect is also present in the MXO4).
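The trade-off being described can be sketched with a fixed re-arm overhead per acquisition (the 10 µs overhead below is an assumption for illustration, not a measured spec): capturing a longer window per trigger lowers the waveform update rate but improves the captured-to-blind ratio, which is the point being made here.

```python
# If each acquisition cycle carries a (roughly) fixed re-arm/processing overhead,
# a longer capture window per trigger improves time coverage even though the
# waveform update rate drops. The 10 us overhead is an assumed figure.

def coverage(capture_window_s: float, rearm_s: float) -> float:
    """Fraction of real time captured per acquisition cycle."""
    return capture_window_s / (capture_window_s + rearm_s)

rearm = 10e-6
for window in (200e-9, 10e-6, 500e-6):
    rate = 1.0 / (window + rearm)
    print(f"{window*1e6:8.2f} us/screen -> {rate:9.0f} wfm/s, coverage {coverage(window, rearm):5.1%}")
```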
What is the thing near the center of the picture that has blue near the terminals, a black body, and three red dots on it? I haven't the faintest. Offset in the video: 8:44.
Thanks!
Those are SMT inductors, probably from Coilcraft and probably in the 10 nH to 10 µH range.
They are sort of a pain in production because they are wound with wire that makes the proverbial RCH look like LMR600 coax. The windings are covered with blue plastic on top but are exposed on the bottom. They are easily damaged during assembly and the welds at the contact ends aren't very stout either. As if that weren't enough, they are difficult to inspect because the solder contact area may not be visible from overhead.
I can't keep my eyes open and stare at a screen for that long. A glitch is gone in a blink of an eye. If I really want to know whether a signal is not going out of bounds, I set a scope to infinite persistence and let it run overnight as a last resort.
If it's only one or two fast glitches and you've got dead time in your 'scope then it might not be enough.
Again, the point is that a faster-updating scope has a much greater chance of showing you something you are not expecting, and it has a much greater chance of doing that during normal operation of the scope.
I don't know why this simple fact is questionable. Sure, you can argue about the need, and bang-per-buck, and what have you, but all other things being equal a faster-updating scope is better than a slower one.
This is why scope manufacturers have been focusing on and improving this over many decades now.
I know how the early Tektronix DPOs work; however, are there any detailed descriptions of how HP's MegaZoom, and now whatever R&S is doing, actually work? Google results were not very informative. I suspect HP published a detailed article, but it is too old to be considered relevant in search results.
When I made my video and whitepaper explaining these concepts (in anticipation of the MXO4 launch), I struggled with what terminology to use: "blind time" vs. "dead time"...
Memory depth / sample rate = the time frame of signal captured (T1); the total time from one trigger to the next, i.e. until the trigger is re-armed, is (T2). 100 x (T2 - T1) / T2 is the blind time percentage. I'm sure you are a lot better at this than me, or maybe you did provide it in the datasheet, I haven't looked, I can't afford it anyway. Cheers.
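Reading the formula above as T1 = captured window per acquisition and T2 = the full trigger-to-trigger cycle time, it drops straight into a couple of lines of Python (the example figures are invented, not from any datasheet):

```python
# Blind-time percentage from the formula above (figures are invented):
# T1 = memory depth / sample rate = seconds of signal captured per acquisition
# T2 = total time per trigger cycle (capture + processing + trigger re-arm)

def blind_time_percent(memory_depth: int, sample_rate: float, cycle_time_s: float) -> float:
    t1 = memory_depth / sample_rate
    t2 = cycle_time_s
    return 100.0 * (t2 - t1) / t2

# 10 kpts at 5 GSa/s captured once every 20 us -> 2 us seen out of every 20 us:
print(blind_time_percent(10_000, 5e9, 20e-6))   # 90.0 (% blind)
```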
When I was looking at this in detail a couple years ago with the Tektronix DPO design in mind, I concluded that double buffering with two separate acquisition records was required to minimize blind time, which means halving record length.
I think the old Tektronix DPO design was limited by the rearm time of its analog trigger, which could have been made much faster.
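A minimal software sketch of that ping-pong idea (my own illustration, not Tektronix's or anyone else's actual architecture): one acquisition record is processed and drawn while the other is already armed and filling, so the remaining blind time is mostly the trigger re-arm gap, at the cost of splitting the memory into two records.

```python
# Toy double-buffered acquisition loop: two records alternate between "being filled"
# and "being processed", so a new acquisition can start while the previous record
# is still being displayed/measured.
import threading, queue

def fill_record(i: int) -> list:
    """Stand-in for the hardware filling one acquisition record."""
    return [i] * 8

def acquisition_loop(n: int, done: "queue.Queue") -> None:
    records = [None, None]                 # the two halves of the acquisition memory
    for i in range(n):
        bank = i % 2                       # ping-pong between the two banks
        records[bank] = fill_record(i)     # this bank fills while the other is processed
        done.put((bank, records[bank]))

def processing_loop(n: int, done: "queue.Queue") -> None:
    for _ in range(n):
        bank, rec = done.get()
        print(f"drew/measured bank {bank}: {rec[:3]}...")

q = queue.Queue(maxsize=1)                 # at most one finished record waiting
worker = threading.Thread(target=processing_loop, args=(6, q))
worker.start()
acquisition_loop(6, q)
worker.join()
```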
Dual buffering is the generic approach to acquisitions in every DSO. If you look closely at the specs of a DSO, you can see that some have more memory in single-shot and/or sequence mode.