The trigger rate and waveform update rate can be completely disconnected. They are not in any way equivalent. Keysight even have to clarify this in their marketing material to avoid exactly your sort of misleading claim:
When reviewing the update rate process, it seems like trigger rate could be used interchangeably with update rate; however, some oscilloscopes will trigger multiple times while the data is being processed and ignore the newly triggered event, making the trigger rate different than the oscilloscope update rate. The faster the update rate, the more events are being captured and analyzed by the oscilloscope
Yes, this can happen on scopes with unlocked triggers when the processing time exceeds the re-arm time (I vaguely remember seeing this effect on, I think, the HP 54700 Series, or it might have been one of the early Infiniiums).
I doubt this is an issue for most scopes built this side of 2000, and I challenge you to find any A-brand scope released since 2010 which does that.
Waveform update rate is widely understood to be the number of acquisitions per second that are drawn to the screen.
Yes, on analog scopes. Not on digital scopes, though, where there is no fixed relationship between how often a scope updates its internal view of the signal and how often the screen is redrawn (which usually happens at a low rate like 60 Hz). On DSOs, the update rate therefore means the rate at which a scope can update its waveform record.
Segmented mode in most scopes doesn't draw all the data to the screen as it's assumed you will go back through them after the capture sequence is complete.
Indeed. The point of sequence mode is essentially to stretch the existing memory to capture repetitive events which are spaced too far apart for a normal long acquisition. So the scope runs a series of short acquisitions, but saves on processing time.
It's as stark as the difference between "filing" documents to the in-tray, or reading them.
Not sure what you're on about with "filing documents" here.
Dumping acquisitions to memory without them being seen is in no way comparable to putting their information onto the screen in real time. One is trivial, the other is resource intensive. Just as pushing through acquisition data without putting it to the screen is not comparable to drawing it there.
This is where you go wrong again.
In general, every event that is captured by the scope and ends up in acquisition memory is displayed. The excessive triggers KS was talking about happen during the blind time, not during acquisition.
There are some exceptions, though:
- Some scopes have some kind of 'pre-roll' and 'post-roll' padding, i.e. they start their acquisition a short time before they actually capture data which is processed, and continue capturing for a short time after they stop acquiring data for processing. It's often a consequence of certain aspects of the scope's internal architecture. But this 'padding' is completely transparent and not visible to the user. So for all intents and purposes, the true acquisition period is what the user selected (the padding is, essentially, just a bit more dead time).
- Some scopes, again like Keysight InfiniiVision scopes, perform a full memory capture as their last acquisition after pressing STOP or in SINGLE acquisition mode, thereby (depending on the timebase) acquiring excess data outside the displayed timeframe. This is outside normal operation (the scope must have been halted), and even the data outside the view is available for viewing.
- Some scopes can be set to use more acquisition memory than is required for the displayed timebase, causing them to capture data which is outside the displayed view. This data is lost for viewing unless the scope is halted and the timebase changed, in which case the data of the last acquisition before the scope entered STOP mode becomes visible. This effect has recently been the topic of an extensive discussion, and I'm sure you agree with me that this is a niche situation which doesn't reflect any normal or recommended use.
- Tek's MDO Series (and it seems some other models such as the DPO Series) have a function called 'Auto Magnify' where, when the user selects a short timebase, they capture a longer timebase and present the user with a window representing the selected shorter timebase. This window can then be moved around to view the data that lies outside the originally selected timebase window, i.e. all the acquired data can be viewed. It's unique in that it is a mode where, in normal operation, not every captured event immediately ends up on the screen. I am not aware of any other scopes which offer a similar function, so it's a slightly special case.
But for any normal operation (RUN mode) without any specific modes or use cases, every event which occurs during the effective acquisition period will be displayed.
You can keep pointing to a video of a scope showing an impressive trigger rate; there is no evidence it's drawing or processing the data from those triggers. We could go to the manual for that scope, which nowhere claims anything other than a high trigger rate.
Why should they? After all, the update rate isn't something that is high on the list of priorities for buyers of this class of scope.
Also, this scope was equipped with a stronger 1.8GHz P-M processor instead of the stock 1.2GHz Celeron. The performance of X-Stream very much depends on the main processor (and its cache size), and the update rate was *a lot* lower with the stock Celeron, which kept the CPU load at 100% almost constantly, hamstringing performance in many other modes, too. I never understood why LeCroy skimped so much when it came to CPU power on these scopes, which after all rely on an architecture where the CPU is the most critical part. The 1.8GHz processor was later offered by LeCroy as an upgrade for that scope.
Also, on every 'real' LeCroy scope (i.e. one which isn't just a rebadged variant of something else), every event which is captured in memory is displayed (if that channel's display is active, of course, because you can acquire without having the channel displayed on screen). The scope in the video will show every event that occurs during the acquisition phase on the screen, as this has been one of LeCroy's design mottos since back when they were mostly serving the science market.
Why would they include a special waveform/second optimized mode if the normal mode could outperform it? Note the manufacturer claim, 8000wfms/second.
WaveStream is an analog-scope-like persistence mode, a bit like Tek's FastAcq DPO mode but without all the drawbacks (it runs at full sample rate (10GSa/s) and you can use all measurements and analysis tools on it, although they will only use the data of the last acquisition, not history data). As far as I remember it circumvents certain processing steps and pushes data directly to the main processor (also, don't forget that X-Stream uses data compression, so it doesn't have to transfer and process every unchanged sample again and again like other scopes). I guess the idea was that if you wanted to do simple eye diagrams you'd just press one button and that's it.
I've seen the 'above 8k wfms/s' figure in early documents describing the technology (maybe around the 2003/2004 time frame); later no numbers were given, probably because the actual rate varied so much depending on the CPU and also on the software version (in earlier software versions the update rate was also quite inconsistent). Considering the dependencies, it was probably seen as futile to list maximum numbers and update them every time the software improved or a faster CPU was qualified for the scope, so why bother? LeCroy customers didn't seem to care much about update rates anyway, and for what it is, WaveStream has been more than fast enough.
Considering that the WRXi and its successors sold rather well, it doesn't seem they were wrong.
We have lots of LeCroy scopes (although no WRXi's or any of the older ones), and WaveStream was occasionally useful to show a colleague some instability or continuous changes in a signal. But I haven't exactly seen it being widely used.
Competitive comparisons have consistently found very poor update rates for that model of scope:
https://www.tek.com/document/competitive/tektronix-mso-dpo4000-series-vs-lecroy-waverunner-xi-fact-sheet-0
"Competitive comparisons" have found all kind of nonsense which is, quite often, the result of (intentional?) mis-operation. I wouldn't trust them as far as I could throw their brand's heaviest scope.
The Waveform Update rate (which is identical with the Trigger Rate) is a measure of how many times a scope can update its waveform record.
You can keep trying to redefine industry standard terms to suit your misleading arguments, but we'll keep calling it out and pointing to that nonsense. Your emotive rubbish that follows on from that is your "standard" claims, which are getting old.
Yeah, whatever bro. You keep swallowing that marketing stuff.
Which means relying on excessively high update rates to capture rare events has roughly a 1 in 10 chance that your scope actually sees it.
Excessively high but still not high enough? OK, so you just want to say it's a bad thing no matter if it's a high number or a low number.
Math clearly isn't your strong point, because if it were, you'd understand that even at 1 billion wfms/s there would still be a blind time, and it would be close to 100%.
The simple fact you can't understand is that if you have to rely on multiple acquisitions there will be a certain amount of blind time, simple as that. And because the blind time percentage actually increases with waveform update rate, a high update rate means the chance of your scope actually capturing the event goes down.
Which contradicts the idea that high waveform rates would be somehow useful to find rare events, especially when the uncertainty is so large.
Secondly, yes, you can certainly reduce the percentage of the waveform update cycle that the blind time represents. You just have to increase the acquisition time, either by lowering the sample rate or by increasing the sample memory. Eventually, your blind time will be smaller than your acquisition time. But then your waveform rate will also have dropped like a rock. And even when it's smaller, there is still a blind time where your scope will miss events between acquisition cycles.
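To put some rough numbers on this (the dead time and capture windows below are just illustrative assumptions, not the specs of any particular scope), here's a quick back-of-the-envelope sketch in Python:

# Sketch: blind-time fraction for a scope with a fixed per-cycle dead time
# (trigger re-arm + processing). All numbers are illustrative assumptions.
dead_time = 1e-6                                 # assume 1 us dead time per cycle
for acq_time in (1e-9, 1e-6, 1e-3):              # 1 ns, 1 us, 1 ms capture windows
    cycle = acq_time + dead_time                 # one complete update cycle
    update_rate = 1.0 / cycle                    # resulting waveforms per second
    blind_fraction = dead_time / cycle           # share of time the scope is blind
    print(f"acq {acq_time:.0e} s -> {update_rate:10.0f} wfm/s, blind {blind_fraction:.1%}")

The short capture window gives you an impressive update rate but a blind fraction near 100%; the long capture window pushes the blind fraction towards zero, but the update rate collapses along with it.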
There is no technical barrier to faster update rates than what is currently offered.
There certainly is, because an increase in update rate means a reduction in the update period (the time slice into which acquisition and blind time must fit), and even if there were no blind time (which, with scopes, there always is) there is still a hard limit set by the time the ADC needs to fill the dedicated sample memory.
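Just to illustrate that ADC limit with some assumed numbers (not any specific scope):

# Even with zero blind time, the ADC must fill the record once per update.
# Illustrative assumptions: 10 GSa/s ADC, 1 Mpts record per acquisition.
sample_rate = 10e9                               # samples per second
record_length = 1e6                              # points per acquisition
acq_time = record_length / sample_rate           # 100 us just to fill the memory
max_update_rate = 1.0 / acq_time                 # 10,000 wfm/s absolute ceiling
print(f"fill time {acq_time*1e6:.0f} us -> hard ceiling {max_update_rate:.0f} wfm/s")

So even a hypothetical scope with zero re-arm and processing time couldn't exceed 10,000 wfm/s at that sample rate and record length; the only way up is shorter records or a faster ADC.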
There are commercial products (but not general purpose oscilloscopes) with zero blind time that guarantee they draw 100% of the samples to the histogram, on sustained XXGsa/s streams.
Yes, streaming digitizers. Which, actually, are digitizers, not scopes.
Update rates are not fixed in stone and don't imply all the extra conditions you keep claiming they do.
Indeed, they are not set in stone; they are limited by simple math.
Which means relying on repeated updates to capture rare events is still a gamble, even if your odds may have increased (e.g. a 9 out of 10 chance that your scope sees it). However, if you can increase the acquisition time to capture your period of interest in one acquisition, and then set a trigger for the event of interest, the odds of your scope seeing the event will be 100%.
Assuming there is a single trigger which you can configure to catch the event. Great, you captured a single event.
You can capture as many events as you like, as long as they don't come closer together than the time the scope needs for a complete acquisition cycle including blind time (like those rare events that, supposedly, high update rates are so good for).
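A rough way to model the 'gamble' (assuming the rare event occurs at a random, unpredictable instant and is short compared to the capture window; the per-cycle times are assumptions for illustration only):

import random

# Chance that a free-running scope happens to be acquiring when the event hits
# is simply the live fraction of its update cycle. Illustrative numbers only.
dead_time = 1e-6                                 # assumed dead time per cycle
acq_time = 100e-9                                # assumed capture window per cycle
live_fraction = acq_time / (acq_time + dead_time)    # ~9% 'live' time here

trials = 100_000
hits = sum(random.random() < live_fraction for _ in range(trials))
print(f"captured {hits/trials:.1%} of events (expected ~{live_fraction:.1%})")

With these (assumed) numbers you land at the 'roughly 1 in 10' situation from above, whereas a scope that is armed with a trigger on the event itself catches it every time it isn't still busy with a previous acquisition.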
You can keep falling back to this extreme position which claims everything is solvable with triggers, but it's not true for all applications.
OK, name one. Describe a situation which can only be solved with a high-update-rate scope and persistence mode.
Some of which require a statistical measure built from an eye diagram.
So how does this square with the fact that the scopes which are predominantly used for eye diagrams (like the various Infiniiums, i.e. all the scopes which are offered with options for exactly this application) mostly have maximum update rates of a few thousand?