One thing about the WaveJet history mode is that you know it has captured all the waveforms, because you can step through them individually. You do have to assume the time stamp is accurate, though (as far as I can ascertain, it is very accurate).
Quote: Perhaps it needs some magic camera... a normal Canon cannot see them and my eyes cannot see them either.

(Generate a fast-changing signal and set up a camera taking a picture with a long enough shutter time, or use, for example, a very fast video camera, and then count what is really displayed. This might be a good idea for Dave: come up with some clever method to detect the real displayed wfrm/s and strip the mystery away from the "wfrm/s" claims. However, I'm afraid not everyone will like it - I mean, some of the prominent manufacturers.)
You can use even a light-speed camera; the display refreshes at a completely different rate than the data in the sample buffer. Take a TDS700, enable DPO/InstaVu, and you will see lots of events in the persistence display, but when you capture the screen with a camera you will not see 400k wfms/s (even if you can measure such a value on the trigger output).
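To see why the camera can never show the individual acquisitions, here is a rough back-of-the-envelope sketch. The 400k wfms/s figure comes from the post above; the 60 Hz display refresh is an assumed, typical LCD value:

```python
# Rough sketch: how many acquisitions get blended into one displayed frame.
# 400k wfms/s is the InstaVu-class capture rate quoted above;
# 60 Hz is an assumed typical display refresh rate.
capture_rate_wfm_per_s = 400_000
display_refresh_hz = 60

wfms_per_frame = capture_rate_wfm_per_s / display_refresh_hz
print(f"{wfms_per_frame:.0f} waveforms blended per displayed frame")
```

So each frame the camera catches is already a composite of thousands of acquisitions; counting frames tells you the display rate, not the acquisition rate.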
Quote: One manufacturer said a long time ago that there is a maximum of 2000 waveforms per second. I can see barely just over 20. If there were 2000 waveforms captured per second, where are they displayed? Perhaps it needs some magic camera... a normal Canon cannot see them and my eyes cannot see them either.
I understand your point, but your figures are missing some information:
Tests with Air Force pilots have shown that they could identify a plane on a picture that was flashed for as little as 1/220th of a second (i.e. make a distinction between a non-airplane shape and an airplane shape) - so there is evidence to support the theory that humans can identify discrete pieces of information in something close to ~1/250th of a second. So this would imply that we could notice a glitch that appears and disappears in the space of what is currently around the fastest refresh rate of an LCD: approximately 240Hz (I don't mean in DSO LCDs yet - I just mean in consumer goods).
More importantly (given current DPO technology), according to research, we can identify ~100 levels of intensity. So the obvious reason to have intensity grading is to increase the amount of information we could perceive on the DSO screen per second by a factor of ~100.
Human vision "speed" is difficult to characterize with a single figure. If you have a suitable single-pulse generator, you can test it yourself by connecting an LED to the generator output. It is possible to see surprisingly short flashes of an LED. IIRC, when I tested this some time ago, a flash of just a few hundred nanoseconds was perfectly visible. Of course, the effect is similar to connecting a Jim Williams pulser to a low-bandwidth scope: you get just a small bump on the trace.
Regards,
Janne
Since we've been discussing acquisition cycles and blind times, it occurred to me that I wasn't 100% sure about the precise definition of the active acquisition time (screen time or samples?). I had assumed it was whichever was longer (in time) - but I wanted to check the literature.
After re-reading the two most oft-quoted documents on the subject, this and this, it appears, strangely enough, that Agilent and Rohde & Schwarz define this critical piece of information differently!
Agilent's lit. states:
"A scope’s dead-time percentage is based on the ratio of the scope’s acquisition cycle time minus the on-screen acquisition time, all divided by the scope’s acquisition cycle time."
Rohde & Schwarz's lit. states:
"The acquisition cycle consists of an active acquisition time and a blind time period. During the active acquisition time the oscilloscope acquires the defined number of waveform samples and writes them to the acquisition memory. e.g. 100 ns (1000 Sa, 10 GSa/s). "
So Agilent counts points that are captured but not displayed as part of the blind time - while Rohde & Schwarz doesn't.
So for example, given the following settings:
1GSa/s sample rate
1k sample length (so sample time is 1us)
10ns/div. time base
10 divisions on screen (so onscreen time is 100ns)
100k wfrm/s
Agilent's lit. states:
"% DT = Scope’s dead-time percentage
= 100 x [(1/U) – W]/(1/U)
= 100 x (1 – UW)
where
U = Scope’s measured update rate
and
W = Display acquisition window = Timebase setting x 10"
So according to Agilent's specifications and formula, the blind time is 99%.
R&S's lit. states:
"acquisition rate = 1 / acquisition cycle time
blind time ratio = blind time / acquisition cycle time"
So according to R&S's specifications and formula, the blind time is 90%.
Strange. But perhaps it's just a question of semantics: Agilent feels if you can't see it on the display, it's not relevant - but R&S feels that it's still data that's been captured and can be analyzed if needed?
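The two definitions can be compared numerically using the example settings above. This is just a sketch; the variable names are mine, but the formulas are the quoted Agilent and R&S ones:

```python
# Example settings from the post above.
sample_rate = 1e9        # 1 GSa/s
sample_length = 1_000    # 1k samples
timebase = 10e-9         # 10 ns/div
divisions = 10           # 10 divisions on screen
update_rate = 100_000    # 100k wfrm/s

cycle_time = 1 / update_rate                  # 10 us per acquisition cycle

# Agilent: only the on-screen window counts as active acquisition time.
window = timebase * divisions                 # 100 ns on screen
agilent_dead_pct = 100 * (1 - update_rate * window)

# R&S: the whole sample-memory fill counts as active acquisition time.
acq_time = sample_length / sample_rate        # 1 us to fill the memory
rs_dead_pct = 100 * (cycle_time - acq_time) / cycle_time

print(f"Agilent blind time: {agilent_dead_pct:.0f}%")  # 99%
print(f"R&S blind time:     {rs_dead_pct:.0f}%")       # 90%
```

Same scope, same settings, two very different headline numbers - which is exactly the discrepancy noted above.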
Presumably if you are looking for glitches it is the display window that matters.
Presumably if you are looking for glitches it is the display window that matters.
If by 'looking' you mean literally with your eyes, then yes. But I could capture, for example, 8128 segments of acquisition time (while having a much smaller display window) and analyze them after the fact for glitches.
That is true of data that is saved, but with most scopes they save only data that is within the display time frame. I think there is some data that is captured in the sense of being sampled into a buffer but is not saved into longer term memory.
That is true of data that is saved, but with most scopes they save only data that is within the display time frame. I think there is some data that is captured in the sense of being sampled into a buffer but is not saved into longer term memory.
Longer-term memory? Maybe you're a bit confused by the way your WaveJet does things, but virtually every modern DSO works in pretty much the same way. The acquisition time of a DSO is always either the display window (as Agilent calls it) or the sample length - whichever is longer in real time. For example, when my sample rate is 2GSa/s, my DSO grabs a sample every 500ps. If I have the sample length set to 140k, it takes the DSO 70us to fill it. If my time base is set to 10us/div, the display window is 140us (10us x 14), so the DSO halves the sampling clock (or, like the Agilent X-Series, throws out every other sample) to stretch the sample length to match the display window time - making my acquisition time 140us. But if my time base is set to 10ns/div, the display window is only 140ns, yet the DSO is still capturing 70us of data - so 70us is my acquisition time. If I stop the DSO at any time, I can 'zoom' out and see (or analyze) all the samples.
If you're trying to see glitches on the screen, the most common way to look for them I would have thought, then you're not going to be zooming out anyway.
How to measure the refresh rate of a DSO: generate two pulses with a variable delay between them. The two pulses must not be equal in width - I generate the first at 250ns and the second at 500ns. While the time between them is less than the refresh (re-arm) time, you'll see only the first; once the gap exceeds the re-arm time, the second pulse appears as well.
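A minimal model of that measurement (the pulse times and re-arm time below are assumed example values; the idea is simply that any trigger falling inside the blind time after an accepted trigger is lost):

```python
def visible_pulses(pulse_times, rearm_time):
    """Return the subset of trigger times the scope actually captures,
    assuming it is blind for `rearm_time` after each accepted trigger."""
    shown = []
    next_ready = 0.0
    for t in sorted(pulse_times):
        if t >= next_ready:
            shown.append(t)
            next_ready = t + rearm_time
    return shown

# Two pulses 1 us apart, scope re-arm time 5 us: only the first is captured.
print(visible_pulses([0.0, 1e-6], rearm_time=5e-6))
# Spread them 10 us apart and both are captured.
print(visible_pulses([0.0, 10e-6], rearm_time=5e-6))
```

Sweeping the gap until the second (wider) pulse first shows up on screen gives you the scope's re-arm time, and hence its real maximum update rate.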