Author Topic: The battle: High waveform update rate versus low screen refresh rate  (Read 17435 times)


Offline pascal_swedenTopic starter

  • Super Contributor
  • ***
  • Posts: 1539
  • Country: no
Maybe I'm missing something here, but isn't it true that a high waveform update rate will not necessarily help you see all glitches on the screen, due to the low screen refresh rate?

For example: 100,000 waveforms/s versus a screen refresh rate of 100 Hz

My understanding is that you will only see those random, infrequent glitches if you have enabled a trigger on them, and that otherwise you might well miss them due to this bottleneck between the high waveform update rate and the low screen refresh rate.

Without a proper trigger configuration, the software cannot be so intelligent that it ensures a particular glitch waveform out of those 100,000 makes it into one of the 100 displayed waveforms that are part of the screen refresh sequence.

The only way to see all those random infrequent glitches, even without a proper trigger configuration, would be to play back the captured waveform in slow-motion mode, just like you would do with a high-speed camera for recording fast-moving objects. But this is impractical, and not in line with the general troubleshooting practices that are done with an oscilloscope on a live signal.
« Last Edit: January 14, 2017, 01:51:36 pm by pascal_sweden »
 

Offline Daruosha

  • Regular Contributor
  • *
  • Posts: 181
  • Country: ir
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #1 on: January 14, 2017, 01:32:31 pm »
Let's assume you have a glitch happening once in a million cycles and your screen refresh rate is 100 Hz. Even if you are very lucky and the scope captures that glitch and displays it in just one frame out of 100 frames per second, your eyes cannot catch that particular frame. So:

I guess all DSOs have some sort of display persistence (in the menus it's "min", not 0). So it all depends on the amount of persistence and the manufacturer's decision on how to convert waveform captures into screen display timing.


That is my guess, could be totally wrong.
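To put rough numbers on that guess (the one-in-a-million glitch rate and 100 Hz refresh come from this post; the 100,000 wfms/s figure is from the opening post), a back-of-the-envelope sketch:

```python
# Back-of-the-envelope numbers for the scenario above: a glitch occurring
# once per million cycles, 100,000 wfms/s capture, 100 Hz screen refresh.
WFMS_PER_S = 100_000
GLITCH_PER_WFM = 1 / 1_000_000
REFRESH_HZ = 100

glitches_per_second = WFMS_PER_S * GLITCH_PER_WFM   # ~0.1: one every ~10 s
wfms_per_frame = WFMS_PER_S // REFRESH_HZ           # 1000 overlaid per frame
frame_ms = 1000 / REFRESH_HZ                        # each frame lasts 10 ms
# Without persistence, the glitch lives in a single 10 ms frame roughly once
# every ten seconds -- which is exactly why persistence matters.
```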
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: de
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #2 on: January 14, 2017, 01:59:15 pm »
I'm seconding Daruosha's answer, just trying to paraphrase it:

Your DSO will always accumulate multiple waveform scans in its screen buffer and display them together. With graded intensity displays, a pixel shown on the screen will become brighter if it has been scanned multiple times; with the simple non-graded displays, all pixels which have been scanned at least once will light up in the same intensity. In either case, the scope's persistence setting will determine how long the display "remembers" a scanned pixel, before letting it fade away or switching it back to black entirely.

Hence, even with a slow screen refresh rate you will see rare event traces light up (dimly in case of a graded intensity display) if the persistence setting is adequate. The screen refresh rate is not relevant for seeing rare events. It has other benefits: A high screen refresh rate will make for a smoother interactive handling of the scope, and will make it more pleasant to visually follow changes of the waveform over time (say on a 1..5 Hz timescale) without getting a "jerky" picture.
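The accumulate-and-fade behaviour described above can be sketched in a few lines; the decay factor and frame counts are illustrative assumptions, not any particular scope's numbers:

```python
# Toy model of the accumulation + persistence mechanism described above.
# All numbers (refresh rate, decay factor) are illustrative assumptions.
REFRESH_HZ = 100
WFMS_PER_S = 100_000
WFMS_PER_FRAME = WFMS_PER_S // REFRESH_HZ   # 1000 acquisitions per frame
DECAY = 0.5    # persistence: fraction of a pixel's intensity kept per frame

def glitch_pixel_intensity(n_frames, glitch_frame):
    """Track one display pixel that only a single rare glitch ever hits:
    accumulate hits each frame, then let persistence decay the value."""
    intensity = 0.0
    history = []
    for frame in range(n_frames):
        hits = 1.0 if frame == glitch_frame else 0.0
        intensity = intensity * DECAY + hits
        history.append(intensity)
    return history

h = glitch_pixel_intensity(10, glitch_frame=2)
# The pixel lights at frame 2 and then fades over several refresh periods,
# giving the eye far longer than one 10 ms frame to notice it.
```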

Of course, when you are looking for really rare events, you should set a suitable trigger to freeze the rare waveform when it occurs. That's why you are using a DSO and not an analog scope, right?
« Last Edit: January 14, 2017, 02:02:26 pm by ebastler »
 

Offline H.O

  • Frequent Contributor
  • **
  • Posts: 816
  • Country: se
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #3 on: January 14, 2017, 02:04:03 pm »
(ebastler beat me to it, but since I've already written it I'll let it go through.)

If the screen updates at 100 Hz and the waveform update rate is 100,000 wfms/s, i.e. the scope triggers 100,000 times per second with the particular signal you happen to have connected at the moment, then the scope acquires 1,000 waveforms per screen update and overlays them on top of each other in the part of the memory that the screen rendering code reads from.

Exactly how this is done probably differs between makes, models, etc.
 

Offline pascal_swedenTopic starter

  • Super Contributor
  • ***
  • Posts: 1539
  • Country: no
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #4 on: January 14, 2017, 02:22:42 pm »
Thanks for all the explanations! Now I finally understand how it works.

Essentially, the digital implementation of overlaying multiple waveforms on each other resembles the phosphor persistence of an analog scope.

But still there are blind times on a digital oscilloscope, where it can miss out on a trigger.
Why don't they use a pipelining mechanism like in a microprocessor architecture to avoid this?
This would avoid blind times completely!
« Last Edit: January 14, 2017, 02:32:54 pm by pascal_sweden »
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: de
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #5 on: January 14, 2017, 02:35:28 pm »
Why don't they use a pipelining mechanism like in a microprocessor architecture to avoid this?
This would avoid blind times completely!

Well, if the time it takes to process a chunk of data (e.g. check whether there was a trigger event) is longer than the time it takes for that chunk of data to be acquired, then all pipelining does not help you. If your data processing cannot keep up with the incoming data rate, you will need to drop part of the incoming data, resulting in blind times.
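The queueing argument can be sketched with a toy model (the time units and the single processing stage are illustrative assumptions):

```python
# Toy model of the throughput argument above: chunks of samples arrive
# every t_acq time units and take t_proc to process. Pipelining overlaps
# stages but cannot raise throughput beyond 1/t_proc, so if t_proc > t_acq
# the backlog grows without bound and data must be dropped (blind time).
def backlog(n_chunks, t_acq, t_proc):
    """Unprocessed work (in time units) queued after n_chunks arrive."""
    b = 0.0
    for _ in range(n_chunks):
        b = max(0.0, b + (t_proc - t_acq))
    return b

keeps_up = backlog(1000, t_acq=1.0, t_proc=0.8)      # 0.0: no data dropped
falls_behind = backlog(1000, t_acq=1.0, t_proc=1.5)  # grows 0.5 per chunk
```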

R&S has an application note which tries to illustrate that, and discusses the effect of blind times. (And, of course, explains the great quality of R&S scopes in this respect.  ;))
https://cdn.rohde-schwarz.com/pws/dl_downloads/dl_application/application_notes/1er02/1ER02_1e.pdf
 
The following users thanked this post: nugglix

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #6 on: January 14, 2017, 07:16:28 pm »
Maybe I'm missing something here, but isn't it true that a high waveform update rate will not necessarily help you see all glitches on the screen, due to the low screen refresh rate?

For example: 100,000 waveforms/s versus a screen refresh rate of 100 Hz
OMG  :palm: This has been explained a million times already. What you see on the screen is an accumulation of sweeps. The real update rate of the traces could be 5 times per second and it still looks instant.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: CustomEngineerer

Offline BravoV

  • Super Contributor
  • ***
  • Posts: 7547
  • Country: 00
  • +++ ATH1
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #7 on: January 14, 2017, 07:18:46 pm »
Hey pascal_sweden, what is your current scope ?

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #8 on: January 14, 2017, 07:29:20 pm »
This is what the intensity control is for. At maximum, it will show everywhere there has been a 'pixel' in that screen refresh period at full brightness (maybe plus some persistence). At lower settings, the brightness indicates how often there has been a pixel at a particular location.
Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 28381
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #9 on: January 14, 2017, 08:24:21 pm »
Thanks for all the explanations! Now I finally understand how it works.

Essentially, the digital implementation of overlaying multiple waveforms on each other resembles the phosphor persistence of an analog scope.

But still there are blind times on a digital oscilloscope,...
And on a CRO.......  :popcorn:
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #10 on: January 15, 2017, 01:13:16 am »
Why don't they use a pipelining mechanism like in a microprocessor architecture to avoid this?
This would avoid blind times completely!
You can't freely add pipeline stages to the computation of a histogram, because consecutive points (or many of them) might land in the same bin. The "fix" for this grows with a power law of at least 2 and would blow out in resources beyond a few clocks of pipeline latency. Or you can look at just how much data flies around in a modern scope:
https://www.eevblog.com/forum/testgear/comparing-agilent-infiniivision-2000-and-3000-x-series-oscilloscopes/msg1010059/#msg1010059
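A toy model makes the hazard concrete (all sizes and the latency figure are invented for illustration): if the histogram memory's read port lags its writes and no forwarding logic is added, back-to-back hits on the same bin are silently lost:

```python
# Toy model of the read-modify-write hazard: the histogram memory's read
# port returns data that is `lat` writes stale (no forwarding). Sizes and
# latency are made up for illustration.
def pipelined_hist(samples, nbins, lat):
    mem = [0] * nbins
    in_flight = []                 # (bin, new_value) writes not yet visible
    for s in samples:
        read_val = mem[s]          # stale read: misses in-flight writes to s
        in_flight.append((s, read_val + 1))
        if len(in_flight) > lat:   # oldest write finally lands in memory
            b, v = in_flight.pop(0)
            mem[b] = v
    for b, v in in_flight:         # drain the pipeline
        mem[b] = v
    return mem

# Ten consecutive samples in one bin, 3 cycles of latency: the count comes
# out as 3 instead of 10, because increments are lost to stale reads.
burst_counts = pipelined_hist([0] * 10, nbins=4, lat=3)
```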
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #11 on: January 15, 2017, 01:21:02 am »
Pipelining isn't going to help avoid blind time, because there will always be the time between the end of the screen and the next trigger.
In theory you could stop writing the trace to the screen when a new trigger occurs and start writing that trace from the trigger point. That leaves an interesting debate on what to do with the rest of the trace: copy it so the screen is filled to the end, or leave the rest of the screen blank? Copying isn't as bad as it sounds, because you have multiple sweeps (acquisitions) on the screen anyway, but the same anomaly could be displayed twice, which could confuse the operator.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #12 on: January 15, 2017, 02:14:54 am »
Pipelining isn't going to help avoid blind time, because there will always be the time between the end of the screen and the next trigger.
In theory you could stop writing the trace to the screen when a new trigger occurs and start writing that trace from the trigger point. That leaves an interesting debate on what to do with the rest of the trace: copy it so the screen is filled to the end, or leave the rest of the screen blank? Copying isn't as bad as it sounds, because you have multiple sweeps (acquisitions) on the screen anyway, but the same anomaly could be displayed twice, which could confuse the operator.
How about you read the link above, where it's discussed how a scope could have zero blind time, but the technology is prohibitively expensive for now. If you overlaid every valid trigger from a signal, you wouldn't see multiple anomalies but several different perspectives of the same one.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #13 on: January 15, 2017, 03:27:44 am »
Your method still doesn't deal with the situation where an anomaly may occur between the edge of the screen and the next trigger. And there is more wrong with the line of reasoning. Creating an intensity graded display shouldn't need that much memory or bandwidth because the display data needs 1000 pixels in the X direction at most. It is easy to run two acquisition systems in parallel. One which does the high speed intensity graded display and the other which does the long acquisition (which is close to the method I used to build an oscilloscope-ish system for one of my customers).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #14 on: January 15, 2017, 04:32:05 am »
Your method still doesn't deal with the situation where an anomaly may occur between the edge of the screen and the next trigger. And there is more wrong with the line of reasoning. Creating an intensity graded display shouldn't need that much memory or bandwidth because the display data needs 1000 pixels in the X direction at most. It is easy to run two acquisition systems in parallel. One which does the high speed intensity graded display and the other which does the long acquisition (which is close to the method I used to build an oscilloscope-ish system for one of my customers).
Sounds like you're falling into the same hole as everyone else who walks into this: if the data is arriving at 10 GS/s and you paint each sample to a 1000 px display, how do you aggregate that down? A video memory the same size as the display, with read-modify-write cycles, is limited by the available bandwidth, and hiding any latency or pipelining requires more resources again.

Circular/paged buffers capturing every single trigger are a nice idea that is possible and could have zero blind time; you'd only miss what was too far away from a trigger to fit on the screen, which is a problem no matter how fast the update rate is.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #15 on: January 15, 2017, 04:54:59 am »
You don't really need to use the memory acquisition pipeline to do real-time processing. You can have a separate pipeline which continuously looks for the trigger and, as soon as the trigger is found, sends the waveform data centered on the trigger (while all the data points are spread along the pipeline) to the processing. This way you can avoid all gaps and even acquire overlapping screens. And this doesn't require any extensive resources.

This process will spew out frames, which can come out really fast. Depending on what you're doing, you may not be able to process all the frames coming out. Although if the processing only requires bin counting, you can probably do 100 MFrames/s easily with a decent FPGA, and you probably don't need any more than that. Someone just needs to sit down and write optimized code to do so.

So, eventually we'll get gap-free and even overlapped acquisitions with unlimited waveforms. I don't see why it cannot be done today, but I would guess there aren't enough marketing reasons.
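A simplified software model of that trigger-scanning pipeline (a real one would run in FPGA fabric; the threshold logic and frame shapes here are assumptions): a comparator watches the sample stream, and every trigger emits a frame centred on it, even when frames overlap:

```python
# Simplified model of the scheme above: a delay line holds recent samples,
# a comparator marks rising-edge triggers, and every trigger emits a frame
# centred on it -- frames may overlap, and nothing is skipped.
def overlapping_frames(samples, threshold, half_width):
    frames = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:      # rising edge
            lo, hi = i - half_width, i + half_width
            if lo >= 0 and hi <= len(samples):
                frames.append(samples[lo:hi])             # centred capture
    return frames

# Two triggers only two samples apart still yield two (overlapping) frames:
sig = [0, 0, 1, 0, 1, 0, 0, 0]
frames = overlapping_frames(sig, threshold=1, half_width=2)
```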

 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #16 on: January 15, 2017, 05:56:18 am »
You don't really need to use memory acquisition pipeline to do real-time processing. You can have a separate pipeline which continuously looks for the trigger, and as soon as the trigger is found sends the waveform data centered over the trigger (while all the data points are spread along the pipeline) to the processing. This way you can avoid all gaps and even acquire overlapping screens. And this doesn't require any extensive resources.

This process will spew out frames, which can come out really fast. Depending on what you're doing, you may not be able to process all the frames coming out. Although if the process only requires bin counting then you probably can easily do 100MFrames/s with a decent FPGA, and you probably don't need any more than that. Someone just need to sit down and write an optimized code to do so.

So, eventually we'll get gap-free and even overlapped acquisitions with unlimited waveforms. I don't see why it cannot be done today, but I would guess there's not enough marketing reasons.
Again, if you're so sure it's easy, then do the back-of-the-envelope design for us and share it. Getting gigasamples per second to a screen is very hard, which is why the easy way out is to just dump triggers to segmented memory and then go through them at a leisurely pace. Doing it sustained is what needs the impossibly large bandwidths.
 

Offline djnz

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #17 on: January 15, 2017, 07:04:38 am »
... But still there are blind times on a digital oscilloscope,...
And on a CRO.......  :popcorn:

Just out of curiosity, any idea what the blind time or the wfms/sec spec of old Tektronix analog scopes (like the 2xxx series or the 7000 series) is?
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: de
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #18 on: January 15, 2017, 07:38:49 am »
Just out of curiosity, any idea what the blind time or the wfms/sec spec of old Tektronix analog scopes (like the 2xxx series or the 7000 series) is?

Found this measurement right here on the forum:
https://www.eevblog.com/forum/testgear/waveform-update-rate-digital-vs-analog-(ds2000-vs-tek-2465)/
 
The following users thanked this post: djnz

Offline tautech

  • Super Contributor
  • ***
  • Posts: 28381
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #19 on: January 15, 2017, 07:53:17 am »
Just out of curiosity, any idea what the blind time or the wfms/sec spec of old Tektronix analog scopes (like the 2xxx series or the 7000 series) is?

Found this measurement right here on the forum:
https://www.eevblog.com/forum/testgear/waveform-update-rate-digital-vs-analog-(ds2000-vs-tek-2465)/
Which implies the Tek would have a better chance of displaying some random non-repetitive glitch......not without persistence phosphors it wouldn't.
This is the tradeoff between CROs and DSOs in which glitches can be captured.
In a DSO you have a few friends to narrow the gap: trigger suite, GSa/s, Mpts and waveform update rate, all of which are better the higher they are spec'ed.
Avid Rabid Hobbyist
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4105
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #20 on: January 15, 2017, 09:17:34 am »
Just out of curiosity, any idea what the blind time or the wfms/sec spec of old Tektronix analog scopes (like the 2xxx series or the 7000 series) is?

Found this measurement right here on the forum:
https://www.eevblog.com/forum/testgear/waveform-update-rate-digital-vs-analog-(ds2000-vs-tek-2465)/
Which implies the Tek would have a better chance of displaying some random non-repetitive glitch......not without persistence phosphors it wouldn't.
This is the tradeoff between CROs and DSOs in which glitches can be captured.
In a DSO you have a few friends to narrow the gap: trigger suite, GSa/s, Mpts and waveform update rate, all of which are better the higher they are spec'ed.

With analog scopes it is more complex.
That example table (DS2k vs Tek 2465) is calculated, not measured, and it forgets one important thing entirely: phosphor write speed. In the table, the phosphor's visual writing speed is effectively assumed to be infinite.

Tek 2465B: phosphor P31, visual writing speed >20 divisions/µs (really slow).

With this data, try one single-shot 20 ns pulse using 5 ns/div or 500 ps/div and maximum intensity. Result: visually totally blind (but the trig LED indicates it, of course). The table cannot be used at all for judging how random single glitches are displayed in practice; if you include phosphor writing speed in the calculation, it looks totally different. Of course, the 2467B or 7104 are different, thanks to a different tube: an MCP, whose visual drawing speed is in another class, around 4 divisions/ns. Visual blind time is really very bad in analog oscilloscopes for single random glitches; it is nearly hopeless to even try to hunt rare fast glitches with an analog scope. With an MCP tube it is a bit better.

How about an analog storage / variable-persistence CRT, like in the HP 1741A? Yes, there is persistence/memory. Try a single-shot 10 ns, 6-div-high pulse. Result: totally blind, in both normal mode and memory/persistence mode. The (CRT) memory writing speed is also quite slow, 200 cm/µs.
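The P31 arithmetic in this post can be checked in a couple of lines (the figures are the ones quoted above):

```python
# Checking the P31 numbers quoted above: a 20 ns single-shot pulse viewed
# at 5 ns/div means the beam crosses the screen at 1 div per 5 ns, i.e.
# 200 div/us -- ten times faster than P31's ~20 div/us visual writing
# speed, hence no visible trace.
TIMEBASE_NS_PER_DIV = 5
P31_WRITE_SPEED_DIV_PER_US = 20

required_div_per_us = 1000 / TIMEBASE_NS_PER_DIV              # 200 div/us
shortfall = required_div_per_us / P31_WRITE_SPEED_DIV_PER_US  # 10x too slow
```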

Edit: repaired ALT+230 = µ
« Last Edit: January 15, 2017, 12:10:23 pm by rf-loop »
I drive a LEC (low el. consumption) BEV car. Smoke exhaust pipes - go to museum. In Finland quite all electric power is made using nuclear, wind, solar and water.

Wises must compel the mad barbarians to stop their crimes against humanity. Where have the wises gone?
 
The following users thanked this post: Performa01, Someone, djnz

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16670
  • Country: 00
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #21 on: January 15, 2017, 10:31:30 am »
But still there are blind times on a digital oscilloscope, where it can miss out on a trigger.

But nowhere near as many as on a CRO. Whichever way you look at it, a DSO is better in this regard.

Why don't they use a pipelining mechanism like in a microprocessor architecture to avoid this?
This would avoid blind times completely!

It would make the 'scope very expensive and you'd still have to sit for hours staring at the screen trying not to blink in case you miss something.

Better idea: they could let you set a pass/fail mask on screen. If any wave goes outside the defined area, it gets flagged/counted. That way you can go away and have a sandwich while the 'scope does all the work.
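A minimal sketch of such a mask test (the function name, mask shape and data are all invented for illustration):

```python
# Minimal sketch of a pass/fail mask test: per-column min/max bounds,
# counting every acquisition that strays outside them.
def mask_violations(waveforms, lo, hi):
    """Count waveforms with any sample outside [lo[i], hi[i]] at column i."""
    fails = 0
    for wfm in waveforms:
        if any(not (lo[i] <= s <= hi[i]) for i, s in enumerate(wfm)):
            fails += 1
    return fails

lo = [0, 0, 0, 0]
hi = [5, 5, 5, 5]
wfms = [[1, 2, 3, 4], [1, 9, 3, 4], [0, 0, 0, 0]]   # one runt at index 1
n_fails = mask_violations(wfms, lo, hi)             # counts the one runt
```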

 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: de
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #22 on: January 15, 2017, 11:12:03 am »
But still there are blind times on a digital oscilloscope, where it can miss out on a trigger.
But nowhere near as many as on a CRO. Whichever way you look at it, a DSO is better in this regard.

That statement does not seem right. As discussed above, and in the CRO/DSO comparison I linked to, the actual duty cycle of CROs is typically better than that of a DSO. I.e. the percentage of blind time is shorter on a CRO, and it should miss fewer triggers, right?

But, as explained by rf-loop, the fact that the CRO has actually noticed that rare event and scanned it once may not help you much -- since you probably won't be able to see the resulting, very dim trace.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #23 on: January 15, 2017, 11:15:43 am »
Your method still doesn't deal with the situation where an anomaly may occur between the edge of the screen and the next trigger. And there is more wrong with the line of reasoning. Creating an intensity graded display shouldn't need that much memory or bandwidth because the display data needs 1000 pixels in the X direction at most. It is easy to run two acquisition systems in parallel. One which does the high speed intensity graded display and the other which does the long acquisition (which is close to the method I used to build an oscilloscope-ish system for one of my customers).
Sounds like you're falling into the hole everyone else who walks into this does, if the data is arriving at 10GS/s and you can paint each sample to a 1000px display how do you aggregate that down? A video memory the same size as the display with read-modify-write cycles is limited by the available bandwidth, and hiding any latency or pipelining requires more resources again.
The only thing which needs to happen in real time is counting how many times each sample value gets hit. If you use the sample value as the address, then a 1024*256*Y bit memory is more than enough, which is in the realm of fast (FPGA) memory. In an ASIC it is a piece of cake nowadays, and there is probably a more clever way of doing this which needs far less memory. Using double-buffering techniques, you can have this running continuously while the post-processing paints a pretty picture on the screen. You should not design a system like this to write to something like video memory directly, because then you'll indeed run into bandwidth problems. There has to be a step in between which deals with the accumulated data.
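A sketch of that hit-counting double buffer (the 1024x256 sizing comes from the post; the handoff mechanism and everything else here is an assumption):

```python
# Sketch of the counting scheme above: per-column hit counters addressed
# by sample value, with double buffering so accumulation never stalls
# while the renderer paints the previous buffer.
COLS, LEVELS = 1024, 256

def new_buffer():
    return [[0] * LEVELS for _ in range(COLS)]

acc, render = new_buffer(), new_buffer()    # double buffering

def accumulate(samples):
    """Fold one acquisition (COLS samples, each 0..LEVELS-1) into `acc`."""
    for col, value in enumerate(samples):
        acc[col][value] += 1

def swap():
    """Hand the filled buffer to the renderer and start a fresh one."""
    global acc, render
    acc, render = new_buffer(), acc

# 1000 identical acquisitions give a hit count of 1000 on each lit pixel:
for _ in range(1000):
    accumulate([128] * COLS)
swap()
```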
« Last Edit: January 15, 2017, 11:17:25 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16670
  • Country: 00
Re: The battle: High waveform update rate versus low screen refresh rate
« Reply #24 on: January 15, 2017, 12:45:43 pm »
That statement does not seem right. As discussed above and in the CRO/DSO comparison I had linked to, the actual duty cycle of CROs is typically better than for a DSO. I.e. the percentage of blind time is shorter on a CRO, and it should miss fewer triggers, right?

OK, I guess you're right there. The overall blind time of a CRO could be less.

On balance, I'd say the DSO would win a straight competition to find runt pulses though:
a) A DSO can be set to infinite persistence so the traces don't fade away (no need to worry about blinking).
b) Even entry-level DSOs have pass/fail testing these days. You can go for lunch and it will count them for you.
 

