How do waveform updates on an oscilloscope work? Why do they work that way?
Someone:

--- Quote from: 2N3055 on April 20, 2023, 11:18:32 am ---
--- Quote from: Someone on April 20, 2023, 10:14:40 am ---
--- Quote from: 2N3055 on April 20, 2023, 07:54:56 am ---As for missing packets in real time: if the scope has hardware serial triggers and you miss packets on the screen, it won't be because the trigger missed them, but because a fast burst of 3-4 packets arrived in 100 µs and you could only see the last one; the previous three went by in such a hurry you didn't notice them.
--- End quote ---
Finally you are actually getting to the point: the dead time can miss valid triggers. Scopes can run with segmented or waveform history modes to collect such bursts for inspection, but they are still limited by finite dead time, which can be a different length when the serial trigger or decode is enabled.

--- End quote ---

No, you missed the point: none of the triggers were missed. I used my very fast MSOX3104T and was monitoring the trigger output. It was triggering at 100 packets per second. On screen, you could see the waveform frantically flickering, but the decoded string under the waveform was standing still. It didn't change. The scope didn't refresh the display of DECODED data because it changed too fast.
The only thing indicating that the data was not static was the waveform.

So in order to make sense of what is happening here, I either have to capture some number of packets in segmented mode and analyse them offline (STOP mode), or take a longer single capture to get multiple packets that way, then STOP and analyse.
Again, I'm not talking about the waveform but the decoded strings.

Watching the scope decode data in real time in normal RUN mode is useful only if the data we are triggering on arrives at maybe up to 5 Hz. That is the fastest rate at which I could actually catch a glimpse of something useful.

Funnily enough, the same holds for the Siglent SDS6000ProH12 (6000A family) and SDS2000HD. They are slower than the Keysight but fast enough
if triggers come slower than 5 per second. If you go faster than that, both the Siglents and the Keysight start skipping decoded data in normal RUN mode. Faster still, the Siglent will start losing packets, but you won't be able to see that on screen. I had to connect the Keysight and the Siglent in parallel and monitor Trig Out with another scope. On screen, both will just show some waveform flickering and mostly static decoded data. Looking at the screen, both are equally useless decoding-wise. You might as well disable decoding and simply look at the waveform.

So if the data to be decoded arrives below some rate, either type of scope will do. If the data is coming fast, you need to use a segmented or long capture on either... That is what I'm trying to say...

--- End quote ---
Losing packets/triggers to dead time is exactly what the OP was asking about, and you've spent two pages trying to talk about anything but that. You can contrive all sorts of special cases, or say the only thing important to you personally is reading the decoded values, but that is not true for everyone. There are uses for faster triggering on serial data, which the dead time affects and which is not specified in data sheets.
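The dead-time argument above can be made concrete with a small simulation. This is a sketch with made-up numbers (the 60 µs re-arm time and packet spacing are illustrative, not from any instrument's datasheet): after each trigger the scope cannot re-arm for a while, so a burst of closely spaced packets only fires the trigger for some of them.

```python
# Sketch: how trigger re-arm (dead) time causes missed triggers in a burst.
# All timing values are illustrative, not taken from any scope's datasheet.

def triggered_packets(arrivals, dead_time):
    """Return the packet arrival times that actually fire the trigger,
    assuming the scope cannot re-arm for dead_time seconds after each trigger."""
    fired = []
    rearm_at = float("-inf")
    for t in arrivals:
        if t >= rearm_at:
            fired.append(t)
            rearm_at = t + dead_time
    return fired

# A burst of 4 packets 25 µs apart (all within 100 µs), repeating every 10 ms.
burst = [0e-6, 25e-6, 50e-6, 75e-6]
arrivals = [rep * 10e-3 + t for rep in range(3) for t in burst]

fired = triggered_packets(arrivals, dead_time=60e-6)  # assume 60 µs dead time
print(f"{len(fired)} of {len(arrivals)} packets fired the trigger")  # 6 of 12
```

With these numbers, only the first and last packet of each burst fire the trigger; the two in the middle fall inside the re-arm window, which is the kind of loss that never shows up in a data sheet.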
2N3055:

--- Quote from: Someone on April 20, 2023, 11:27:24 am ---
Losing packets/triggers to dead time is exactly what the OP was asking about, and you've spent two pages trying to talk about anything but that. You can contrive all sorts of special cases, or say the only thing important to you personally is reading the decoded values, but that is not true for everyone. There are uses for faster triggering on serial data, which the dead time affects and which is not specified in data sheets.

--- End quote ---

I keep talking about exactly that, but you cannot be reasoned with. You have completely misunderstood everything I said and demonstrated....

There is no packet loss if you do it right. You can contrive all kinds of special cases and excuses but that doesn't change the facts.



nctnico:

--- Quote from: tggzzz on April 20, 2023, 09:22:08 am ---
--- Quote from: nctnico on April 19, 2023, 10:20:50 pm ---
--- Quote from: 2N3055 on April 17, 2023, 12:35:35 pm ---So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one, instead of an unrecognizable blur and then the last one... Same difference...

--- End quote ---
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.

--- End quote ---

If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.

--- End quote ---
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose.

The same goes for getting a visual on which bits change, as a starting point for working out how a protocol is constructed.
tggzzz:

--- Quote from: nctnico on April 20, 2023, 03:38:05 pm ---
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose.

The same goes for getting a visual on which bits change, as a starting point for working out how a protocol is constructed.

--- End quote ---

I disagree in every respect, and stand by my statements.

Ensure signal integrity, then flip to the digital domain.

One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder's internal numbers.
Someone:

--- Quote from: 2N3055 on April 20, 2023, 11:52:49 am ---There is no packet loss if you do it right. You can contrive all kinds of special cases and excuses but that doesn't change the facts.
--- End quote ---
A guarantee of zero packet loss is the special case; it's you making the generalisation, and the generalisation is incorrect.

Yes, it holds in many cases where you know the specific system, what's going to occur, and all the unspecified characteristics of the particular oscilloscope used. But that's a whole lot of specifics!

Give us a scope and I'll quickly find a way for it to miss triggers and/or data.

When debugging an unknown problem, how can you be sure that the conditions for "perfect" capture exist? That's the whole point: you are looking for something which is unexpected or outside your existing assumptions. Even in routine production testing, where all the parameters should be locked down and the same each time, you still want the highest chance of catching anything unexpected.
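The point about catching the unexpected can be put in back-of-the-envelope form. This is an illustrative model with invented numbers, not a characterisation of any real scope: if an instrument acquires for some window per trigger cycle and is then blind while it re-arms and processes, a rare event arriving at a random instant is caught with probability equal to the live fraction.

```python
# Sketch: chance of catching a randomly timed rare event on a scope that is
# blind (re-arming/processing) for part of each trigger cycle.
# Both durations are illustrative, not from any instrument's datasheet.

capture_window = 100e-6   # seconds of signal acquired per trigger
dead_time = 900e-6        # seconds of blind time per trigger

live_fraction = capture_window / (capture_window + dead_time)
print(f"live fraction: {live_fraction:.0%}")  # 10%

# Probability of seeing at least one of n independent, randomly timed
# occurrences of the event:
for n in (1, 10, 50):
    p_miss_all = (1 - live_fraction) ** n
    print(f"{n} occurrences -> caught with p = {1 - p_miss_all:.2f}")
```

Under these assumed numbers a one-off glitch is caught only 10% of the time, which is why repeated occurrences, segmented capture, or a longer single record matter so much when you cannot predict the fault in advance.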