Author Topic: How do waveform updates on an oscilloscope work? Why do they work that way?  (Read 7732 times)


Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
If you're looking at a relatively low speed protocol where the physical layer is (assumed to be) clean and reliable, then a cheap USB logic analyzer will often be good enough.  I have a whole collection of these and for the most part, there isn't much difference between them.

In most other cases, a scope has significant advantages for serial decodes, most of which other people have already touched on, such as hardware triggering.  One of the biggest ones is the mixed signal (analog + digital) aspect.

For example, if I'm trying to debug an issue on a CAN bus, I might want to only capture frames with a bad CRC or a recessive ACK bit.  This requires a hardware trigger unless I want to somehow try to capture and post-process a huge number of frames (and assuming the error is frequent enough to make this practical). 

But having captured an errored frame, how do I determine the root cause of the error?  Was the CRC incorrectly calculated by the sender?  Or were bits corrupted on the bus?  If I trigger on errored frames, I can then look at the analog signal(s) to see what happened during the errored frame - noise, misbehaving node, etc.  This is what makes a mixed signal oscilloscope such a powerful tool:  the ability to correlate events in two domains.
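
As an aside, the post-processing alternative can be sketched in a few lines (using the python-can library; the log file name is hypothetical). Note that an ordinary CAN controller never delivers a frame with a bad CRC to software, so a software-side log can show you that error frames occurred, but not why - which is exactly where the hardware trigger plus the analog view earns its keep.

Code: [Select]
import can  # pip install python-can

# Post-processing pass over a previously captured bus log, keeping only
# error frames. "capture.log" is a hypothetical candump-style log file.
with can.LogReader("capture.log") as log:
    for msg in log:
        if msg.is_error_frame:
            print(f"{msg.timestamp:.6f}s  {msg}")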

There's a reason why pretty much every modern oscilloscope - even "hobbyist" scopes - has an MSO option, even though this option is always considerably more expensive than a $20 USB logic analyzer.

Again, there are plenty of cases where that $20 USB logic analyzer is more than sufficient, but there are also plenty of cases where it isn't :)


« Last Edit: April 20, 2023, 10:39:19 am by pdenisowski »
Test and Measurement Fundamentals video series on the Rohde & Schwarz YouTube channel:  https://www.youtube.com/playlist?list=PLKxVoO5jUTlvsVtDcqrVn0ybqBVlLj2z8
 
The following users thanked this post: Someone

Offline jonpaul

  • Super Contributor
  • ***
  • Posts: 3656
  • Country: fr
  • Analog, magnetics, Power, HV, Audio, Cinema
    • IEEE Spectrum
Research the many fine digital oscilloscope application notes of the 1980s and 1990s from

Tektronix
Hewlett Packard
LeCroy

These detail the issues of real-time versus equivalent-time sampling, ADC sampling rates, and memory depth.
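
(The core trade-off those notes keep coming back to fits in a few lines; both figures below are assumed:)

Code: [Select]
# How much time fits in acquisition memory at a given sample rate.
sample_rate = 2e9      # samples per second (assumed)
memory_depth = 200e6   # points (assumed)
print(f"capture window = {memory_depth / sample_rate * 1e3:.0f} ms")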

See TekWiki, W140, etc.

Have a good day,

Jon
An Internet Dinosaur...
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: 2N3055
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
Quote from: nctnico
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.

If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.
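
For illustration, the PC-side diff can be as small as this (a sketch in Python; the packet source is assumed to be equal-length byte strings exported from an LA capture):

Code: [Select]
def changed_bits(prev: bytes, curr: bytes):
    """Yield (byte_index, bit_index) for every bit that differs."""
    for i, (a, b) in enumerate(zip(prev, curr)):
        diff = a ^ b
        for bit in range(8):
            if diff & (1 << bit):
                yield i, bit

# Hypothetical captured packets, hex-encoded:
packets = [bytes.fromhex(h) for h in ["10A3FF00", "10A3FF01", "18A3FF01"]]
for prev, curr in zip(packets, packets[1:]):
    print(sorted(changed_bits(prev, curr)))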

Quote from: 2N3055
I agree, but the premise of the question is using a scope for that, not what the alternatives are.

If people are discussing bicycle's characteristics and how they can be problematic when wanting to cycle over the Alps/Rockies, I think it is useful to point out that cars are a more appropriate tool.

Don't you?

Quote from: 2N3055
I did agree with you many messages ago that you can even use an analog CRT scope for SI and some sort of LA for the actual decoding. That's how it was done for ages...
When you don't want to look at lots of decoded data, a digital scope with decoding is useful because you can do both at the same time. Saves time and fiddling with connections...

But I don't want to look at 1000 messages on a scope screen. Even super expensive scopes with HUGE 15"  :-DD screens are a joke compared to the screen on a PC. So for that job, for me it is an MSO Pico or a LA... or the occasional print to UART. For stuff I'm making, I bridge the Pico's trigger deficiencies by toggling a pin of the µC at critical points and triggering off that. A kind of "breakpoint" for the scope...
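
(For illustration, the pin-toggle "breakpoint" trick quoted above is a couple of lines on a MicroPython target; the pin number and do_work() are placeholders:)

Code: [Select]
from machine import Pin

dbg = Pin(2, Pin.OUT)   # spare pin routed to the scope's trigger input

def critical_section():
    dbg.on()            # scope triggers on this rising edge
    do_work()           # placeholder for the code under test
    dbg.off()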

If you can be certain that a tool will meet the requirements of your use case, then by all means use that tool.

But if you think a tool's characteristic weaknesses might make your task difficult or impossible, don't obsessively discuss fine details of such tools. Far better to simply choose a different type of tool without the weaknesses.

Not rocket science :)
« Last Edit: April 20, 2023, 11:04:59 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
Quote from: 2N3055
As for missing packets in real time: if a scope has hardware serial triggers and you miss packets on the screen, it won't be because the trigger missed them, but because you got a fast burst of 3-4 packets in 100 µs and could only see the last one, because the previous 3 went by in such a hurry you didn't see them.
Quote from: Someone
Finally you are actually getting to the point: the dead time can miss valid triggers. Scopes can run with segmented or waveform history modes to collect such bursts for inspection, but they are still limited by finite dead time... which can be a different length when the serial trigger or decode is enabled.

No, you missed the point: none of the triggers were missed. I used my very fast MSOX3104T and was monitoring the trigger out. It was triggering at 100 packets per second. On screen you could see the waveform frantically flickering, but the decoded string under the waveform stood still. It didn't change. The scope didn't refresh the display of the DECODED data because it changed too fast.
The only thing indicating the data was not static was the waveform.

So in order to make sense of what is happening here I either have to segment-capture some number of packets and analyse them offline (STOP mode), or capture a longer single acquisition to get multiple packets that way, then STOP and analyse.
Again, I'm not talking about the waveform but the decoded strings.

Looking at scope-decoded data in real time in normal RUN mode is useful only if the data we are triggering on arrives at maybe up to 5 Hz. That is as fast as I could actually catch a glimpse of something useful.

Which, funnily enough, is the same for the Siglent SDS6000ProH12 (6000A family) and SDS2000HD. They are slower than Keysight but fast enough if triggers come slower than 5 per second. If you go faster than that, both the Siglents and the Keysight start skipping decoded data in normal RUN mode. Still faster, the Siglent will start losing packets, but you won't be able to see that on screen. I had to connect both the Keysight and the Siglent in parallel and monitor Trig Out with another scope... But on screen, both will just have some waveform flickering and mostly static decoded data. Looking at the screen, both are equally useless, decoding-wise. You might as well disable decoding and simply look at the waveform.

So if the data to be decoded is slower than some speed, both types of scopes will do. If the data is coming fast, you need to use segmented or a long capture on both... That is what I'm trying to say...
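
For what it's worth, setting that up programmatically is only a few lines (a pyvisa sketch; the SCPI names are Keysight InfiniiVision-style from memory and the address is hypothetical - check your programming guide):

Code: [Select]
import pyvisa  # assumes a VISA layer is installed

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # hypothetical address

scope.write(":TRIGger:MODE SBUS1")           # trigger from the serial decode bus
scope.write(":ACQuire:MODE SEGMented")       # one memory segment per trigger
scope.write(":ACQuire:SEGMented:COUNt 500")  # buffer 500 packets for offline review
scope.write(":SINGle")                       # arm; browse the segments after it stops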
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr


I hear you , but it is called "questions and answers",  not "lets' ignore questions  and show them  the errors of their ways.."
That is more like priest work...  :-DD

And you and others already mentioned it competently, I had nothing to add...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Quote from: 2N3055
No, you missed the point: none of the triggers were missed. I used my very fast MSOX3104T and was monitoring the trigger out. It was triggering at 100 packets per second. On screen you could see the waveform frantically flickering, but the decoded string under the waveform stood still. It didn't change. [...] So if the data to be decoded is slower than some speed, both types of scopes will do. If the data is coming fast, you need to use segmented or a long capture on both... That is what I'm trying to say...
Losing packets/triggers to dead time is exactly what the OP was asking about, and you've spent 2 pages trying to talk about anything but that. You can contrive all sorts of special cases or say the only thing important to you personally is reading the decoded values, but that is not true for everyone. There are uses for faster triggering on serial data, which the dead time affects, and which is not specified in data sheets.
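
The arithmetic behind that is simple to sketch (Poisson arrivals assumed; both numbers below are made up for illustration):

Code: [Select]
import math

rate = 100.0        # packets per second (assumed)
dead_time = 50e-3   # re-arm/processing time per acquisition, seconds (assumed)

# With Poisson arrivals, the chance that at least one packet lands inside
# the dead time that follows an accepted trigger:
p_miss = 1 - math.exp(-rate * dead_time)
print(f"P(at least one packet missed per acquisition) ~ {p_miss:.1%}")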
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
Quote from: Someone
Losing packets/triggers to dead time is exactly what the OP was asking about, and you've spent 2 pages trying to talk about anything but that. You can contrive all sorts of special cases or say the only thing important to you personally is reading the decoded values, but that is not true for everyone. There are uses for faster triggering on serial data, which the dead time affects, and which is not specified in data sheets.

I keep talking exactly about that. But you cannot be reasoned with. What I have demonstrated is that you completely misunderstood everything I said...

There is no packet loss if you do it right. You can contrive all kinds of special cases and excuses, but that doesn't change the facts.



"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Quote from: 2N3055
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
Quote from: nctnico
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.
Quote from: tggzzz
If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you will want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose.

The same goes for getting a visual on which bits change as a starting point for working out how a protocol is constructed.
« Last Edit: April 20, 2023, 04:14:55 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055, Martin72

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: nctnico
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you will want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose.

The same goes for getting a visual on which bits change as a starting point for working out how a protocol is constructed.

I disagree in every respect, and stand by my statements.

Ensure signal integrity, then flip to the digital domain.

One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder internal numbers.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Quote from: 2N3055
There is no packet loss if you do it right. You can contrive all kinds of special cases and excuses, but that doesn't change the facts.
A guarantee of zero packet loss is a special case; that's you making the generalisation, and it is incorrect.

Yes it is true for many cases where you know the specific system and what's going to occur and all these non-specified characteristics of the specific oscilloscope used. But that's a whole lot of specifics!

Give us a scope and I'll quickly find a way for it to miss triggers and/or data.

When debugging an unknown problem how can you be sure that the conditions for "perfect" capture exist? That's the whole point, you are looking for something which is unexpected or outside your existing assumptions. Even in routine production testing where all the parameters should be locked down and the same each time, you still want the highest chance to catch anything unexpected.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Quote from: tggzzz
I disagree in every respect, and stand by my statements.

Ensure signal integrity, then flip to the digital domain.

One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder internal numbers.
Signal integrity is not always static/consistent. Sure, you can hit interfaces with designed pathological conditions, but even those may miss some nasty bug/interaction in the shape or timing, say non-monotonic edges :'(
The question of system completeness arises again here: is it practical to have signal integrity guaranteed for every end use/state? Unlikely. Such tests are usually done to some level of confidence plus margin, but could well leave unexpected conditions that consistently fail.
There is a range of tools; they are all complementary and each is best suited to specific situations.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: Someone
Signal integrity is not always static/consistent. Sure, you can hit interfaces with designed pathological conditions, but even those may miss some nasty bug/interaction in the shape or timing, say non-monotonic edges :'(

Signal integrity must, by definition, include rare edge cases. Eye diagrams are a classic way of capturing that.
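
For concreteness, the fold that builds an eye diagram is only a few lines (a numpy/matplotlib sketch; the waveform and all numbers are synthetic assumptions):

Code: [Select]
import numpy as np
import matplotlib.pyplot as plt

fs = 1e9                                   # sample rate: 1 GS/s (assumed)
ui = 10e-9                                 # unit interval: 100 Mb/s (assumed)
spb = int(round(ui * fs))                  # samples per bit
bits = np.random.randint(0, 2, 1000)
wave = np.repeat(bits * 2.0 - 1.0, spb)    # NRZ levels of +/-1
wave += 0.05 * np.random.randn(wave.size)  # a little additive noise

span = 2 * spb                             # overlay two UIs per trace
n = wave.size // span
for trace in wave[: n * span].reshape(n, span):
    plt.plot(np.arange(span) / fs * 1e9, trace, color="b", alpha=0.05)
plt.xlabel("time (ns)")
plt.ylabel("level")
plt.show()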

Quote from: Someone
The question of system completeness arises again here: is it practical to have signal integrity guaranteed for every end use/state? Unlikely. Such tests are usually done to some level of confidence plus margin, but could well leave unexpected conditions that consistently fail.

If you don't have signal integrity then you are building castles on sand.

Yes, I realise modern software is like that, and we all see the consequences every day.

Quote from: Someone
There is a range of tools; they are all complementary and each is best suited to specific situations.

Indeed.

The principal purpose of abstraction and layering of systems is that when working at "higher more abstract" levels there are all sorts of "lower level" phenomena that you can ignore. If you can't apply that to design and debugging then you are in the "unfortunate" position of - for example - having to take account of transistor behaviour when designing FSMs[1].

Thus: rigorously keep conceptual levels separate, ensure "lower levels" are solid foundations, and use the right tool for the level.

[1] a more extreme example is the necessity of taking into account the shape of conductors when designing circuits.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Quote from: tggzzz
Signal integrity must, by definition, include rare edge cases. Eye diagrams are a classic way of capturing that.
Quote from: tggzzz
If you don't have signal integrity then you are building castles on sand.
I don't think it is practical to encompass every possible case before moving forward; even small systems may never explore their entire state space. Product specifications can hide this: something can claim an extremely low bit error rate and compliance with various EMC standards without mentioning that the bit error performance degrades under worst-case EMI. Eye diagrams only test the conditions that were observed, some subset of the operation (with blind time too). I've not seen any product claim perfect signal integrity under all end use cases...

You can show a product working correctly under your ideal/assumed conditions all day long, but if the end user is having crashes/bugs under their use case, they won't be happy (this includes internal customers for larger teams/systems).
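
To put a number on "extremely low": even a 1e-12 BER link errors regularly at speed (both figures below are assumed):

Code: [Select]
bit_rate = 100e6   # 100 Mb/s link (assumed)
ber = 1e-12        # claimed bit error rate (assumed)
print(f"~one bit error every {1 / (bit_rate * ber) / 3600:.1f} hours")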
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: Someone
I don't think it is practical to encompass every possible case before moving forward; even small systems may never explore their entire state space. [...] You can show a product working correctly under your ideal/assumed conditions all day long, but if the end user is having crashes/bugs under their use case, they won't be happy (this includes internal customers for larger teams/systems).

And there, in a nutshell, is why so many hardware and/or software products are crap and - after wasting people's lives and money - end up in the bin.

It is true, of course, that if your product doesn't work, then you can get it to market much more quickly.

But an old "Bill and Dave" story is that when a minicomputer didn't live up to expectations, the project manager received a memo from the CEO saying "In future please ensure our products meet the specification before they are released". The manager framed the memo, hung it on his wall, and went on to enjoy a good career. You can guess which company.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Quote from: tggzzz
I disagree in every respect, and stand by my statements.

Ensure signal integrity, then flip to the digital domain.

One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder internal numbers.
Sorry, but these statements come from a person with zero real-world problem-finding and diagnostic skills.

In the real world, 100% signal integrity doesn't exist. Any communication path will be disrupted, packets will be lost, messages will become corrupted. Over time, and across the number of products in the field, the chance this happens is 1 (100%). This is partly due to external influences and partly due to software/hardware interactions (*). In a complex system (which may include third-party pieces) a lot can go wrong, and when starting to figure out such a problem I go through all the layers of the system and check the interactions between the layers one by one. Where it comes to protocols, though, you can have both software- and hardware-introduced signalling problems. So just looking at a signal and saying it is OK is not enough. Not by a long shot. Because you are looking at a snapshot that covers less than 1 ppb of the possible cases. If I had followed your advice, I would never have found certain issues where software and hardware didn't work well together.

* This is also why error detection & recovery is so important to get right. Detecting when a communication bus locks up and recovering from that situation are important for making robust & reliable products. For example: I have seen systems go wrong due to short communication link interruptions which were not handled properly.
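
The supervision half of that can be sketched in a few lines (read_frame() and reset_bus() are assumed callbacks, not a real library API):

Code: [Select]
import time

def supervise(read_frame, reset_bus, timeout_s=0.5):
    """Declare the link wedged if no valid frame arrives within timeout_s."""
    last_ok = time.monotonic()
    while True:
        frame = read_frame()  # assumed to return None when nothing arrived
        if frame is not None:
            last_ok = time.monotonic()
            # ... hand the frame to the protocol layer here ...
        elif time.monotonic() - last_ok > timeout_s:
            reset_bus()       # e.g. re-init the controller, flush FIFOs
            last_ok = time.monotonic()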
« Last Edit: April 21, 2023, 09:08:28 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone, pdenisowski, Martin72

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: nctnico
Sorry, but these statements come from a person with zero real-world problem-finding and diagnostic skills.

In the real world, 100% signal integrity doesn't exist. Any communication path will be disrupted, packets will be lost, messages will become corrupted. [...] If I had followed your advice, I would never have found certain issues where software and hardware didn't work well together.

If you think about it, the points in your last post do not conflict with the strategy I outlined.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
They do, because your strategy is flawed: it makes assumptions before you can be 100% sure. Making assumptions is deadly when dealing with bugs in systems that contain many parts/modules. I get the eagerness to simplify a problem by ruling out possible causes beforehand, but it is counter-productive in the long run.

Only a systematic approach that covers all parts/modules and their interfaces will uncover every existing problem. Having multiple problems with similar symptoms is not uncommon; people trip over this quite a lot because they assume there is only one problem at play.
« Last Edit: April 21, 2023, 03:27:01 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Quote from: nctnico
They do, because your strategy is flawed: it makes assumptions before you can be 100% sure.

Strawman argument: I never suggested that.

Pointless argument: testing can never make anything 100% sure. At best, testing can indicate that you haven't yet found a problem. That applies, of course, to your strategy as well.


Quote from: nctnico
Making assumptions is deadly when dealing with bugs in systems that contain many parts/modules. I get the eagerness to simplify a problem by ruling out possible causes beforehand, but it is counter-productive in the long run.

Strawman argument: I never suggested that you could rule out possible causes by testing.

I did, correctly, state that if there's an SI problem, then there is no point proceeding further.

Quote from: nctnico
Only a systematic approach that covers all parts/modules and their interfaces will uncover every existing problem. Having multiple problems with similar symptoms is not uncommon; people trip over this quite a lot because they assume there is only one problem at play.

Been there, seen that, got the scars.

Nonetheless:
  • verify signal integrity using a scope. If insufficient, fix that before proceeding
  • flip to digital domain and use LA/PA/printf() to debug digital signals including bits, bytes, numbers, packets, messages
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

