How do waveform updates on an oscilloscope work? Why do they work that way?
tggzzz:

--- Quote from: nctnico on April 21, 2023, 09:00:08 am ---
--- Quote from: tggzzz on April 20, 2023, 09:35:32 pm ---
--- Quote from: nctnico on April 20, 2023, 03:38:05 pm ---
--- Quote from: tggzzz on April 20, 2023, 09:22:08 am ---
--- Quote from: nctnico on April 19, 2023, 10:20:50 pm ---
--- Quote from: 2N3055 on April 17, 2023, 12:35:35 pm ---So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one, instead of an unrecognizable blur and then the last one... Same difference...

--- End quote ---
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.

--- End quote ---

If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.

--- End quote ---
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you will want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose.

Same for getting a visual on which bits change, as a starting point for how a protocol is constructed.

--- End quote ---

I disagree in every respect, and stand by my statements.

Ensure signal integrity, then flip to the digital domain.

One area where that might fail is with trellis/Viterbi decoders, but a scope won't help in such cases: you need access to the decoder's internal numbers.

--- End quote ---
Sorry, but these statements come from a person who has zero real-world problem-finding and diagnostic skills.

In the real world, 100% signal integrity doesn't exist. Any communication path will be disrupted, packets will be lost, and messages will become corrupted. Over time, and across the number of products in the field, the chance this happens is 1 (100%). This is partly due to external influences and partly due to software/hardware interactions (*).

In a complex system (which may include third-party pieces) a lot can go wrong, and when starting to figure out such a problem I go through all the layers of the system and check the interactions between the layers one by one. Where it comes to protocols, though, you can have both software- and hardware-introduced signalling problems. So just looking at a signal and saying it is OK is not enough. Not by a long shot, because you are looking at a snapshot that covers less than 1 ppb of the possible cases. If I had followed your advice, I would never have found certain issues where software and hardware didn't work well together.

* This is also why error detection & recovery are so important to get right. Detecting when a communication bus locks up and recovering from that situation are important for making robust & reliable products. For example: I have seen systems go wrong due to short communication-link interruptions which were not handled properly.

--- End quote ---
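The lock-up detection and recovery described in the quoted footnote could, for illustration only, look like the following Python sketch. The names `transfer` and `reset_bus` are hypothetical stand-ins, not any real driver API.

```python
# Hypothetical illustration of the footnote's point: detect a locked-up
# communication link via a timeout and recover by resetting the bus.
# transfer() and reset_bus() are stand-ins, not a real driver API.
def robust_transfer(transfer, reset_bus, retries=3):
    """Try a transfer; on timeout, reset the bus and retry a few times."""
    for _ in range(retries):
        try:
            return transfer()
        except TimeoutError:
            reset_bus()        # recovery step: bring the bus back to idle
    raise RuntimeError(f"bus did not recover after {retries} retries")
```

The key design point is that recovery is bounded: after a fixed number of resets the failure is escalated instead of retried forever.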

If you think about it, the points in your last post do not conflict with the strategy I outlined.
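For instance, the "capture the bits with an LA/printf(), diff on a PC" step mentioned earlier could be as simple as this hypothetical Python sketch (the packet contents are made-up example data):

```python
# Hypothetical sketch (not from the thread): diff two captured packet dumps
# on a PC to spot which bits change, after capturing with an LA or printf().
def bit_diff(a: bytes, b: bytes) -> list[tuple[int, int]]:
    """Return (byte_index, xor_mask) for every byte position that differs."""
    return [(i, x ^ y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

ref = bytes([0xAA, 0x55, 0x0F])   # example reference packet (made up)
new = bytes([0xAA, 0x75, 0x0F])   # same packet captured later
for idx, mask in bit_diff(ref, new):
    print(f"byte {idx}: bit mask {mask:#04x} changed")   # byte 1: bit mask 0x20 changed
```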
nctnico:
They do, because your strategy is flawed by making assumptions before you can be 100% sure. Making assumptions is deadly when dealing with bugs in systems that contain many parts/modules. I get the eagerness to simplify a problem by ruling out possible causes beforehand, but it is counter-productive in the long run.

Only a systematic approach that covers all parts/modules and their interfaces will uncover every existing problem. Having multiple problems with similar symptoms is not uncommon; people trip over this quite a lot because they assume there is only one problem at play.
tggzzz:

--- Quote from: nctnico on April 21, 2023, 03:23:13 pm ---They do because your strategy is flawed by making assumptions before you can be 100% sure.

--- End quote ---

Strawman argument: I never suggested that.

Pointless argument: testing can never make anything 100% sure. At best, testing can indicate that you haven't yet found a problem. That applies to your strategy too, of course.



--- Quote ---Making assumptions is deadly when dealing with bugs in systems that contain many parts / modules. I get the eagerness to simplify a problem by ruling out possible causes beforehand but it is counter-productive in the long run.

--- End quote ---

Strawman argument: I never suggested that you could rule out possible causes by testing.

I did, correctly, state that if there's an SI problem, then there is no point proceeding further.


--- Quote ---Only a systematic approach that covers all parts/ modules and their interfaces will uncover any existing problem. Having multiple problems with similar symptoms is not uncommon; people trip over this quite a lot because they assume there is only 1 problem at play.

--- End quote ---

Been there, seen that, got the scars.

Nonetheless:

* verify signal integrity using a scope; if insufficient, fix that before proceeding
* flip to the digital domain and use an LA/PA/printf() to debug digital signals, including bits, bytes, numbers, packets, and messages
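To make the second bullet concrete, here is a deliberately idealised Python sketch of the digital-domain step: turning a logic-analyser sample stream into bytes (8N1 UART, one sample per bit period). A real decoder would oversample, resynchronise on edges, and validate stop bits; this only shows the principle.

```python
# Idealised sketch: decode 8N1 UART frames from a logic-analyser sample
# stream sampled once per bit period. Real decoders oversample, resync on
# edges, and validate stop bits; this only illustrates the principle.
def decode_uart(samples):
    out, i = [], 0
    while i < len(samples):
        if samples[i] == 0:                 # start bit (line idles high)
            bits = samples[i + 1:i + 9]     # 8 data bits, LSB first
            out.append(sum(b << k for k, b in enumerate(bits)))
            i += 10                         # start + 8 data + stop
        else:
            i += 1                          # idle sample, keep scanning
    return out

# 'A' (0x41) framed as start(0), LSB-first data bits, stop(1), idle around it
print(decode_uart([1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1]))   # [65]
```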