How do waveform updates on an oscilloscope work? Why do they work that way?
Someone:

--- Quote from: tggzzz on April 20, 2023, 09:35:32 pm ---
--- Quote from: nctnico on April 20, 2023, 03:38:05 pm ---
--- Quote from: tggzzz on April 20, 2023, 09:22:08 am ---
--- Quote from: nctnico on April 19, 2023, 10:20:50 pm ---
--- Quote from: 2N3055 on April 17, 2023, 12:35:35 pm ---
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. The decoding would skip some decodes in a burst and show you only the last one, instead of an unrecognizable blur and then the last one... Same difference...
--- End quote ---
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.
--- End quote ---
If you are looking for changes in bits, then use a digital-domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs. A scope is a non-optimal tool for that, even if it can be bent to the purpose.
--- End quote ---
Unfortunately you are completely wrong here. A malformed packet can have any kind of cause. When looking for a rare problem for which you can't even create a trigger condition, you will want to capture as much information as you can, and an oscilloscope with decode is THE tool for that purpose. The same goes for getting a visual on which bits change, as a starting point for working out how a protocol is constructed.
--- End quote ---
I disagree in every respect, and stand by my statements. Ensure signal integrity, then flip to the digital domain. One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder's internal numbers.
--- End quote ---

Signal integrity is not always static/consistent. Sure, you can hit interfaces with designed pathological conditions, but even those may miss some nasty bug/interaction in the shape or timing, say non-monotonic edges :'(

The question of system completeness arises again here: is it practical to have signal integrity guaranteed for every end use/state? Unlikely. Such tests are usually done to some level of confidence + margin, but could well have unexpected conditions that consistently fail.

There is a range of tools; they are all complementary and each is best suited to specific situations.
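As a concrete illustration of the "capture the bits, then diff on a PC" workflow mentioned above, here is a minimal Python sketch. It assumes the packets have already been dumped (by an LA or protocol analyser) as equal-length byte strings; the packet values are made up for illustration.

--- Code: ---
# XOR successive captured packets to highlight which bits change.
def changed_bits(prev: bytes, curr: bytes) -> str:
    """Per-bit diff: '.' = unchanged, 'X' = changed."""
    rows = []
    for a, b in zip(prev, curr):
        x = a ^ b  # XOR marks the differing bits
        rows.append("".join("X" if (x >> i) & 1 else "."
                            for i in range(7, -1, -1)))
    return " ".join(rows)

# Hypothetical captured frames:
packets = [b"\x55\xa0\x01", b"\x55\xa0\x03", b"\x55\xb0\x03"]
for prev, curr in zip(packets, packets[1:]):
    print(changed_bits(prev, curr))
--- End code ---

Running this prints "........ ........ ......X." and "........ ...X.... ........", showing at a glance that only bit 1 of the third byte and then bit 4 of the second byte moved between captures.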
tggzzz:

--- Quote from: Someone on April 20, 2023, 10:26:14 pm ---
--- Quote from: tggzzz on April 20, 2023, 09:35:32 pm ---
Ensure signal integrity, then flip to the digital domain.
--- End quote ---
Signal integrity is not always static/consistent. Sure, you can hit interfaces with designed pathological conditions, but even those may miss some nasty bug/interaction in the shape or timing, say non-monotonic edges :'(
--- End quote ---

Signal integrity must, by definition, include rare edge cases. Eye diagrams are a classic way of capturing that.

--- Quote ---
The question of system completeness arises again here: is it practical to have signal integrity guaranteed for every end use/state? Unlikely. Such tests are usually done to some level of confidence + margin, but could well have unexpected conditions that consistently fail.
--- End quote ---

If you don't have signal integrity then you are building castles on sand. Yes, I realise modern software is like that, and we all see the consequences every day.

--- Quote ---
There is a range of tools; they are all complementary and each is best suited to specific situations.
--- End quote ---

Indeed. The principal purpose of abstraction and layering in systems is that, when working at "higher, more abstract" levels, there are all sorts of "lower level" phenomena you can ignore. If you can't apply that to design and debugging, then you are in the "unfortunate" position of, for example, having to take account of transistor behaviour when designing FSMs[1].

Thus: rigorously keep conceptual levels separate, ensure "lower levels" are solid foundations, and use the right tool for the level.

[1] A more extreme example is the necessity of taking the shape of conductors into account when designing circuits.
Someone:

--- Quote from: tggzzz on April 20, 2023, 10:51:18 pm ---
--- Quote from: Someone on April 20, 2023, 10:26:14 pm ---
--- Quote from: tggzzz on April 20, 2023, 09:35:32 pm ---
Ensure signal integrity, then flip to the digital domain.
--- End quote ---
Signal integrity is not always static/consistent. Sure, you can hit interfaces with designed pathological conditions, but even those may miss some nasty bug/interaction in the shape or timing, say non-monotonic edges :'(
--- End quote ---
Signal integrity must, by definition, include rare edge cases. Eye diagrams are a classic way of capturing that.

--- Quote ---
The question of system completeness arises again here: is it practical to have signal integrity guaranteed for every end use/state? Unlikely. Such tests are usually done to some level of confidence + margin, but could well have unexpected conditions that consistently fail.
--- End quote ---
If you don't have signal integrity then you are building castles on sand.
--- End quote ---

I don't think it is practical to encompass every possible case before moving forward; even small systems may never explore their entire state space. Product specifications can hide this: something can claim an extremely low bit error rate and compliance with various EMC standards without mentioning that the bit error performance is degraded under worst-case EMI. Eye diagrams only test the conditions actually observed, some subset of the operation (with blind time, too). I've not seen any product claim perfect signal integrity under all end-use cases...

You can show a product working correctly under your ideal/assumed conditions all day long, but if the end user is having crashes/bugs under their use case they won't be happy (this includes internal customers for larger teams/systems).
tggzzz:

--- Quote from: Someone on April 20, 2023, 11:15:18 pm ---
I don't think it is practical to encompass every possible case before moving forward; even small systems may never explore their entire state space. Product specifications can hide this: something can claim an extremely low bit error rate and compliance with various EMC standards without mentioning that the bit error performance is degraded under worst-case EMI. Eye diagrams only test the conditions actually observed, some subset of the operation (with blind time, too). I've not seen any product claim perfect signal integrity under all end-use cases...

You can show a product working correctly under your ideal/assumed conditions all day long, but if the end user is having crashes/bugs under their use case they won't be happy (this includes internal customers for larger teams/systems).
--- End quote ---

And there, in a nutshell, is why so many hardware and/or software products are crap and, after wasting people's time and money, end up in the bin.

It is true, of course, that if your product doesn't work, then you can get it to market much more quickly. But an old "Bill and Dave" story is that when a minicomputer didn't live up to expectations, the project manager received a memo from the CEO saying "In future please ensure our products meet the specification before they are released". The manager framed the memo, hung it on his wall, and went on to enjoy a good career. You can guess which company.
nctnico:

--- Quote from: tggzzz on April 20, 2023, 09:35:32 pm ---
I disagree in every respect, and stand by my statements. Ensure signal integrity, then flip to the digital domain. One area where that might fail is with trellis/Viterbi decoders, but a scope won't help with such cases: you need access to the decoder's internal numbers.
--- End quote ---

Sorry, but these statements come from a person with zero real-world problem-finding and diagnostic skills. In the real world, 100% signal integrity doesn't exist. Any communication path will be disrupted: packets will be lost, messages will become corrupted. Over time, and across the number of products in the field, the chance that this happens is 1 (100%). This is partly due to external influences and partly due to software/hardware interactions (*).

In a complex system (which may include third-party pieces) a lot can go wrong, and when starting to figure out such a problem I go through all the layers of the system and check the interactions between the layers one by one. When it comes to protocols, though, you can have both software- and hardware-introduced signalling problems. So just looking at a signal and saying it is OK is not enough. Not by a long shot. You are looking at a snapshot that covers less than 1 ppb of the possible cases. If I had followed your advice, I would never have found certain issues where software and hardware didn't work well together.

(*) This is also why error detection & recovery are so important to get right. Detecting when a communication bus locks up, and recovering from that situation, are important for making robust & reliable products. For example: I have seen systems go wrong due to short communication-link interruptions which were not handled properly.