How do waveform updates on an oscilloscope work? Why do they work that way?
| << < (9/14) > >> |
| nctnico:
When it comes to software triggering on protocols: most decent benchtop oscilloscopes starting from $350 will have hardware protocol triggering. In the end it is not difficult to implement; it comes down to triggering on a specific pattern. There are some exceptions though: the Picoscopes and AFAIK also some older Lecroy models (like the Wavepro 7k series). A much more interesting separation between models is the number of packets they can decode, how much the signal is decimated, and the minimum required sample rate versus protocol bit rate. These limits determine how effectively you can use an oscilloscope's memory.

--- Quote from: tggzzz on April 18, 2023, 08:01:14 am ---
This whole discussion appears to be ending up down a traditional rathole, equivalent to trying to use a hammer to insert a screw. Use a scope to ensure signal integrity, i.e. that the analogue waveforms will be correctly interpreted as digital signals. Then flip to the digital domain, and use a logic analyser's superior clocking, triggering and filtering to discard (i.e. don't store) the irrelevant crap.
--- End quote ---

In theory. But in practice you don't always know what you are looking for, so it can be helpful to capture a lot of messages on an oscilloscope, save them to disk for analysis on a PC, and go back to the scope to inspect the waveform for the malformed packet.
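The "triggering on a specific pattern" mentioned above can be sketched in a few lines: conceptually, a serial trigger is a shift register plus a masked comparator clocked at the bit rate. This is a simplified illustration with invented names and an invented bit stream, not any vendor's actual trigger engine.

--- Code: ---
# Minimal sketch of a serial pattern trigger: shift each incoming bit
# into a register and compare against a pattern/mask pair. The mask
# lets you mark don't-care bits. Purely illustrative.

def make_pattern_trigger(pattern: int, mask: int, width: int):
    """Return a per-bit update function that fires when the last
    `width` shifted-in bits match `pattern` wherever `mask` is 1."""
    state = 0

    def clock_in(bit: int) -> bool:
        nonlocal state
        state = ((state << 1) | (bit & 1)) & ((1 << width) - 1)
        return (state & mask) == (pattern & mask)

    return clock_in

# Fire when the 8-bit value 0x5A appears anywhere in the bit stream.
trigger = make_pattern_trigger(pattern=0x5A, mask=0xFF, width=8)
bits = [int(b) for b in "0000010110100000"]  # contains 0b01011010 = 0x5A
for i, b in enumerate(bits):
    if trigger(b):
        print(f"trigger fires at bit index {i}")
--- End code ---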
| nctnico:
--- Quote from: 2N3055 on April 17, 2023, 12:35:35 pm ---
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
--- End quote ---

In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.
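The "which bits change" step nctnico describes is essentially a diff over successive decoded payloads. A small Python sketch of that step done on exported decode data, with made-up packet bytes:

--- Code: ---
# XOR each decoded payload against the previous one so only the
# toggling bits light up. The packets here are invented examples.

packets = [
    bytes.fromhex("102f8000"),
    bytes.fromhex("102f8100"),
    bytes.fromhex("102f8300"),
]

for prev, cur in zip(packets, packets[1:]):
    diff = bytes(a ^ b for a, b in zip(prev, cur))
    changed = [f"byte {i}: {d:08b}" for i, d in enumerate(diff) if d]
    print(cur.hex(), "->", ", ".join(changed) if changed else "no change")
--- End code ---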
| 2N3055:
--- Quote from: Someone on April 19, 2023, 10:04:41 pm ---
--- Quote from: 2N3055 on April 19, 2023, 11:41:14 am ---
--- Quote from: Someone on April 19, 2023, 11:13:02 am ---
Those sorts of differences were clearly expanded upon already:
--- Quote from: Someone on April 17, 2023, 11:08:37 pm ---
There are scopes out there with no hardware serial trigger, but do have software serial decode, or the serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation which answers part of the question of the OP.
--- End quote ---
So why are you making so much effort to try and say they are all the same? Scopes without hardware serial triggers can't perform many debugging options that hardware triggers expose; it is a key separator that you seem to be trying to discount/hide/dismiss with faulty reasoning (such as bringing up the best-case blind time when the OP is asking about protocol decode).
--- End quote ---
It is now obvious to me that it is not MY English that is the problem... I said and did none of the things you say I did. Your conclusions are your hallucinations of what my sentences mean. Read again and again until you understand.
--- End quote ---
A scope without hardware protocol triggers is exactly what is framed in the above quote. That would be things like the Picoscope and older Tektronix platforms (dunno about the current Tek models). If a scope such as the Picoscope has the decodes run in software, then it is the rate of that which produces the blind time for serial analysis. Serial decode was the OP's question. This (as noted above by tggzzz) is almost identical to the many threads where people would ask about real-time update rate, then have particular members consistently and misleadingly talk at length about blind time in segmented mode and say how it's all the same thing. That the acquisitions can enter memory but none of that information reaches the screen/user/result makes them part of the dead time for that analysis. You want to talk about dead time? Then discuss it accurately and honestly rather than pushing your "side" as somehow universal and superior.
--- End quote ---

You keep repeating the same thing, WHICH is the reason for the repetition... And you miss the point. Let's define terms so you can finally see that we are both saying the same thing in different ways.

First, what is "real time serial decode (analysis)"? Is that when you are looking at the scope screen, waiting for the scope to show messages as they run over the comm bus we are decoding?

Scenario 1: you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast. I might need to capture all of them because I want to reverse engineer what is going on in detail. This will need one long capture with all of them, then stop, and then I have a few hours of work to make sense of all that data. This is not real time.

Scenario 2: I'm interested only in messages sent by one of the sensors... In which case I need the scope to only TRIGGER on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.

Scenario 3: the same as scenario 2, but although we are capturing only one type of data packet, there are still 10-20 a second... We are not interested in anything else, but the poor human is too slow... In this case we set the trigger, capture some number of them to segments, and again look at the data in offline mode.

Scenario 1 is perfect for Picoscopes, and for many Saleae-type USB analysers. It is not good for short-memory scopes. Scenario 2 is perfect for fast hardware trigger/decode scopes. But most modern hardware trigger / software decode scopes will be fast enough to keep pace with a human... If it is too fast for the human, it will be too fast for them too. Scenario 3 is good for both hardware trigger/decode and hardware trigger / software decode scopes, because triggers run in hardware in both, and segmented capture postpones all processing until a predefined number of segments (packets) is captured, negating any blind-time difference in decoding. On a software-decode scope you might have a short pause after the capture is done while the scope decodes, whereas a hardware one will show the data much faster. But neither will miss any packets... There is also a marginal scenario between 2 and 3 where some people will be able to read messages faster or slower, and where some will prefer a full hardware trigger/decode scope because it will be able to show things faster...
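For scenario 1, the offline step could look something like the sketch below, assuming the scope can export its decode table as CSV. The column names (time, id, data, ok) are hypothetical and differ per vendor.

--- Code: ---
# Sketch of scenario 1: one long capture, then offline analysis on a
# PC. The CSV layout is an assumption, not any vendor's real format.
import csv

def find_malformed(path: str):
    """Yield (time, id, data) for every packet flagged as bad, so you
    can go back to the scope and inspect the waveform at that time."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["ok"] != "1":
                yield float(row["time"]), row["id"], row["data"]

for t, pkt_id, data in find_malformed("capture_decode.csv"):
    print(f"malformed packet id={pkt_id} data={data} at t={t:.6f} s")
--- End code ---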
| 2N3055:
--- Quote from: nctnico on April 19, 2023, 10:20:50 pm ---
--- Quote from: 2N3055 on April 17, 2023, 12:35:35 pm ---
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
--- End quote ---
In some cases it is useful to look at the decoded data in near real time. Think about checking which bits change every now and then when reverse engineering a protocol.
--- End quote ---

I know. I once used my Keysight to reverse engineer a display protocol by sending data to the screen and looking at the data on the bus... It was quick work to find the coordinate system and the direction of the data... But to a human, real time is something like twice per second... My two Siglents (and even a Pico on a fast PC) do that without a problem. The real difference is when you have something happening 5-10 times per second, where you can see a clear change instead of stuttering in the decode... But because of the hardware trigger, you will still see the waveform change in real time... And that happens to the Keysight too: as soon as you get fast enough, it will just show some of the data flipping...
| Someone:
--- Quote from: 2N3055 on April 19, 2023, 10:53:29 pm ---
You can have a scenario where you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast.
--- End quote ---

Immediately you have taken up the same strawman trope that has been explained over and over. Modern scopes have persistence in their display, even when set to "zero" persistence, and it can be increased as desired up to infinity. Even if the trace were only visible for a single display frame (typically a broadcast rate such as 30 Hz, 50 Hz or 60 Hz), the eye would still retain the event through visual persistence. As nctnico mentions above, a serial trigger can focus down onto one part/transaction of a system (such as filtering on SPI addresses) and show what states the data bits contained across many words/packets. In real time, continuously, but with possibly some data lost to the dead time.

--- Quote from: 2N3055 on April 19, 2023, 10:53:29 pm ---
Or I'm interested only in messages sent by one of the sensors... In which case I need the scope to only TRIGGER on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.
--- End quote ---

So now it's something "very" slow (from a scope perspective). Take a scope without a hardware serial trigger: how does it find that interesting message in amongst all the others? It has a larger blind time, the problem at hand.
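A back-of-the-envelope illustration of that blind time for a software-decode scope (all numbers are assumptions for the example, not measurements of any particular model):

--- Code: ---
# Rough blind-time estimate when the decode rate, not the trigger,
# limits how many packets ever reach the screen. Numbers are invented.
packet_rate = 1000          # packets per second on the bus
capture_window = 1e-3       # seconds of signal per acquisition
decode_rate = 20            # acquisitions decoded and shown per second

observed = decode_rate * capture_window * packet_rate  # packets/s seen
blind_fraction = 1 - observed / packet_rate
print(f"seen: {observed:.0f}/s, blind for {blind_fraction:.0%} of packets")
# -> seen: 20/s, blind for 98% of packets. A hardware trigger that
#    qualifies packet contents in real time never goes blind to the
#    match test itself, even though display updates are still limited.
--- End code ---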