| Keysight DSOS204A waveform update rate |
| bdunham7:
--- Quote from: 2N3055 on June 10, 2022, 10:48:19 am ---Clue in long memory scope with tools, like eye diagrams, exclusion triggers etc. Instead of capturing 100ns 10000 times randomly, you capture 200ms (0.2 sec) of data in one go and then unleash software search to search for all and any anomalies inside. With 100% detection rate for multiple anomalies simultaneously. With correlation to outside world, where you can directly correlate that glitch in clock happens exactly at the same time that output relay activates... --- End quote --- That works for some things, but an anomaly that only happens once in 10 seconds or once in an hour requires something different. That random stack of traces with color grading will tell you what the anomaly looks like so you can then trigger on it and acquire all the surrounding data that way. With this method, the proportion of the overall signal that is captured will determine how long it takes for the event to show up. Take the SDS2104X+ which in normal mode takes about 60µs to retrigger with a 1µs capture (100ns/div) @ 2 kpoints. That means that you will see about 1 in 60 events, so something that happens every 10 seconds will take about 10 minutes on average to show up. Cut that 60µs in half and you cut the 10 minutes in half. |
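A quick sanity check of bdunham7's dead-time arithmetic, as a sketch (the 1µs capture and ~60µs retrigger figures are taken from the post; the variable names are illustrative):

```python
# Dead-time arithmetic for a triggered capture (numbers from the post above)
capture = 1e-6                        # 1 us capture window at 100 ns/div
dead = 60e-6                          # ~60 us retrigger (blind) time
p_seen = capture / (capture + dead)   # fraction of the signal actually captured

# An anomaly recurring every 10 s is expected to land in a capture after:
event_interval = 10.0                 # seconds between anomalies
expected_wait = event_interval / p_seen
print(f"capture fraction: ~1 in {round(1 / p_seen)}")
print(f"expected wait: ~{expected_wait:.0f} s")   # ~610 s, i.e. about 10 minutes
```

Halving the dead time roughly doubles `p_seen` and halves the expected wait, which is the "cut that 60µs in half" point.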
| 2N3055:
--- Quote from: bdunham7 on June 10, 2022, 04:21:38 pm ---That works for some things, but an anomaly that only happens once in 10 seconds or once in an hour requires something different. [...] --- End quote --- It is not exactly calculated that way. It is a statistical game, and there are border cases that are better served by one scenario or the other. As I said, the old-school procedure is to use persistence to get a clue as to whether there are anomalies and what they look like, and then devise a strategy to catch them red-handed. So you detect that there is an anomaly within seconds, but your (manual) work only begins there. On the other hand, with an analytic scope you might need to let it work for an hour, but when you come back you will already have a lot of quality information: what, when, where, distribution, histogram...
A bit more waiting but less manual work. There will be certain tasks that are better served by each approach. |
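The "capture long, then search in software" workflow described above can be sketched in a few lines. Everything here (sample rate, record length, threshold, glitch positions) is illustrative and not taken from any particular scope:

```python
import numpy as np

fs = 1e9                        # assumed 1 GSa/s sample rate
n = 2_000_000                   # one deep record: 2 Mpts ~ 2 ms of signal
t = np.arange(n) / fs
signal = np.sign(np.sin(2 * np.pi * 10e6 * t))   # idealized 10 MHz square wave

# Inject two overshoot anomalies at arbitrary positions in the record.
glitch_idx = [123_456, 1_500_000]
signal[glitch_idx] = 1.8

# "Unleash software search": a single pass flags every anomaly in the record,
# with exact sample positions that can be correlated against other channels.
hits = np.flatnonzero(signal > 1.2)
print(hits.tolist())            # [123456, 1500000]
```

Within the record the detection is exhaustive; the trade-off is that anomalies rarer than one per record length still require multiple acquisitions or a different strategy.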
| Someone:
--- Quote from: 2N3055 on June 10, 2022, 04:47:51 pm ---It is not exactly calculated that way.. It is statistical game, and there are border cases that are better served by particular scenario. --- End quote --- Yes, so any claim of 100% capture is misleading unless you explain the unusual situation in which it can happen (the glitch must be repetitive/repeatable, multiple times per second at current memory depths), and it doesn't account for the post-processing time of a deep capture plus analysis, or the blind time of a trigger automation/walk. That makes you the one who started with nonsense: such things can be calculated, but you failed to do that and made a ridiculous, misleading claim. |
| nctnico:
--- Quote from: bdunham7 on June 10, 2022, 04:21:38 pm ---That works for some things, but an anomaly that only happens once in 10 seconds or once in an hour requires something different. [...] Take the SDS2104X+ which in normal mode takes about 60µs to retrigger with a 1µs capture (100ns/div) @ 2 kpoints. [...] --- End quote --- You can greatly increase the chance of catching something by using a longer record. If you capture 1ms per acquisition, the dead time of the oscilloscope becomes much less significant. If something occurs every 10 seconds, then roll mode (continuous sampling) might even be better to catch something that stands out. |
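The record-length point can be put in numbers. Assuming a fixed per-trigger dead time of ~60µs (borrowed from the SDS2104X+ figure earlier in the thread; real dead time varies with settings), the blind fraction shrinks quickly as the record grows:

```python
dead = 60e-6                               # assumed fixed dead time per trigger
for record in (1e-6, 1e-3, 0.2):           # 1 us, 1 ms and 200 ms captures
    captured = record / (record + dead)    # fraction of real time in the record
    print(f"{record:g} s record -> {captured:.2%} of the signal captured")
```

A 1ms record already covers over 94% of the signal, versus under 2% for a 1µs record, which is why longer acquisitions make dead time much less significant.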
| bdunham7:
--- Quote from: nctnico on June 10, 2022, 11:44:28 pm ---You can greatly increase the chance of catching something by using a longer record. If you capture 1ms per acquisition, the dead time of the oscilloscope becomes much less significant. If something occurs every 10 seconds, then roll mode (continuous sampling) might even be better to catch something that stands out. --- End quote --- The test I'm using is a 10 MHz square wave with an additional pulse synced with the signal so that one in every 10 million, 100 million or 1 billion rising edges has a larger-than-normal overshoot. If the additional pulse is 10mHz--every 100 seconds--then it is one-in-a-billion. You can't use roll mode or 1ms/div for a 10MHz signal. You can put as many cycles on the screen as you can see clearly--but that is 10 to 20 at the most. You can try and I'm open to suggestions, but I don't know of a better method than what I'm doing for finding such a glitch. |
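The edge counting in that test setup checks out. A tiny verification of the arithmetic (clock and pulse rates from the post; the hunt-time estimate reuses the ~1-in-60 capture fraction from earlier in the thread and is only an order-of-magnitude figure):

```python
edge_rate = 10e6            # 10 MHz clock -> 1e7 rising edges per second
glitch_period = 100.0       # 10 mHz pulse -> one glitch every 100 s
edges_per_glitch = edge_rate * glitch_period
print(int(edges_per_glitch))            # 1000000000, i.e. one in a billion

# With a ~1-in-60 capture fraction, the expected time to actually catch the
# glitch in a capture is roughly 60 * 100 s, i.e. an hour or two on average.
p_seen = 1 / 61
print(f"expected hunt time: ~{glitch_period / p_seen:.0f} s")
```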