Author Topic: How do waveform updates on an oscilloscope work? Why do they work that way?  (Read 7740 times)


Offline ballsystemlordTopic starter

  • Regular Contributor
  • *
  • Posts: 248
  • Country: us
  • Student
Although I tried to be an aware buyer when I purchased my oscilloscope, I somehow didn't know about the implications of waveform updates per second. Dave mentioned it here: https://youtu.be/txPxo4TA0i4 at around 25:30. So I read up on the matter on Rigol's blog, where they have a picture illustrating it.
Here's the Rigol article: https://int.rigol.com/news/blog/DS70000BLOG.html And the relevant quote:
"Since the speed of data sampling is much greater than the speed of data processing, sampling has to be stopped during data processing, which results in all waveforms during data processing to be lost because they are not collected."

Doesn't this cause a problem for protocol analysis?

Thanks!

EDIT: Never mind the second question. I read wrongly.
« Last Edit: April 17, 2023, 04:48:44 am by ballsystemlord »
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3573
  • Country: it
Doesn't this cause a problem for protocol analysis?

No. I mean, yes, if you can't fit the frame of interest in the acquisition.
But you should really only use an oscilloscope to validate the analog part of the signal. The moment the analog signal is correct you should switch to a protocol analyzer (which can usually also put data on the bus, which makes for a much more useful trigger. Imagine doing CANbus analysis of a network with only an oscilloscope).
And if you're looking for protocol error conditions you should use decode triggers (and an oscilloscope that uses HARDWARE protocol decoders, not decodes from screen data), in which case you won't have dead time because the scope will process/display only buffers that contain the trigger conditions.
 
The following users thanked this post: Someone

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Doesn't this cause a problem for protocol analysis?
Depends on the scope. Some scope protocol decoders run online in realtime hardware: they trigger the sampling/capture and are blocked from triggering again until the "processing" is complete, but they will not miss the first (or only) instance. Other scopes only do the protocol analysis on captured data, slower than the data is coming in, with those sorts of dead times between visibility (and a chance to miss triggers). Obviously the scopes that aren't able to do realtime protocol decode are a little shy about saying that, so it can be hard to tell; generally, if the data sheet isn't making lots of noise about hardware protocol decode then it's the slow type.
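To put very rough numbers on that chance of missing triggers, here is a minimal back-of-the-envelope sketch (Python; the capture and dead times below are made-up assumptions, not any scope's spec). For randomly timed events, the chance of a given event landing in the dead time is roughly the dead-time fraction of each acquisition cycle:
Code: (Python)
# Chance of a randomly timed event falling into the blind/dead time.
# Both numbers below are illustrative assumptions.
capture_time = 1e-3    # 1 ms of signal actually captured per acquisition
blind_time   = 9e-3    # 9 ms of processing/dead time per acquisition

dead_fraction = blind_time / (capture_time + blind_time)
print(f"~{dead_fraction:.0%} of randomly timed events fall in dead time")  # ~90%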
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 17518
  • Country: 00
Doesn't this cause a problem for protocol analysis?

Can you miss a packet? Yes.

But unless your eyes are fast enough to read all the data in real time as it decodes, it's not a problem.

A modern 'scope will have enough memory to capture what came after the trigger point so you can stop the 'scope and inspect it.
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 5050
  • Country: si
High waveform update rate only matters for 'analog' signals.

When looking for noise or glitches on a signal you want to see as much of the signal as possible. A good modern high-update-rate oscilloscope might do thousands of captures in the 1/60th of a second it takes to update the LCD display. To show all those waveforms the scope overlays them on top of each other in a transparent way. This gives them that smooth, soft, fuzzy look you see on a classic CRT oscilloscope (which is doing the same thing, drawing the waveform many, many times on top of itself). To achieve this high update rate, the oscilloscope has to keep its processing time as short as possible, so that it is capturing new signal data as much of the time as possible.
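A minimal sketch of that overlay idea (Python with numpy; the waveform, noise level and capture count are invented for illustration): each acquisition increments a pixel histogram, so parts of the trace that repeat often end up "bright" while a rare glitch shows up faintly, much like CRT persistence:
Code: (Python)
import numpy as np

cols, rows = 500, 256                       # display grid: time columns x vertical levels
intensity = np.zeros((rows, cols), dtype=np.uint32)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, cols)

for _ in range(2000):                       # ~2000 acquisitions per screen refresh (assumed)
    wave = np.sin(t) + rng.normal(0, 0.05, cols)      # repetitive signal plus noise
    if rng.random() < 0.01:                           # occasional glitch
        wave[250:260] += 1.0
    level = np.clip(((wave + 2.0) / 4.0 * (rows - 1)).astype(int), 0, rows - 1)
    intensity[level, np.arange(cols)] += 1            # overlay this trace onto the map

# Frequently hit pixels are "bright"; the rare glitch region is dim but present.
print("max hits per pixel:", intensity.max())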

When looking at digital communication this is irrelevant. There you are only interested in making one big single-shot capture, so the processing time is just the delay between making the single-shot capture and it appearing on screen. Since humans are really, really slow in comparison, that doesn't matter. Modern scopes have huge sample memories, so you can capture a lot in a single shot and then scroll through all the data.
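As a rough worked example of how much a single shot can hold (all figures are assumptions, not any particular scope's specs): the capture length is just memory depth divided by sample rate.
Code: (Python)
memory_depth = 100e6     # 100 Mpts (assumed)
sample_rate  = 1e9       # 1 GSa/s (assumed)

capture_time = memory_depth / sample_rate        # 0.1 s of continuous data
uart_baud = 115200
uart_bytes = capture_time * uart_baud / 10       # 8N1 = 10 bits per byte
print(f"{capture_time*1e3:.0f} ms of capture, room for ~{uart_bytes:.0f} UART bytes")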
 
The following users thanked this post: maelh

Online Martin72

  • Super Contributor
  • ***
  • Posts: 7019
  • Country: de
  • Testfield Technician

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Doesn't this cause a problem for protocol analysis?
Can you miss a packet? Yes.

But unless your eyes are fast enough to read all the data in real time as it decodes, it's not a problem.

A modern 'scope will have enough memory to capture what came after the trigger point so you can stop the 'scope and inspect it.
Catching specific or broken words/packets is the key feature of hardware decode + trigger, where none are missed. Finding that one problematic event on an embedded bus that only appears occasionally can easily overflow the memory capacity (I have debugged things like this many times; it's not just some imaginary use case or one-off special purpose).

When looking at digital communication this is irrelevant. There you are only interested in making one big single-shot capture, so the processing time is just the delay between making the single-shot capture and it appearing on screen. Since humans are really, really slow in comparison, that doesn't matter. Modern scopes have huge sample memories, so you can capture a lot in a single shot and then scroll through all the data.
You haven't separated software decode from hardware decode; one of those is significantly handicapped by the dead time.
« Last Edit: April 17, 2023, 09:58:45 am by Someone »
 
The following users thanked this post: Fungus

Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
I made a video on this exact topic :)

Test and Measurement Fundamentals video series on the Rohde & Schwarz YouTube channel:  https://www.youtube.com/playlist?list=PLKxVoO5jUTlvsVtDcqrVn0ybqBVlLj2z8
 
The following users thanked this post: Someone, alm, MJF, skander36, Martin72

Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
Doesn't this cause a problem for protocol analysis?
Depends on the scope. Some scope protocol decoders run online in realtime hardware: they trigger the sampling/capture and are blocked from triggering again until the "processing" is complete, but they will not miss the first (or only) instance. Other scopes only do the protocol analysis on captured data, slower than the data is coming in, with those sorts of dead times between visibility (and a chance to miss triggers). Obviously the scopes that aren't able to do realtime protocol decode are a little shy about saying that, so it can be hard to tell; generally, if the data sheet isn't making lots of noise about hardware protocol decode then it's the slow type.

Yes.  The hardware / software distinction also becomes increasingly important as bus speed increases: here's a screenshot of our MXO4 decoding SPI data in realtime where the frames are only ~10 microseconds apart.  Guess which kind of processing we use :)   

Hardware processing is also important when you want to trigger at the protocol level, e.g. if I want my scope to only capture frames that contain some bit pattern or which have errors.

Blind time is much more important than most people think, and it is definitely an issue for higher speed protocol decodes as well.

(Incidentally, the data in the screenshot is me repeatedly sending "pAUL" in ASCII and then x76 to a display - note that the MXO is not missing any of the frames even at > 40 Mbps)
« Last Edit: April 17, 2023, 08:57:07 am by pdenisowski »
Test and Measurement Fundamentals video series on the Rohde & Schwarz YouTube channel:  https://www.youtube.com/playlist?list=PLKxVoO5jUTlvsVtDcqrVn0ybqBVlLj2z8
 

Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
Here's the rigol article https://int.rigol.com/news/blog/DS70000BLOG.html And the relevant quote:
"Since the speed of data sampling is much greater than the speed of data processing, sampling has to be stopped during data processing, which results in all waveforms during data processing to be lost because they are not collected."

I think what they mean is that samples being generated by the ADC are ignored (that is, not processed).  I'd be surprised if they (can) turn the ADC "off" while processing the samples from the previous acquisition :)
« Last Edit: April 17, 2023, 08:49:24 am by pdenisowski »
Test and Measurement Fundamentals video series on the Rohde & Schwarz YouTube channel:  https://www.youtube.com/playlist?list=PLKxVoO5jUTlvsVtDcqrVn0ybqBVlLj2z8
 
The following users thanked this post: Someone, 2N3055

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
Lots of info here... Let's sift through it to see what could be relevant to OP's question.

Blind time: the time the scope is not registering a new acquisition after the previous trigger. I say registering because ADC conversions are running all the time... Most new scopes have a digital trigger engine, so the ADC is running all the time, putting samples into a buffer all the time. The trigger engine processes continuously until it encounters the trigger condition in the input data.
Then, for a short time, it stops being ready for a new trigger, and that is the blind time...

So if you are doing protocol analysis by triggering on a single communication packet, then you need a scope whose blind time is less than the inter-packet time.

But why would you look at packets this way? If you have a burst of 100 packets in 10 ms, what will you actually see on screen?
Only the last packet. Your super-duper fast scope is useless... You cannot see packets that change every millisecond.

To overcome this you need the scope to capture all 100 of them and stop, so you can go through the data...

And you can do that two ways: by using long memory, where you capture 100 ms of data and all the packets inside with zero (0) blind time inside that capture, or by using segmented memory mode, where you capture short packets individually but capture hundreds of those segments in one burst. That mode (manufacturers have different names for it) usually has very little blind time because it doesn't refresh the screen or do anything with the data until it has finished the whole burst. Not zero blind time, but really fast.
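A quick sketch of the memory arithmetic behind those two options for the 100-packets-in-10-ms example above (Python; the sample rate and packet length are assumptions for illustration):
Code: (Python)
sample_rate = 1e9          # 1 GSa/s (assumed)
burst_time  = 10e-3        # 100 packets spread over 10 ms (from the example above)
packet_time = 20e-6        # each packet ~20 us on the wire (assumed)

# Option 1: one long continuous capture covering the whole burst, zero blind time inside
continuous_pts = burst_time * sample_rate              # 10 Mpts

# Option 2: segmented capture, one short segment per packet, tiny blind time between segments
segmented_pts = 100 * packet_time * sample_rate        # 2 Mpts total

print(f"continuous: {continuous_pts/1e6:.0f} Mpts, segmented: {segmented_pts/1e6:.0f} Mpts")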

And that makes the scope's refresh rate pretty much irrelevant to decoding. A fast refresh rate is sometimes useful for other tasks, but for decoding, not really.

As for decoding in software or hardware, I have scopes with both. The ones with software decode that I have are fast enough to be faster than me, i.e. they decode faster than I can read and comprehend.

The fact that my Keysight can decode much faster than I can read looks cool on paper but is not really useful. I look at a packet where the data is changing all the time and all I see is a blur. Like a benchtop DMM set to a really fast conversion rate, where the last two digits flip so fast you can't see the number...

Software decoding was (is) a problem only with really slow (old or entry-level) scopes, where it takes a second or two for the decode to appear. If the decode pops up in 100 ms or faster, that is real time to a human.
And if you look at a packet to see whether it changes, you can see that on the waveform. Software decoding is asynchronous from waveform drawing if done properly; you still see the waveform change even if the protocol decoder is lagging...

For protocol decoding, long memory is more important than retrigger rate...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 
The following users thanked this post: Performa01, pdenisowski, Martin72

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
As for decoding in software or hardware, I have scopes with both. The ones with software decode that I have are fast enough to be faster than me, i.e. they decode faster than I can read and comprehend.
As we keep saying, that you have not exploited the feature does not make it equivalent to its poorer cousin. When you know what you are looking for, hardware triggers will get there quicker and easier; you often know something about the problem data/word but need to see it in context with other signals or data before/after it. Yes, in some situations the software triggering will be quick enough to keep up (not a published specification), but there are real-world examples where it isn't sufficient and hardware triggers will collect the capture faster. To take extreme examples on the other side, some software decoders (and triggers in general) are non-random: you always get the first packet in a burst, the following ones are always caught in the blind time, and they will never see the anomaly if it only occurs inside that blinded region (again coming from a real-world example of that).

Long memory and sifting through (hoping the relevant point is within the captured window and not the blind time) vs triggering directly on what you are looking for are not equivalent as you make out.

For protocol decoding, long memory is more important than retrigger rate...
They are radically different tools for different situations; neither is more important than the other until you know specific details about the signals.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
As for decoding in software or hardware, I have scopes with both. The ones with software decode that I have are fast enough to be faster than me, i.e. they decode faster than I can read and comprehend.
As we keep saying, that you have not exploited the feature does not make it equivalent to its poorer cousin. When you know what you are looking for, hardware triggers will get there quicker and easier; you often know something about the problem data/word but need to see it in context with other signals or data before/after it. Yes, in some situations the software triggering will be quick enough to keep up (not a published specification), but there are real-world examples where it isn't sufficient and hardware triggers will collect the capture faster. To take extreme examples on the other side, some software decoders (and triggers in general) are non-random: you always get the first packet in a burst, the following ones are always caught in the blind time, and they will never see the anomaly if it only occurs inside that blinded region (again coming from a real-world example of that).

Long memory and sifting through (hoping the relevant point is within the captured window and not the blind time) vs triggering directly on what you are looking for are not equivalent as you make out.

For protocol decoding, long memory is more important than retrigger rate...
They are radically different tools for different situations; neither is more important than the other until you know specific details about the signals.

I really don't understand the mention of software and hardware TRIGGERS... TRIGGERS are hardware (for both analog and decodes) on Siglent, for instance... On-screen data decoding is software, but the triggers are a hardware-based state machine that works at full speed in both normal and segmented mode.

That is why I said that for Siglent, for instance, only the on-screen update rate (human readable) of the actual messages is related to software speed.
But triggering is a hardware-based digital trigger working off the ADC's full sample rate.
So you would get a trigger on every packet, and its waveform would be shown on screen in real time. The decoding would skip some decodes in a burst and show you only the last one, instead of an unrecognizable blur and then the last one... Same difference...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline ballsystemlordTopic starter

  • Regular Contributor
  • *
  • Posts: 248
  • Country: us
  • Student
Thanks again. That answers my question.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
As for decoding in software or hardware, I have scopes with both. The ones with software decode that I have are fast enough to be faster than me, i.e. they decode faster than I can read and comprehend.
As we keep saying, that you have not exploited the feature does not make it equivalent to its poorer cousin. When you know what you are looking for, hardware triggers will get there quicker and easier; you often know something about the problem data/word but need to see it in context with other signals or data before/after it. Yes, in some situations the software triggering will be quick enough to keep up (not a published specification), but there are real-world examples where it isn't sufficient and hardware triggers will collect the capture faster. To take extreme examples on the other side, some software decoders (and triggers in general) are non-random: you always get the first packet in a burst, the following ones are always caught in the blind time, and they will never see the anomaly if it only occurs inside that blinded region (again coming from a real-world example of that).

Long memory and sifting through (hoping the relevant point is within the captured window and not the blind time) vs triggering directly on what you are looking for are not equivalent as you make out.

For protocol decoding, long memory is more important than retrigger rate...
They are radically different tools for different situations; neither is more important than the other until you know specific details about the signals.

I really don't understand the mention of software and hardware TRIGGERS... TRIGGERS are hardware (for both analog and decodes) on Siglent, for instance... On-screen data decoding is software, but the triggers are a hardware-based state machine that works at full speed in both normal and segmented mode.

That is why I said that for Siglent, for instance, only the on-screen update rate (human readable) of the actual messages is related to software speed.
But triggering is a hardware-based digital trigger working off the ADC's full sample rate.
So you would get a trigger on every packet, and its waveform would be shown on screen in real time. The decoding would skip some decodes in a burst and show you only the last one, instead of an unrecognizable blur and then the last one... Same difference...
There are scopes out there with no hardware serial trigger that do have software serial decode, or where the serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question. You can talk about your specific situation and be correct, but stop making it sound like that is true for everyone and everything, because it is not.

Even with hardware triggers that extract errors or specific data values/patterns, there is still dead time during which some events can be missed. In some situations that matters and cannot be worked around with deep memory; instead, segmented memory can be the better choice, or a more powerful trigger. Long memory is not a replacement for a short re-arm time.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
There are scopes out there with no hardware serial trigger that do have software serial decode, or where the serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question. You can talk about your specific situation and be correct, but stop making it sound like that is true for everyone and everything, because it is not.

Even with hardware triggers that extract errors or specific data values/patterns, there is still dead time during which some events can be missed. In some situations that matters and cannot be worked around with deep memory; instead, segmented memory can be the better choice, or a more powerful trigger. Long memory is not a replacement for a short re-arm time.

I don't know of any current-production scope that has software triggers (of the mainstream brands I know of). I certainly don't know them all, naturally.

You seem not to understand what I want to say. Let's take the example of my Keysight 3000T and Siglent SDS6000Pro H12.
The MSOX3000T has a faster retrigger rate in normal mode, no question. But it has 4 Mpts of memory in total, while the SDS6000 has 500 Mpts, and even more in segmented mode.
Which means I can make a capture 125 times longer at the same sampling rate and just capture a whole burst of data from a sensor, for instance.
And for data within that capture, there is 0 blind time. Zero.
With the MSOX3000T I have to use segmented memory...
And I have to have a very short blind time not to miss packets...
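For the record, the arithmetic behind that 125x figure (a quick sketch; the common sample rate is an assumption):
Code: (Python)
sample_rate    = 1e9       # assume both scopes sampling at 1 GSa/s
keysight_depth = 4e6       # 4 Mpts
siglent_depth  = 500e6     # 500 Mpts

t_keysight = keysight_depth / sample_rate      # 4 ms of continuous capture
t_siglent  = siglent_depth / sample_rate       # 500 ms of continuous capture
print(f"{t_keysight*1e3:.0f} ms vs {t_siglent*1e3:.0f} ms ({siglent_depth/keysight_depth:.0f}x longer)")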

With the SDS6000 I don't have to use segmented memory most of the time. And if I do, I get many more segments, and the blind time is minimal because in segmented mode the scope skips processing. In segmented mode the difference is minimal.

Segmented memory is better for sparse packets on fast protocols (short packets, long dead time), which incidentally makes a short retrigger time less critical. No missed events.

For a burst of really fast packets with very little inter-packet time, continuous long memory can capture tens of thousands of packets in a single go, with zero blind time inside. No missed events.


As for decoding speed, on scopes with a hardware trigger and software decode, decoding is asynchronous to the capture. If the decode is too slow to keep up with every screen refresh, it simply skips some decodes and shows you the latest one; the waveform itself still updates.

A short rearm time is (like I said) useful. If that is so important, and if you feel that software decoding is slowing down retriggering (it should not, because the trigger is hardware and the decode processing happens in the display threads, not in the retrigger time), you can disable decoding, capture the packets you want, then stop the scope and enable decoding. It will decode them then. That is one great advantage of decoding as post-processing. You can also grab some data and play with the decoding settings until you get meaningful data. Compare that with the MSOX3000T, where you need to have all the protocol parameters perfectly set before the capture or it won't decode correctly and you have no choice but to capture again...

And at the end of the day, I use Siglent, Pico and Keysight scopes and they are all up to the task, just using different techniques.

That is one thing.


The other thing is this: if you are looking at the scope interactively with packets flying by, hundreds per second, the screen will show you just a bunch of changing data and waveforms. You will see just a blur.
You need to either:

1. Trigger on a rare packet that occurs rarely enough for you to be able to see, read and comprehend the messages.
2. Capture hundreds of packets individually using segmented mode, then stop and sift through them. In that case even scopes with software decode don't do any processing during the burst and have extremely fast retrigger rates; the POI is very high on all of them.
3. Make one very long capture that has hundreds of packets inside, then stop and sift through them. In that case there is no retrigger time between packets and the POI is 100%.

"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 
The following users thanked this post: Performa01

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Which means I can make a capture 125 times longer at the same sampling rate and just capture a whole burst of data from a sensor, for instance.
And for data within that capture, there is 0 blind time. Zero.
No missed events.
Which immediately means you can only capture for that maximum length with 100% coverage (and with slower offline search/processing). I get it, you are correct for that specific situation. No complaints. But that is almost no use at all if you are trying to pull out intermittent problems that appear over minutes or hours (longer than the maximum memory depth).

If you can capture the entire system's operation within the memory, sure, go for it and have 100% capture probability. But that is not the situation in every case.
1. Trigger on a rare packet that occurs rarely enough for you to be able to see, read and comprehend the messages.
2. Capture hundreds of packets individually using segmented mode, then stop and sift through them. In that case even scopes with software decode don't do any processing during the burst and have extremely fast retrigger rates; the POI is very high on all of them.
3. Make one very long capture that has hundreds of packets inside, then stop and sift through them. In that case there is no retrigger time between packets and the POI is 100%.
I think you are wildly misinformed and inexperienced if you believe turning on software decodes does not slow down any scope or increase the blind time. Regardless, as I have said from the start and you keep ignoring/discounting/arguing against, it is unusual to have systems where you can capture 100% of the system's operation in memory. Presenting that as a given, absolute, true statement is why this keeps going around and around.

To rephrase those 3 points in a more balanced and representative manner:
1. Capture the problem with a complex hardware trigger if possible; this captures the first event and the ones following it, with some blind time.
2. Use a generic trigger with post-processing/search in segmented memory; longer blind time unless the entire operation fits in memory.
3. Capture everything in memory and search through it; blind all the time outside that capture.

Long memory is not 100% capture in the general case, only IF you can guarantee that the entire possible system operation and state is captured within it. I could show you examples where that is a 0% capture rate every time, equally true and equally ungeneralisable. In the general case any one of those options might be the most appropriate or have the least blind time. But the OP asked about blind time and how it relates to protocol analysis, so the answers are, surprise surprise, nothing to do with "what happens with protocol decodes within a single capture?" or any of your 3 examples.

Scopes miss everything outside their captures, so it's up to the user to make sure the triggering and memory settings are right for the task at hand.
 
The following users thanked this post: Performa01

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
This whole discussion appears to be ending up down a traditional rathole, equivalent to trying to use a hammer to insert a screw.

Use a scope to ensure signal integrity, i.e. that the analogue waveforms will be correctly interpreted as digital signals.

Then flip to the digital domain, and use a logic analyser's superior clocking, triggering and filtering to discard (i.e. don't store) the irrelevant crap.

If that is insufficient, use a protocol analyser to concentrate on the messages in conversations between devices.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
Which means I can make a capture 125 times longer at the same sampling rate and just capture a whole burst of data from a sensor, for instance.
And for data within that capture, there is 0 blind time. Zero.
No missed events.
Which immediately means you can only capture for that maximum length with 100% coverage (and with slower offline search/processing). I get it, you are correct for that specific situation. No complaints. But that is almost no use at all if you are trying to pull out intermittent problems that appear over minutes or hours (longer than the maximum memory depth).

If you can capture the entire system's operation within the memory, sure, go for it and have 100% capture probability. But that is not the situation in every case.
1. Trigger on a rare packet that occurs rarely enough for you to be able to see, read and comprehend the messages.
2. Capture hundreds of packets individually using segmented mode, then stop and sift through them. In that case even scopes with software decode don't do any processing during the burst and have extremely fast retrigger rates; the POI is very high on all of them.
3. Make one very long capture that has hundreds of packets inside, then stop and sift through them. In that case there is no retrigger time between packets and the POI is 100%.
I think you are wildly misinformed and inexperienced if you believe turning on software decodes does not slow down any scope or increase the blind time. Regardless, as I have said from the start and you keep ignoring/discounting/arguing against, it is unusual to have systems where you can capture 100% of the system's operation in memory. Presenting that as a given, absolute, true statement is why this keeps going around and around.

To rephrase those 3 points in a more balanced and representative manner:
1. Capture the problem with a complex hardware trigger if possible; this captures the first event and the ones following it, with some blind time.
2. Use a generic trigger with post-processing/search in segmented memory; longer blind time unless the entire operation fits in memory.
3. Capture everything in memory and search through it; blind all the time outside that capture.

Long memory is not 100% capture in the general case, only IF you can guarantee that the entire possible system operation and state is captured within it. I could show you examples where that is a 0% capture rate every time, equally true and equally ungeneralisable. In the general case any one of those options might be the most appropriate or have the least blind time. But the OP asked about blind time and how it relates to protocol analysis, so the answers are, surprise surprise, nothing to do with "what happens with protocol decodes within a single capture?" or any of your 3 examples.

Scopes miss everything outside their captures, so it's up to the user to make sure the triggering and memory settings are right for the task at hand.

We are finally getting somewhere... I know my English is not very good, but we seem to be having more trouble than usual. Still, we're getting there...

First, I am talking about decoding and looking at serial protocol data on a scope (that is what the OP asked about), not about general scope use. That is so wide and unpredictable that I would not generalize it in two sentences, and it is the reason there is more than one type of scope out there...

Also, the OP spoke about a Rigol scope that has I2C, SPI, CAN and UART protocols... not about 10 Gbit Ethernet.
So the speeds are known in general terms, and the order of magnitude is known...
CAN at 1 Mbit will have packets in the 100 µs range, for instance.
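A rough sanity check on that number (a sketch; the stuffing allowance is an assumption and exact frame length varies):
Code: (Python)
bitrate = 1e6                                                    # CAN at 1 Mbit/s
base_bits = 1 + 11 + 1 + 1 + 1 + 4 + 8*8 + 15 + 1 + 1 + 1 + 7   # standard frame, 8 data bytes = 108 bits
stuff_allowance = 20                                             # rough worst-case stuff bits (assumed)
frame_time = (base_bits + stuff_allowance) / bitrate
print(f"~{frame_time*1e6:.0f} us per frame")                     # ~128 us, i.e. the "100 us range"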

You usually cannot EVER capture a whole system's messages in memory. You divide and conquer.
Hence you either set a trigger for specific data (address, payload, error type),
or you capture a manageable chunk (by either segmented capture or a single long capture), stop (or use single-shot in the first place) and sift through the data. From that you learn what is in there, so you can start drilling down to specific information.

And then we get to the point that if I use my scopes in segmented mode, they are all VERY fast.
The Keysight has a faster retrigger time than the Siglents I have here in normal display mode, but in segmented mode the Siglents have less than 2 µs blind time, and the Pico 3406D has less than 0.6 µs and is faster than the Keysight at that...

And yes, on the SDS6000, enabling software decode DOES NOT slow down data acquisition in normal mode. It is not me who is talking here without knowing...
I know. I measured it...
And in segmented mode ALL processing except the trigger and memory acquisition is stopped, so there cannot be any influence...

Of course there might be scopes out there that behave like you said. I don't know. But those I have behave the way I explained.
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
First, I am talking about decoding and looking at serial protocol data on a scope (that is what the OP asked about), not about general scope use.
1500 words on and you still try to talk about what you want to talk about. Ignoring the OP's question, and taking a long, long time to add in the honest details.....    :=\

I don't know of any current-production scope that has software triggers (of the mainstream brands I know of). I certainly don't know them all, naturally.
Pico 3406D has less than 0.6 µs [blind time]
Aren't Picoscopes in that category of software-based serial triggers? In which case the best-case blind time of captures is irrelevant when talking about sustained throughput.

Yes, you want to talk at length about non-realtime serial analysis/search. But don't make it out to be some replacement for all realtime cases, or add mountains of half-truths and misleading drivel.

The OP asked really clearly what the interaction is between dead time/update rate and serial decode; that has been covered quite well in an unbiased and polite manner.
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 4135
  • Country: 00
Doesn't this cause a problem for protocol analysis?

No, protocol analysis is done at full speed on FPGA logic with no dead zone.

Regarding waveform capture, it works that way because the scope needs some time to process the data before showing it on the screen.

Actually it doesn't matter, because the signal is captured by the trigger, which works in FPGA logic at full speed with no dead zone. The dead zone happens between trigger events; a faster CPU and FPGA allow the dead zone to be minimized.

Also, note that a conventional analog oscilloscope has a dead zone too, when the beam is off the display or retracing. It needs some time to return the beam to the left side, and the analog oscilloscope doesn't display the signal during that time.
« Last Edit: April 18, 2023, 12:13:42 pm by radiolistener »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 4135
  • Country: 00
I really don't understand software and hardware TRIGGERS mentioning ...

Some cheap Chinese toys don't have an FPGA to implement a hardware trigger that works at the full ADC speed in real time; they just capture a long record and then apply a software trigger. In that case an important signal event can appear in the dead zone, and such a software "oscilloscope" may lose that event.
« Last Edit: April 18, 2023, 12:20:39 pm by radiolistener »
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
First, I am talking about decoding and looking at serial protocol data on a scope (that is what the OP asked about), not about general scope use.
1500 words on and you still try to talk about what you want to talk about. Ignoring the OP's question, and taking a long, long time to add in the honest details.....    :=\

I don't know of any current-production scope that has software triggers (of the mainstream brands I know of). I certainly don't know them all, naturally.
Pico 3406D has less than 0.6 µs [blind time]
Aren't Picoscopes in that category of software-based serial triggers? In which case the best-case blind time of captures is irrelevant when talking about sustained throughput.

Yes, you want to talk at length about non-realtime serial analysis/search. But don't make it out to be some replacement for all realtime cases, or add mountains of half-truths and misleading drivel.

The OP asked really clearly what the interaction is between dead time/update rate and serial decode; that has been covered quite well in an unbiased and polite manner.

There are no software triggers in the Picoscope...
You have wrong information. Do better research before stating something as fact...

I've grown tired of being called dishonest by you..
Everything I said is true and founded in facts. On equipment I have and use and have verified...
Very unlike speculations and generalizations...

I answered the OP, adding to information that was biased and wrong, namely the claim that software decoding has an influence on the scope's retrigger time.
It does not. It has an influence on the screen refresh of the decoded strings, but no influence on triggering.
At least on the software-decoding scopes I have here. So if there are scopes out there that do have that problem, it is NOT because they decode in software, but because they have a BAD IMPLEMENTATION of software decoding...

pdenisowski gave some good info,
where he said that the MXO4 is fast because decoding is in hardware and that it can easily capture packets that are 10 µs apart.
News flash: if they are only 10 µs apart, that is a piece of cake for the three software-decoding scopes I have here...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline ballsystemlordTopic starter

  • Regular Contributor
  • *
  • Posts: 248
  • Country: us
  • Student
First, I am talking about decoding and looking at serial protocol data on a scope (that is what the OP asked about), not about general scope use.
1500 words on and you still try to talk about what you want to talk about. Ignoring the OP's question, and taking a long, long time to add in the honest details.....    :=\

If the OP, me, might interject for a moment.
Not knowing how oscilloscopes do their protocol decoding, I asked how dead time affects triggering. The question, with respect to the analog part, has been answered in that it was pointed out that all, or almost all, scopes use HW based decode-triggers. Or so I understand from your posts.
Just to be clear, decoding is necessary for triggering because a 1 and a 0 look like any other 1 and 0 until you decode what the bus is trying to transmit.

It was also said,
Quote from: JPortici
The moment the analog signal is correct you should switch to a protocol analyzer (which can usually also put data on the bus, which makes it a much more useful trigger. Imagine doing CANbus analysis of a network with only an oscilloscope).

Which brings us to the most advanced part of the question, which I didn't think I'd ask. Now, my own scope has 16 digital channels (MSO5074 LA). I was trying to save myself a few $$$ and buy a scope that could do protocol analysis, and it can, via the web interface. So, does the scope have blind time on its digital channels, so that I can't trigger on them at certain points during the capture? IDK. I'd ask Rigol, but every question I've ever asked them has gone unanswered. I've tried email and phone without result. The phone just goes to voicemail.
« Last Edit: April 18, 2023, 04:43:34 pm by ballsystemlord »
 
The following users thanked this post: Someone

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
First, I am talking about decoding and looking at serial protocol data on a scope (that is what the OP asked about), not about general scope use.
1500 words on and you still try to talk about what you want to talk about. Ignoring the OP's question, and taking a long, long time to add in the honest details.....    :=\

If the OP, me, might interject for a moment.
Not knowing how oscilloscopes do their protocol decoding, I asked how dead time affects triggering. The question, with respect to the analog part, has been answered in that it was pointed out that all, or almost all, scopes use HW based decode-triggers. Or so I understand from your posts.
Just to be clear, decoding is necessary for triggering because a 1 and a 0 look like any other 1 and 0 until you decode what the bus is trying to transmit.

It was also said,
Quote from: JPortici
The moment the analog signal is correct you should switch to a protocol analyzer (which can usually also put data on the bus, which makes it a much more useful trigger. Imagine doing CANbus analysis of a network with only an oscilloscope).

Which brings us to the most advanced part of the question, which I didn't think I'd ask. Now, my own scope has 16 digital channels (MSO5074 LA). I was trying to save myself a few $$$ and buy a scope that could do protocol analysis, and it can, via the web interface. So, does the scope have blind time on its digital channels, so that I can't trigger on them at certain points during the capture? IDK. I'd ask Rigol, but every question I've ever asked them has gone unanswered. I've tried email and phone without result. The phone just goes to voicemail.

No, the scope's decoding is separate from the serial triggering. The triggering has its own "decoder" that is used just for triggering; it is a state machine that the scope configures for the trigger. After the trigger, the scope captures a certain length of time (decided by the timebase or a manual memory setting) and then hands the captured data to the analysis engine for a full decode of everything captured.
On a Siglent scope you can easily disable on-screen decoding but still use the serial trigger... They are separate.

Reading your last paragraph, I really don't understand what you mean...

The scope waits for a trigger (analog, digital or serial protocol, it doesn't matter). After the scope triggers (because the trigger engine decides the trigger conditions were fulfilled), it grabs a certain time span (as set by the timebase) of data on all enabled channels (analog and digital), saves it into acquisition memory, notifies the display engine and whatnot, resets itself for a new capture and starts waiting for the next trigger. So the minimum time between two triggers is the captured time plus the time the scope needs to rearm the trigger engine and get ready for a new trigger. This rearm time is the blind time between two captures. So the minimum time between two trigger events is the time set by the timebase plus the blind time. A trigger event is the start of a capture; it cannot happen again until the scope has filled the whole requested time period plus the rearm time...
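In other words (a quick sketch with made-up numbers, not any particular scope's spec), the sustainable trigger rate falls straight out of that sum:
Code: (Python)
capture_window = 100e-6    # time per acquisition, as set by the timebase (assumed)
rearm_time     = 2e-6      # blind time while the trigger re-arms (assumed)

min_trigger_period = capture_window + rearm_time
print(f"at most ~{1/min_trigger_period:.0f} triggers/s; events closer together than "
      f"{min_trigger_period*1e6:.0f} us cannot each get their own trigger")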
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 
The following users thanked this post: ballsystemlord

