Author Topic: How do waveform updates on an oscilloscope work? Why do they work that way?  (Read 7735 times)


Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
The trigger event is the start of the capture event; it cannot happen again until the scope fills the whole requested time period + rearm time...

Yes.  People sometimes forget that the waveform update rate may also be limited by the user settings:  if you have 100 ms/div and 10 divisions, the maximum waveform update rate for any oscilloscope is < 1 waveform per second -- it will take 1 full second just to perform a single acquisition :)
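A quick back-of-the-envelope sketch of that arithmetic (Python; the helper name is hypothetical and the 10-division screen is just the common case):
Code: [Select]
# Best-case waveform update rate as limited by the timebase alone,
# ignoring rearm/blind time.
def max_update_rate(s_per_div, divisions=10):
    acquisition_time = s_per_div * divisions  # one full screen of data
    return 1.0 / acquisition_time             # waveforms per second

print(max_update_rate(100e-3))  # 100 ms/div, 10 div -> 1.0 wfm/s at best
print(max_update_rate(1e-6))    # 1 us/div, 10 div -> 100000 wfm/s at best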
Test and Measurement Fundamentals video series on the Rohde & Schwarz YouTube channel:  https://www.youtube.com/playlist?list=PLKxVoO5jUTlvsVtDcqrVn0ybqBVlLj2z8
 
The following users thanked this post: Someone, 2N3055

Offline ballsystemlordTopic starter

  • Regular Contributor
  • *
  • Posts: 248
  • Country: us
  • Student
<snip>
It was also said,
Quote from: JPortici
The moment the analog signal is correct, you should switch to a protocol analyzer (which can usually also put data on the bus, which makes for a much more useful trigger. Imagine doing CAN bus analysis of a network with only an oscilloscope).

Which brings us to the most advanced part of the question, which I didn't think I'd ask. Now, my own scope has 16 digital channels (MSO5074 LA). I was trying to save myself a few $$$ and buy a scope that could do protocol analysis, and it can, via the web interface. So, does the scope have blind time with its digital channels, so that I can't trigger on them at certain points during the capture? IDK. I'd ask Rigol, but every question I've ever asked them has gone unanswered. I've tried email and phone without result. The phone just goes to voicemail.
<snip>
Reading your last paragraph, I really don't understand what you mean...

The scope waits for a trigger (analog, digital, or serial protocol, it doesn't matter). After the scope triggers (because the trigger engine decides the trigger conditions were fulfilled), the scope grabs a certain time (as set with the timebase) of data on all enabled channels (analog and digital), saves it into acquisition memory, notifies the display engine and whatnot, then resets itself and starts waiting for the next trigger. So the minimum time between two triggers is the captured time plus the time the scope needs to rearm the trigger engine and get ready for a new trigger. This rearm time is the blind time between two captures. So the minimum time between two trigger events is the time set by the timebase plus the blind time. The trigger event is the start of the capture event; it cannot happen again until the scope fills the whole requested time period + rearm time...

I meant, are the digital channels treated any differently, with respect to triggering or to waveform updates per second?
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
I meant, are the digital channels treated any differently, with respect to triggering or to waveform updates per second?

Digital channels might have a different sample rate than the analog channels. Analog channels might have different sample rates depending on which channels are enabled or what the timebase is... These things change all the time.
But all analog channels and digital channels listen to the same trigger engine (whatever you set it to trigger from), and all the channels are correlated in time, so you can compare what happened when across the channels, analog or digital.

Also, since more enabled channels means more data sampled in the same time, an 8-bit scope with only one analog channel enabled needs to process 6x less data per second than with 4 analog channels + 16 digital channels (1 byte per sample versus 4 bytes for the analog channels plus 2 bytes for the 16 digital ones). So enabling digital channels might slow things down, not because digital channels are treated differently, but because the scope has much more data to process.

That parameter will change from scope model to scope model, even for the same manufacturer.
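A minimal sketch of where that 6x comes from, assuming one byte per 8-bit analog sample and one bit per digital sample (an assumption; real scopes may pack or decimate differently):
Code: [Select]
# Rough data-rate comparison for an 8-bit MSO.
# Assumed packing: 1 byte per analog sample, 1 bit per digital sample,
# all channels running at the same sample rate.
def bytes_per_second(analog_ch, digital_ch, sample_rate):
    return sample_rate * (analog_ch + digital_ch / 8)

rate = 1e9  # 1 GSa/s, an example figure only
one_channel  = bytes_per_second(1, 0, rate)   # 1e9 bytes/s
all_channels = bytes_per_second(4, 16, rate)  # 6e9 bytes/s
print(all_channels / one_channel)             # -> 6.0, the 6x above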


"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 
The following users thanked this post: pdenisowski

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
I don't know of any current-production scope that has software triggers (out of the mainstream brands I know of). Naturally, I certainly don't know them all.
Pico 3406D has less than 0.6 µs [blind time]
Aren't PicoScopes in that category of software-based serial triggers? At which point the best-case blind time of captures is irrelevant when talking about sustained throughput.
There are no software triggers in PicoScope...
You have wrong information. Do better research before stating something as fact...
Really? Where do they advertise hardware serial triggers? Where are examples or documentation of that? Why do they not correct people on their own forums who say the serial triggers are software?
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
The trigger event is the start of the capture event; it cannot happen again until the scope fills the whole requested time period + rearm time...

Yes.  People sometimes forget that the waveform update rate may also be limited by the user settings:  if you have 100 ms/div and 10 divisions, the maximum waveform update rate for any oscilloscope is < 1 waveform per second -- it will take 1 full second just to perform a single acquisition :)
While datasheets like to publish the lowest achievable dead time (and highest update rates), these always come with the little * or an acknowledgement that they are the best case. Often the re-arm time changes with parameters like memory depth, or it has a minimum that can occur between some acquisitions but is not guaranteed, with occasional longer gaps. The re-arm time is rarely (never?) a constant.

For many complex questions like these, the only option is to test in the specific application: oscilloscopes are such complex and configurable devices that manufacturers cannot detail/explain every possible use case (or might not want to talk about their shortcomings).
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29809
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
The trigger event is the start of the capture event; it cannot happen again until the scope fills the whole requested time period + rearm time...

Yes.  People sometimes forget that the waveform update rate may also be limited by the user settings:  if you have 100 ms/div and 10 divisions, the maximum waveform update rate for any oscilloscope is < 1 waveform per second -- it will take 1 full second just to perform a single acquisition :)
While datasheets like to publish the lowest achievable dead time (and highest update rates), these always come with the little * or an acknowledgement that they are the best case. Often the re-arm time changes with parameters like memory depth, or it has a minimum that can occur between some acquisitions but is not guaranteed, with occasional longer gaps. The re-arm time is rarely (never?) a constant.

For many complex questions like these, the only option is to test in the specific application: oscilloscopes are such complex and configurable devices that manufacturers cannot detail/explain every possible use case (or might not want to talk about their shortcomings).
Quite, but unless someone (not you  :P) does the work and prepares tables of how sampling rates and memory depths are managed by the scope's acquisition process, we are all just guessing.  :horse:

Then we also have user-selectable memory depths, which of course impact rearm times (is that any surprise?), and user-selectable sampling rates too, so we must be careful to compare apples with apples: instruments running in a default auto sampling/memory-management configuration.

Here is where user skill and knowledge come into their own for each and every use case... there are no hard and fast rules, and as experienced users we need to be in charge of how a scope's acquisition works, not the manufacturer thinking they know best.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 
The following users thanked this post: Someone

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
The trigger event is the start of the capture event; it cannot happen again until the scope fills the whole requested time period + rearm time...

Yes.  People sometimes forget that the waveform update rate may also be limited by the user settings:  if you have 100 ms/div and 10 divisions, the maximum waveform update rate for any oscilloscope is < 1 waveform per second -- it will take 1 full second just to perform a single acquisition :)
While datasheets like to publish the lowest achievable dead time (and highest update rates), these always come with the little * or an acknowledgement that they are the best case. Often the re-arm time changes with parameters like memory depth, or it has a minimum that can occur between some acquisitions but is not guaranteed, with occasional longer gaps. The re-arm time is rarely (never?) a constant.

For many complex questions like these, the only option is to test in the specific application: oscilloscopes are such complex and configurable devices that manufacturers cannot detail/explain every possible use case (or might not want to talk about their shortcomings).
Quite, but unless someone (not you  :P) does the work and prepares tables of how sampling rates and memory depths are managed by the scope's acquisition process, we are all just guessing.  :horse:

Then we also have user-selectable memory depths, which of course impact rearm times (is that any surprise?), and user-selectable sampling rates too, so we must be careful to compare apples with apples: instruments running in a default auto sampling/memory-management configuration.

Here is where user skill and knowledge come into their own for each and every use case... there are no hard and fast rules, and as experienced users we need to be in charge of how a scope's acquisition works, not the manufacturer thinking they know best.
And when you have a particular application that needs some performance measure not in the specifications, hopefully your local distributor can test it for you or loan a unit for evaluation ;)
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29809
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
And when you have a particular application that needs some performance measure not in the specifications, hopefully your local distributor can test it for you or loan a unit for evaluation ;)
Of course, however many need to review the way they think about using a DSO and better utilise its best feature: storage!
Of course you can use a DSO just like a CRO, but those who do are missing so much capability... and members still advise newbies to buy such outdated stuff.  ::)
Really, they do them no favours: newbies are better served getting into this century and learning how to use the modern feature set rather than something that just displays wiggly lines in real time.

With a DSO we can overlook retrigger rates to some degree: use a slow timebase, capture the waveform(s) of interest, and zoom in to inspect them in detail.
Hardly advanced use, just properly utilising a DSO's basic features.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 
The following users thanked this post: xrunner

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
And when you have a particular application that needs some performance measure not in the specifications, hopefully your local distributor can test it for you or loan a unit for evaluation ;)
Of course, however many need to review the way they think about using a DSO and better utilise its best feature: storage!
Of course you can use a DSO just like a CRO, but those who do are missing so much capability... and members still advise newbies to buy such outdated stuff.  ::)
Really, they do them no favours: newbies are better served getting into this century and learning how to use the modern feature set rather than something that just displays wiggly lines in real time.

With a DSO we can overlook retrigger rates to some degree: use a slow timebase, capture the waveform(s) of interest, and zoom in to inspect them in detail.
Hardly advanced use, just properly utilising a DSO's basic features.
I think you are walking off (and back around on the loop) from the point of that comment. There are people on this forum who like to take a (best-case and/or asterisked) specification and then try to use it as evidence of some performance that is related to, but not actually, specified. Oscilloscopes have much unspecified performance and many features that can only be determined by testing.

To state flatly that retrigger rates are generally not important as deep memory can do the job.....   we're back at the start of that stupidity again.
Both have valid uses and neither is better for all cases. They are not replacements for each other and result in radically different workflows/results for the cases that could use either.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
I don't know of any current-production scope that has software triggers (out of the mainstream brands I know of). Naturally, I certainly don't know them all.
Pico 3406D has less than 0.6 µs [blind time]
Aren't PicoScopes in that category of software-based serial triggers? At which point the best-case blind time of captures is irrelevant when talking about sustained throughput.
There are no software triggers in PicoScope...
You have wrong information. Do better research before stating something as fact...
Really? Where do they advertise hardware serial triggers? Where are examples or documentation of that? Why do they not correct people on their own forums who say the serial triggers are software?

I don't know what happens on their forums or why; that is a question for them. I imagine they cannot police every word and correct every wrong thing ever said there...

PicoScope has no serial triggers, software OR hardware. It has only analog and parallel digital triggers.
Triggering is triggering. Decoding is post-processing, like math channels and measurements. It has decoders for 30+ protocols...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29809
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
To state flatly that retrigger rates are generally not important as deep memory can do the job.....   we're back at the start of that stupidity again.
Both have valid uses and neither is better for all cases. They are not replacements for each other and result in radically different workflows/results for the cases that could use either.
I didn't, but instead said:
we can overlook retrigger rates to some degree

And I stand by that.
You must know we each have a different DSO usage style, based on previous experience or the feature set of the instrument in front of us, but we simply must accept this of each other: there are many ways to skin the proverbial cat and end with the same result.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
I don't know of any current-production scope that has software triggers (out of the mainstream brands I know of). Naturally, I certainly don't know them all.
Pico 3406D has less than 0.6 µs [blind time]
Aren't PicoScopes in that category of software-based serial triggers? At which point the best-case blind time of captures is irrelevant when talking about sustained throughput.
There are no software triggers in PicoScope...
You have wrong information. Do better research before stating something as fact...
Really? Where do they advertise hardware serial triggers? Where are examples or documentation of that? Why do they not correct people on their own forums who say the serial triggers are software?
I don't know what happens on their forums or why; that is a question for them. I imagine they cannot police every word and correct every wrong thing ever said there...

PicoScope has no serial triggers, software OR hardware. It has only analog and parallel digital triggers.
Triggering is triggering. Decoding is post-processing, like math channels and measurements. It has decoders for 30+ protocols...
PicoScope forum, PicoScope staff member:
https://www.picotech.com/support/topic40355.html
All done in software, offline (significant blind time and no option for a hardware serial trigger). Those sorts of differences were clearly expanded upon already:
There are scopes out there that have no hardware serial trigger but do have software serial decode, or whose serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question.
So why are you making so much effort to try and say they are all the same? Scopes without hardware serial triggers can't perform many debugging options that hardware triggers expose; it is a key separator that you seem to be trying to discount/hide/dismiss with faulty reasoning (such as bringing up the best-case blind time when the OP is asking about protocol decode).
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
I don't know of any current-production scope that has software triggers (out of the mainstream brands I know of). Naturally, I certainly don't know them all.
Pico 3406D has less than 0.6 µs [blind time]
Aren't PicoScopes in that category of software-based serial triggers? At which point the best-case blind time of captures is irrelevant when talking about sustained throughput.
There are no software triggers in PicoScope...
You have wrong information. Do better research before stating something as fact...
Really? Where do they advertise hardware serial triggers? Where are examples or documentation of that? Why do they not correct people on their own forums who say the serial triggers are software?
I don't know what happens on their forums or why; that is a question for them. I imagine they cannot police every word and correct every wrong thing ever said there...

PicoScope has no serial triggers, software OR hardware. It has only analog and parallel digital triggers.
Triggering is triggering. Decoding is post-processing, like math channels and measurements. It has decoders for 30+ protocols...
PicoScope forum, PicoScope staff member:
https://www.picotech.com/support/topic40355.html
All done in software, offline (significant blind time and no option for a hardware serial trigger). Those sorts of differences were clearly expanded upon already:
There are scopes out there that have no hardware serial trigger but do have software serial decode, or whose serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question.
So why are you making so much effort to try and say they are all the same? Scopes without hardware serial triggers can't perform many debugging options that hardware triggers expose; it is a key separator that you seem to be trying to discount/hide/dismiss with faulty reasoning (such as bringing up the best-case blind time when the OP is asking about protocol decode).

It is now obvious to me that it is not MY English that is the problem...

I said and did none of the things you say I did. Your conclusions are your hallucinations of what my sentences mean.
Read it again and again until you understand.


"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 4135
  • Country: 00
It all depends on oscilloscope performance. If it has a fast enough CPU, FPGA, and software to achieve a better update rate, it has a shorter dead-time interval between trigger events. But faster hardware costs more money. So more modern and more expensive hardware provides better results.  :)

Anyway, an analog oscilloscope also has trigger dead time, and it seems that a modern DSO such as the Siglent SDS1000X-E series can be equal to or even better than an analog oscilloscope.

I think there is a more important specification for a modern DSO: the memory depth that can be used at slow horizontal settings. I think 14-28 megapoints is the minimum for a modern low-end oscilloscope. The quality of the digital-phosphor algorithms used to render that deep memory on the screen is also important. It's not easy to process 28 million points 30-60 times per second; it requires really fast hardware.
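To put rough numbers on that last point (illustrative arithmetic only; a real digital-phosphor engine decimates and rasterizes rather than touching every point each frame):
Code: [Select]
# Raw throughput implied by rendering deep memory at display rates,
# using the figures from the post above.
mem_depth = 28e6               # points per acquisition
for fps in (30, 60):           # rendering passes per second
    print(fps, mem_depth * fps)  # 8.4e8 and 1.68e9 points per second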
« Last Edit: April 19, 2023, 11:51:04 am by radiolistener »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
Those sorts of differences were clearly expanded upon already:
There are scopes out there that have no hardware serial trigger but do have software serial decode, or whose serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question.
So why are you making so much effort to try and say they are all the same? Scopes without hardware serial triggers can't perform many debugging options that hardware triggers expose; it is a key separator that you seem to be trying to discount/hide/dismiss with faulty reasoning (such as bringing up the best-case blind time when the OP is asking about protocol decode).

It is now obvious to me that it is not MY English that is the problem...

I said and did none of the things you say I did. Your conclusions are your hallucinations of what my sentences mean.
Read it again and again until you understand.
A scope without hardware protocol triggers is exactly what is framed in the above quote. That would be things like the PicoScope and older Tektronix platforms (confirmed: the current Tek platforms still have increased blind time when running serial trigger + decode).

If a scope such as the PicoScope has the decodes run in software, then it is the rate of that software which produces the blind time for serial analysis. Serial decode was the OP's question.

This (as noted above by tggzzz) is almost identical to the many threads where people would ask about realtime update rate, then have particular members consistently and misleadingly talk at length about blind time in segmented mode and say how it's all the same thing. That the acquisitions can enter memory while none of that information reaches the screen/user/result makes them part of the dead time for that analysis. You want to talk about dead time? Then discuss it accurately and honestly, rather than pushing your "side" as somehow universal and superior.
« Last Edit: April 19, 2023, 11:36:41 pm by Someone »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
When it comes to software triggering on protocols: most decent benchtop oscilloscopes starting from $350 will have hardware protocol triggering. In the end it is not difficult to implement; it comes down to triggering on a specific pattern.

There are some exceptions though: the PicoScopes and AFAIK also some older LeCroy models (like the WavePro 7k series).

A much more interesting separation between models is the number of packets they can decode, how much the signal is decimated, and the minimum required sample rate versus protocol bitrate. These limits determine how effectively you can use an oscilloscope's memory.
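As a hedged illustration of the "triggering on a specific pattern" remark above, here is a toy software model of what such a trigger engine does: a shift register compared against a pattern with don't-care bits, one bit per clock. The names and the pattern are hypothetical; real trigger engines do this in FPGA/ASIC logic, not in Python.
Code: [Select]
# Toy model of a serial pattern trigger with don't-care bits.
def make_trigger(pattern):
    """pattern: e.g. '10100xx1', MSB first, 'x' = don't care."""
    n = len(pattern)
    care  = int(''.join('0' if c == 'x' else '1' for c in pattern), 2)
    match = int(''.join('0' if c == 'x' else c for c in pattern), 2)
    mask = (1 << n) - 1
    shift_reg = 0
    def clock_in(bit):
        nonlocal shift_reg
        shift_reg = ((shift_reg << 1) | bit) & mask
        return (shift_reg & care) == match  # fire when the pattern matches
    return clock_in

trig = make_trigger('10100xx1')
bits = [0, 1, 0, 1, 0, 0, 1, 1, 1]  # sampled bit stream
print([i for i, b in enumerate(bits) if trig(b)])  # -> [8], trigger fires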

This whole discussion appears to be ending up down a traditional rathole, equivalent to trying to use a hammer to insert a screw.

Use a scope to ensure signal integrity, i.e. that the analogue waveforms will be correctly interpreted as digital signals.

Then flip to the digital domain, and use a logic analyser's superior clocking, triggering and filtering to discard (i.e. don't store) the irrelevant crap.
In theory. But in practice you don't always know what you are looking for, so it can be helpful to capture a lot of messages on an oscilloscope, save them to disk for analysis on a PC, and go back to the scope to inspect the waveform of the malformed packet.
« Last Edit: April 19, 2023, 10:14:56 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Someone

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
In some cases it is useful to look at the decoded data in near realtime. Think about checking which bits change every now and then when reverse engineering a protocol.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: 2N3055

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
Those sorts of differences were clearly expanded upon already:
There are scopes out there that have no hardware serial trigger but do have software serial decode, or whose serial trigger only frames the packet and does not qualify/inspect the contents. Yet this thread is a mess of people not making that key separation, which answers part of the OP's question.
So why are you making so much effort to try and say they are all the same? Scopes without hardware serial triggers can't perform many debugging options that hardware triggers expose; it is a key separator that you seem to be trying to discount/hide/dismiss with faulty reasoning (such as bringing up the best-case blind time when the OP is asking about protocol decode).

It is now obvious to me that it is not MY English that is the problem...

I said and did none of the things you say I did. Your conclusions are your hallucinations of what my sentences mean.
Read it again and again until you understand.
A scope without hardware protocol triggers is exactly what is framed in the above quote. That would be things like the PicoScope and older Tektronix platforms (dunno about the current Tek models).

If a scope such as the PicoScope has the decodes run in software, then it is the rate of that software which produces the blind time for serial analysis. Serial decode was the OP's question.

This (as noted above by tggzzz) is almost identical to the many threads where people would ask about realtime update rate, then have particular members consistently and misleadingly talk at length about blind time in segmented mode and say how it's all the same thing. That the acquisitions can enter memory while none of that information reaches the screen/user/result makes them part of the dead time for that analysis. You want to talk about dead time? Then discuss it accurately and honestly, rather than pushing your "side" as somehow universal and superior.

You keep repeating the same thing, WHICH is the reason for the repetition...

And you miss the point.

Let's define terms so you can finally see that we are both saying the same thing in different ways.

First, what is "real-time serial decode (analysis)"?
Is that when you are looking at the scope screen, waiting for the scope to show messages as they run over the comm bus we are decoding?
You can have a scenario where you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast.

I might need to capture all of them because I want to reverse engineer what is going on in detail. This needs one long capture with all of them, then stop, and then I have a few hours of work to make sense of all that data. This is not real time.

Or I'm interested only in messages sent by one of the sensors... In which case I need the scope to TRIGGER only on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.

The 3rd scenario is the same as the 2nd, but although we are capturing only one type of data packet, there are still 10-20 a second... We are not interested in anything else, but the poor human is too slow... In this case we set the trigger, capture some packets into segments, and again look at the data in offline mode.

Scenario 1 is perfect for PicoScopes, and for many Saleae-type USB analysers.
It is not good for short-memory scopes.

Scenario 2 is perfect for fast hardware trigger/decode scopes. But most modern hardware-trigger/software-decode scopes will be fast enough to keep pace with a human... If it is too fast for a human, it will be too fast for them too.

Scenario 3 is good for both hardware trigger/decode and hardware-trigger/software-decode scopes, because triggers run in hardware in both, and segmented capture postpones all processing until a predefined number of segments (packets) is captured, negating any blind-time difference in decoding. On a software-decode scope you might have a short pause after the capture is done while the scope decodes, while the hardware one will show data much faster. But neither will miss any packets... (A toy timing model of this is sketched below.)

There is also a marginal scenario between 2 and 3, where some people will be able to read messages faster or slower, and where some will prefer a full hardware trigger/decode scope because it will be able to show things faster...
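A toy timing model of that scenario-3 point, with purely illustrative numbers (not any particular scope's figures):
Code: [Select]
# Packets arriving faster than a human can read, captured into segments.
packet_times = [i * 0.05 for i in range(20)]  # 20 packets over 1 second

def captured(times, capture_s, rearm_s):
    """Return the packet timestamps that land inside an armed window."""
    got, ready_at = [], 0.0
    for t in times:
        if t >= ready_at:                       # trigger armed again?
            got.append(t)
            ready_at = t + capture_s + rearm_s  # window + blind time
    return got

# Segmented mode: tiny per-segment rearm, decode deferred until the end.
print(len(captured(packet_times, 0.001, 2e-6)))  # -> 20, nothing missed
# Free-running display with slow software decode between triggers:
print(len(captured(packet_times, 0.001, 0.1)))   # -> 7, packets missed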

"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
In some cases it is useful to look at the decoded data in near realtime. Think about checking which bits change every now and then when reverse engineering a protocol.
I know, I once used my Keysight to reverse engineer a display protocol by sending data to the screen and looking at the data on the bus... It was quick work to find the coordinate system and the direction of the data... But to a human, real time is like twice per second... My 2 Siglents (and even a Pico on a fast PC) do that without a problem.

The real difference is when you have something happening 5-10 times per second, where you can just see a clear change instead of stuttering in the decode... But because of the hardware trigger, you will still see the waveform change in real time...
And that happens to the Keysight too; as soon as you get fast enough, it will just show some of the data flipping...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
You can have a scenario where you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast.
Immediately you have taken up the same strawman trope that has been explained over and over.

Modern scopes have persistence in their display, even when set to "zero" persistence, and it can be increased as desired up to infinity. Even if the trace were only visible for a single display frame (typically a broadcast rate such as 30 Hz, 50 Hz, or 60 Hz), the eye would still retain the event by visual persistence.

As nctnico mentions above, a serial trigger can focus down onto one part/transaction of a system (such as filtering on SPI addresses) and show what states the data bits contained across many words/packets. In real time, continuously, but with possibly some data lost to the dead time.

Or I'm interested only in messages sent by one of the sensors... In which case I need the scope to TRIGGER only on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.
So, now it's something "very" slow (from a scope perspective). Take a scope without a hardware serial trigger: how does it find that interesting message amongst all the others? It has a larger blind time, which is the problem at hand.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
You can have a scenario where you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast.
Immediately you have taken up the same strawman trope that has been explained over and over.

Modern scopes have persistence in their display, even when set to "zero" persistence, and it can be increased as desired up to infinity. Even if the trace were only visible for a single display frame (typically a broadcast rate such as 30 Hz, 50 Hz, or 60 Hz), the eye would still retain the event by visual persistence.

As nctnico mentions above, a serial trigger can focus down onto one part/transaction of a system (such as filtering on SPI addresses) and show what states the data bits contained across many words/packets. In real time, continuously, but with possibly some data lost to the dead time.

Or I'm interested only in messages sent by one of the sensors... In which case I need the scope to TRIGGER only on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.
So, now it's something "very" slow (from a scope perspective). Take a scope without a hardware serial trigger: how does it find that interesting message amongst all the others? It has a larger blind time, which is the problem at hand.

And again, you read what you want to see, so you can keep arguing...

There is no such thing as a software serial trigger. Get your terminology right. Triggering is a hardware process that starts a new cycle of capture. Anything in software is not a trigger by definition. If it is software, then it is search or decode, not trigger.

I don't care if someone out there abuses the word. And I don't care about toy scopes that have it all as software in an ARM. We are talking about real scopes here.

PicoScope (for instance) has no hardware serial triggers. It has no decoded-data search either. It only has data filtering, where you can filter the messages in the table by a filter string...

I said that scopes without a hardware serial trigger are not good for realtime work because, well, they cannot trigger on specific packets... It's kinda obvious.

In real time, the screen refreshes fully with every trigger, and decoded messages overwrite on top of each other...
I said 1000 messages a second, every 1 ms, on top of each other... On the same screen, overwriting each other... Persistence doesn't apply to decode. For decode you have Normal trigger mode, which keeps the last message/screen up until a new trigger comes and a new capture gets on screen, on top.

As for missing packets in real time: if the scope has hardware serial triggers and you miss packets on the screen, it won't be because the trigger missed them, but because you got a fast burst of 3-4 packets in 100 µs, and you will only be able to see the last one because the previous 3 went by in such a hurry that you didn't see them.

When you blink, that is 100 ms of screen dead time....

And about visual persistence: if you have something displayed on screen for 10 ms and then deleted, you will see that something happened, but good luck reading it with comprehension.... You might as well not enable decoding at all and just look at the flicker of the waveform instead... You can obviously see the bits flickering in the waveform...
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
This whole discussion appears to be ending up down a traditional rathole, equivalent to trying to use a hammer to insert a screw.

Use a scope to ensure signal integrity, i.e. that the analogue waveforms will be correctly interpreted as digital signals.

Then flip to the digital domain, and use a logic analyser's superior clocking, triggering and filtering to discard (i.e. don't store) the irrelevant crap.
In theory. But in practice you don't always know what you are looking for, so it can be helpful to capture a lot of messages on an oscilloscope, save them to disk for analysis on a PC, and go back to the scope to inspect the waveform of the malformed packet.

What do you mean by "malformed packet"?

If the waveform shape is malformed, then that is a signal integrity issue and a scope is the only tool to use. Eye diagrams are extremely helpful here - but you must be able to prevent a digitising scope from interpolating between samples.
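For anyone unfamiliar with the technique, a minimal offline sketch of what an eye diagram is: slice the capture into unit-interval-sized pieces and overlay them. This assumes numpy/matplotlib and a synthetic NRZ stream; a real scope aligns the slices to a recovered clock rather than by naive indexing.
Code: [Select]
# Build a crude eye diagram by overlaying two-UI slices of a waveform.
import numpy as np
import matplotlib.pyplot as plt

fs = 1e9                   # sample rate, 1 GSa/s (example figure)
bit_rate = 10e6            # 10 Mb/s NRZ signal
spui = int(fs / bit_rate)  # samples per unit interval

# Stand-in for captured data: a noisy random NRZ bit stream.
bits = np.random.randint(0, 2, 200)
wave = np.repeat(bits, spui) + 0.05 * np.random.randn(200 * spui)

slice_len = 2 * spui       # two unit intervals per slice
t = np.arange(slice_len) / fs
for i in range(len(wave) // slice_len):
    plt.plot(t, wave[i * slice_len:(i + 1) * slice_len], 'b', alpha=0.1)
plt.xlabel('time (s)'); plt.ylabel('level'); plt.title('eye diagram (sketch)')
plt.show()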

If the data within the packet is malformed, then ignore the scope and capture many bits/packets in a logic analyser, protocol analyser, or printf() statements. If convenient, use a PC + application of your choice.

Don't forget that if you are using a scope to interpret an analogue waveform as a digital signal/message, it may have a different interpretation to the receiving device.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: Someone

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21225
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
In some cases it is useful to look at the decoded data in near realtime. Think about checking which bits change every now and then when reverse engineering a protocol.

If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: Someone

Offline Someone

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: au
    • send complaints here
You can have a scenario where you have thousands of packets per second and a really fast scope that will render all of those (all 1000) to the screen in real time, keeping each on screen for 1 ms... That would obviously be useless because I can't read that fast.
Immediately you have taken up the same strawman trope that has been explained over and over.

Modern scopes have persistence in their display, even when set to "zero" persistence, and it can be increased as desired up to infinity. Even if the trace were only visible for a single display frame (typically a broadcast rate such as 30 Hz, 50 Hz, or 60 Hz), the eye would still retain the event by visual persistence.

As nctnico mentions above, a serial trigger can focus down onto one part/transaction of a system (such as filtering on SPI addresses) and show what states the data bits contained across many words/packets. In real time, continuously, but with possibly some data lost to the dead time.

Or I'm interested only in messages sent by one of the sensors... In which case I need the scope to TRIGGER only on certain messages (set by a trigger condition) and ignore all the rest. And if my sensor of interest sends data only once a second (or when I press a key or something), then I leave the scope running and wait for it to catch a message and show it on the screen. This is interactive work, this is real time, but only if the message rate is slow enough for a slow human operator.
So, now it's something "very" slow (from a scope perspective). Take a scope without a hardware serial trigger: how does it find that interesting message amongst all the others? It has a larger blind time, which is the problem at hand.

And again, you read what you want to see, so you can keep arguing...

There is no such thing as a software serial trigger. Get your terminology right.
There certainly is: various scopes slow down their acquisition rate when using serial trigger/decode, including big-$ scopes from A-brands.

In real time, the screen refreshes fully with every trigger, and decoded messages overwrite on top of each other...
I said 1000 messages a second, every 1 ms, on top of each other... On the same screen, overwriting each other... Persistence doesn't apply to decode. For decode you have Normal trigger mode, which keeps the last message/screen up until a new trigger comes and a new capture gets on screen, on top.
As tggzzz says, why use a scope if all you wanted to see was the decoded data? Use something more appropriate that doesn't have dead time, like a USB-to-XXX serial adaptor or a protocol analyser. The scope is more appropriate when you want to see some analog characteristic of the signal correlated to the data, be that timing, collisions, or coincident signals like power. All of those do show in the persistence (as does the presence or absence of 0s and 1s for each bit, which may be all the information required: confirming aligned addresses, or correct data).

As for missing packets in real time: if the scope has hardware serial triggers and you miss packets on the screen, it won't be because the trigger missed them, but because you got a fast burst of 3-4 packets in 100 µs, and you will only be able to see the last one because the previous 3 went by in such a hurry that you didn't see them.
Finally you are actually getting to the point: the dead time can miss valid triggers. Scopes can run with segmented or waveform-history modes to collect such bursts for inspection, but they are still limited by finite dead time.... which can be a different length when the serial trigger or decode is enabled.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 7463
  • Country: hr
So you would have a trigger on every packet, and its waveform would be shown on screen in real time. And decoding would skip some decodes in a burst and show you only the last one. Instead of an unrecognizable blur and then the last one... Same difference...
In some cases it is useful to look at the decoded data in near realtime. Think about checking which bits change every now and then when reverse engineering a protocol.

If you are looking for changes in bits, then use a digital domain tool (LA, protocol analyser, printf()) to capture the bits, and a PC application to show diffs.

A scope is a non-optimum tool for that, even if it can be bent to the purpose.

I agree, but the premise of the question is using a scope for that, not what the alternatives are.

I did agree with you many messages ago that you can even use an analog CRT scope for SI and some sort of LA for the actual decoding; how it was done for ages...
When you don't want to look at lots of decoded data, a digital scope with decoding is useful because you can do both at the same time. It saves time and fiddling with connections...

But I don't want to look at 1000 messages on a scope screen. Even super-expensive scopes with HUGE 15"  :-DD screens are a joke compared to the screen on a PC... So for that job, for me it is an MSO Pico or an LA... or the occasional print to UART. For stuff I'm making, I bridge the Pico's trigger deficiencies by toggling a pin of the µC at critical points and triggering off that. Kind of "breakpoints" for the scope.....
"Just hard work is not enough - it must be applied sensibly."
Dr. Richard W. Hamming
 

