OTOH oscilloscopes aren't really designed internally for this sort of programming pattern and I can see why they don't do it.
It depends entirely on whether the acquisition memory is available at scope stop. Frankly, it almost has to be, because you can zoom and pan on that same data.
Nope. It has to be done in hardware because of the enormous amount of data.
The acquisition memory will be on one bus. Data is fed to it via DMA direct from the ADC.
Also connected to that RAM is some sort of FPGA/ASIC which takes data from there and copies it to another buffer, scaled to fit the number of pixels visible on screen (depending on your timebase). It will also do the sin(x)/x interpolation, etc. There's no way this scaling/interpolation could be done in software on a cheap CPU; data can be coming in at 1 GS/s even on low-end oscilloscopes.
This secondary buffer is all the main CPU has direct access to, not the original sample data. This is the reason most calculations are done "on screen".
It's not impossible to process the whole of sample memory but it would either have to be done in batches on the main CPU or directly inside the FPGA/ASIC.
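To make that concrete, here is a minimal sketch (Python/NumPy, purely illustrative, not any vendor's firmware) of the kind of peak-detect decimation that hardware stage performs: a large acquisition buffer reduced to one min/max pair per displayed column, with 1200 columns chosen to match the DS1054Z's on-screen record.

```python
# Rough sketch of peak-detect decimation, the sort of scaling the FPGA/ASIC
# stage does in hardware. Names and sizes here are illustrative only.
import numpy as np

def decimate_to_screen(samples: np.ndarray, screen_cols: int = 1200):
    """Map the raw ADC samples onto screen_cols columns, keeping the min and
    max of each column so narrow glitches remain visible after scaling."""
    n = len(samples) // screen_cols * screen_cols   # drop the ragged tail for simplicity
    cols = samples[:n].reshape(screen_cols, -1)     # one row per screen column
    return cols.min(axis=1), cols.max(axis=1)       # per-column (min, max) display buffer

# Example: 12M samples squeezed into 1200 display columns
raw = np.random.randint(0, 256, 12_000_000, dtype=np.uint8)
mins, maxs = decimate_to_screen(raw)
```

In the real instrument this happens in gateware at the ADC rate; the sketch only shows the data shape: many samples in, one min/max pair per displayed column out.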
I certainly don't disagree with this. But see below.
No idea, but since we're talking about software decoding, there isn't anything in principle or in practice (that I'm aware of) that would prevent it.
Apart from (e.g.) trying to access the 24MB of sample memory in 1200-byte chunks (on a DS1054Z).
That isn't something that prevents it. And remember, the situation here is when the scope is stopped. And especially if the implementation is done in the way I described, you'd be accessing chunks backwards, starting with the one immediately prior to the displayed one, and accessing only as many chunks as needed to reach the packet start location, then moving forwards from there and saving the decoded values (along with their locations) so that you can easily display them should the user decide to pan/zoom.
In principle it could be done - they did something like it for the DS1054Z's improved FFT.
That's exactly my point. There isn't anything that prevents them from implementing correct decoding. And I'd argue that it's not necessarily uneconomical to do so either, since it's just software: it takes some additional time, but it only has to be done once.
In practice it doesn't seem like anybody is. Not for serial decoding.
I'm thinking it would be too slow.
Why in the world would it be too slow if you only do it when the scope is stopped??
It's a different matter altogether if the scope is running. In that case, you can just do on-screen decoding and be done with it. It might not be correct, but at that point there are real tradeoffs that you can't get away from.
Yes, the decode could be done in the FPGA/ASIC but that assumes there's enough gates left over and they bothered to do it.
Yeah, and it would also require additional memory that the FPGA/ASIC would have to write data and position information to. I was thinking a strictly software-based approach. Having hardware support opens up a lot of possibilities.