Scope timebases on the Keysight 3000T range from 500 ps to 50 s per division. It is hard to do tricks like that across 11 orders of magnitude of time span.
Why? The timebase, combined with the memory depth, ultimately determines the sampling rate. The sampling rate is set so as to ensure that the time covered by the buffer is never less than the time represented by the screen. The algorithm I described is independent of the sample rate.
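That constraint is easy to sketch. A minimal illustration in C, with all names and numbers hypothetical (a 10-division screen and the specific memory depths are assumptions, not anything from the scope's documentation): the sample rate is capped so that the capture buffer spans at least the time shown on screen, and by the ADC's maximum rate.

```c
/* Hypothetical sketch: the sample rate is the smaller of what the ADC can
 * do and what the memory depth allows while still covering the screen.
 * Assumes a 10-division-wide display. */
double max_sample_rate(double s_per_div, long memory_depth, double adc_max_sps)
{
    double screen_time = 10.0 * s_per_div;          /* time shown on screen   */
    double rate = memory_depth / screen_time;       /* buffer covers >= screen */
    return rate < adc_max_sps ? rate : adc_max_sps; /* never above ADC limit  */
}

/* e.g. 4 Mpts at 10 ms/div is buffer-limited to 40 MSa/s,
 * while at 5 ns/div the assumed 5 GSa/s ADC limit dominates. */
```

This is why the same hardware samples at full rate on fast timebases but decimates heavily on slow ones.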
Also, you are forgetting about measurements; those have to be optimized for speed, and for that you want streamlined data formats and data flow.
Sure, and that would place some additional demands on at least some of the data. I fully expect that the sampled data would have to be decimated and processed in real time by the FPGA. But these are separate processing paths.
I admit that if you're using DRAM, then a memory controller that makes possible the kinds of parallel processing of data needed here might be quite a challenge to design. Decimation into a separate buffer (likely internal to the FPGA) would have to be able to keep up with the fastest sampling rate the scope is capable of. Fast SRAM is more expensive but might be needed for something like this if the sample rate is high enough. You might even incorporate SRAM into your DRAM memory controller design and use that as a write-through cache to satisfy reads when possible.
Also, you cannot re-trigger while capturing unless you have a datapath that constantly changes where it writes data.
Why is that? As you note, a trigger event merely sets a marker. That happens while capturing. The implementation I described requires only a single trigger event location pointer, which always points at where the last trigger event occurred.
Capturing goes into a circular buffer of a certain size, all the time. The trigger engine only leaves a marker where the trigger happened; then you roll over and keep capturing until you reach the trigger point from the other side. Then you start dumping data to another memory location, leaving the last data buffer for the display/measurement engine to process. Those are also your history buffers.
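One literal reading of that scheme can be sketched in a few lines of C. Everything here is hypothetical (the `capture` struct, `DEPTH`, the per-sample entry point): samples stream into a ring, the trigger engine only records a marker, and capture stops when the write pointer comes back around to the marker, at which point the buffer is handed off and a fresh one takes over.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DEPTH 1024  /* assumed ring depth, in samples */

struct capture {
    uint16_t buf[DEPTH];
    size_t   wr;        /* write index, wraps modulo DEPTH                  */
    long     trig;      /* ring index of the trigger marker, -1 = none yet  */
    size_t   remaining; /* samples left until we reach the marker again     */
    bool     done;      /* buffer complete, ready for display/measurement   */
};

void capture_init(struct capture *c)
{
    memset(c, 0, sizeof *c);
    c->trig = -1;
}

/* Called once per ADC sample by the acquisition datapath. */
void capture_sample(struct capture *c, uint16_t sample, bool trigger_event)
{
    if (c->done)
        return;                  /* waiting for the buffer swap */

    c->buf[c->wr] = sample;

    if (trigger_event && c->trig < 0) {
        c->trig = (long)c->wr;   /* just leave a marker...                  */
        c->remaining = DEPTH;    /* ...then run until we wrap back to it    */
    }

    c->wr = (c->wr + 1) % DEPTH;

    /* Reached the trigger point "from the other side": hand the ring off.
     * A real design might stop earlier to keep some pre-trigger samples. */
    if (c->trig >= 0 && --c->remaining == 0)
        c->done = true;
}
```

When `done` goes true, the datapath would point `buf` at fresh memory and clear the state, leaving this ring as a history segment.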
That's exactly what I was describing above. The primary difference is that the scheme I described re-arms the trigger after the acquisition passes the location boundary of the display, rather than the end of the buffer. Whether the buffer location for the next acquisition is reset to the start of the current N-point buffer section depends on whether a trigger event occurred within the last N points: if no event occurred, the acquisition location is reset to the start of the existing buffer area, because the rest of the buffer area might contain a trigger event, and that would need to be preserved.
Think of the implementation I described as a conditional circular buffer implemented within another circular buffer that's twice the size. The point of it is to ensure that you always preserve a memory region of N points that contains the last seen trigger event within it, as long as at least one trigger event was seen at all, as well as to ensure that the trigger fires, and the resulting processing happens, as often as possible.
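Here is a heavily hedged sketch of my reading of that "circular buffer within a circular buffer" idea; the names (`acq`, `N`, `acq_sample`) and several policy details are my guesses, not a definitive implementation. A 2N-sample ring is treated as two N-point halves. Crossing an N-point boundary re-arms the trigger; if the half just filled holds the last trigger event it is preserved and capture moves on, and if it holds no trigger while the other half does, capture wraps back and reuses the half it just filled so the triggered region survives.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define N    512        /* assumed points per display window       */
#define RING (2 * N)    /* outer ring is twice the display width   */

struct acq {
    uint16_t buf[RING];
    size_t   wr;        /* write index into the 2N ring             */
    long     last_trig; /* ring index of last trigger event, -1 none */
    bool     armed;     /* re-armed at each N-point boundary         */
};

static size_t half_of(size_t idx) { return idx / N; }  /* 0 or 1 */

void acq_init(struct acq *a)
{
    memset(a, 0, sizeof *a);
    a->last_trig = -1;
    a->armed = true;
}

/* Called once per (possibly decimated) sample. */
void acq_sample(struct acq *a, uint16_t sample, bool trigger_event)
{
    a->buf[a->wr] = sample;

    if (a->armed && trigger_event) {
        a->last_trig = (long)a->wr;  /* the single trigger-location pointer */
        a->armed = false;            /* one event per armed window          */
    }

    size_t next = (a->wr + 1) % RING;

    if (half_of(next) != half_of(a->wr)) {
        /* Crossed an N-point boundary: re-arm, then decide where to write. */
        a->armed = true;

        bool trig_in_old_half =
            a->last_trig >= 0 &&
            half_of((size_t)a->last_trig) == half_of(a->wr);

        if (!trig_in_old_half && a->last_trig >= 0)
            /* The half just filled holds no trigger, but the other half
             * holds the last one: wrap back and reuse the current half. */
            next = half_of(a->wr) * N;
    }

    a->wr = next;
}
```

Under this policy there is always an N-point region containing the last seen trigger event (once one has been seen at all), and the trigger re-arms every N points rather than every 2N.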
My question is: how is what I described so different from what scopes currently do that it cannot be implemented with the hardware used by low-end scopes? What mandates that the trigger fire at most once per the amount of time represented by the combination of memory depth and sample rate?