There are scopes using DDR for their acquisition memory, across the entire range. But it comes with limitations, and they can't just add more capture depth without compromising the user interface (it's been done before, when Mpts of memory was large and zooming in and out was painfully slow).
That last point makes me wonder why that is, if the memory is fast enough that it isn't imposing a bottleneck on the rest of the processing pipeline. I know that certain types of operations you'll ultimately want to build a display from, like the FFT, work better if you scale them to cover as much of the sample memory as possible, but I still don't understand why normal UI operations would be compromised by more memory.
For instance, let's say that you've got your display window set to cover the entire buffer (Siglent's current lineup always behaves like this). There isn't anything that says that you have to process every point in the buffer under those conditions, right? Or is there?
Low-end scopes are the 5-10 GS/s market,
Oh. I didn't realize there was that distinction. I was rather under the impression that 5-10 GS/s was "midrange". No matter. I don't mind using your terminology for this.

if you want to talk entry level then yes, several-hundred-dollar scopes with 1 GS/s are there, but they can't support expensive acquisition systems.
Why would the acquisition system have to be more expensive if you're using DDR memory for the acquisition buffer, when DDR memory controller designs for FPGAs are widely available these days?
DDR chips are just a tiny part of what makes deep acquisition memory work; Rigol have pushed that out further each year:
https://www.flickr.com/photos/eevblog/8022112817/in/album-72157631618295437/
https://www.skhynix.com/eolproducts.view.do?pronm=DDR2+SDRAM&srnm=H5PS5162GFR&rk=03&rc=consumer
and they work and feel like old Tek scopes in many respects, with the slow updates and difficulty in accessing the information in the deep memory.
Can you be more specific in what you mean by the "difficulty in accessing the information in the deep memory"?
You need to get the idea of FPS out of your head; it's not a good measure of how well the scope is performing.
No, but it's a good measure of how fluid the display appears to be. It's a perception thing. Humans simply do not benefit from a display change rate that exceeds something like 120 times per second (they might be able to see changes that are faster than that, but that matters little when human reaction times are an order of magnitude longer). And what we're talking about here is an oscilloscope, not a real-time first-person shooter game where a person's ability to react in real time to changes in the display matters. For oscilloscope display purposes, a display update rate of 30 times/second (which is the very same thing as saying "30 FPS") is perfectly adequate.
That does not mean that the display should be skipping events which occur more quickly than that. A fast glitch should still be shown, within the persistence parameters that have been set.
And note that I'm talking about display update rates here. Obviously, if the rate at which the data gets decimated and processed for display drops below some reasonable minimum (20 times per second?), it'll begin to impact the user's perception of the display, and perhaps even of the UI.
There's a lot more that goes into making the UI responsive than just the rate at which data on the display is updated. With the sole exception of the initial amount of time it takes to decimate the acquired data and get it to the screen after you make a UI change, all of the rest of the lag in the UI is down to the coding of the UI and the speed of the UI processor itself.
What is more important is how much data actually gets to the screen,
Data on the screen is already in decimated form. So what exactly do you mean by "how much data actually gets to the screen" here?
More to the point, you can't see on the screen more data than the display itself is actually capable of displaying. If I have a 1000-pixel-wide display, that means I can only display 1000 pixels of horizontal information. Period. Even if I have a million points of acquisition within the time frame represented by the screen, I can't display all of them. I can only display a 1000-pixel representation of them.
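To put that in concrete terms, here's a rough Python sketch of one way to build that 1000-pixel representation: peak-detect decimation, where each display column keeps only the min and max of the samples that land in it. The 1000-column width, the 1 Mpts buffer, and the function name are just illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

def peak_detect_decimate(samples: np.ndarray, display_width: int = 1000):
    """Reduce the buffer to one (min, max) pair per display column."""
    n = (len(samples) // display_width) * display_width  # drop the ragged tail
    cols = samples[:n].reshape(display_width, -1)        # one row per display column
    return cols.min(axis=1), cols.max(axis=1)            # vertical envelope per pixel

# 1 Mpts of acquisition collapses to 1000 columns; a fast glitch still shows up
# because min/max keeps the extremes of every column.
buf = np.random.randn(1_000_000)
lo, hi = peak_detect_decimate(buf)
```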
And that means that I might be able to take shortcuts, depending on the set of operations that are being performed in order to transform the data in the acquisition buffer into the data that is actually displayed. For instance, for the FFT, I can sample subsets of the acquisition data, and as long as my technique preserves the frequency domain statistical characteristics of the data in the acquisition buffer, what lands on the screen will be a reasonable representation of the FFT, right? Moreover, I can't display more than 1000 discrete frequency bins from the FFT at any given time, either, precisely because my display is limited to 1000 pixels of width.
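Something like this is what I mean by sampling subsets of the acquisition data for the FFT: a Welch-style average of a handful of windowed segments, with the resulting bins collapsed down to the display width. Again, only a sketch; the segment count, segment length, and max-binning choice are my assumptions rather than anyone's actual implementation.

```python
import numpy as np

def display_fft(samples: np.ndarray, n_segments: int = 16,
                seg_len: int = 4096, display_width: int = 1000):
    """Average the spectra of a few segments, then bin to the display width."""
    starts = np.linspace(0, len(samples) - seg_len, n_segments).astype(int)
    window = np.hanning(seg_len)
    acc = np.zeros(seg_len // 2 + 1)
    for s in starts:                                  # average |FFT|^2 over the segments
        seg = samples[s:s + seg_len] * window
        acc += np.abs(np.fft.rfft(seg)) ** 2
    acc /= n_segments
    n = (len(acc) // display_width) * display_width   # one value per display column
    return acc[:n].reshape(display_width, -1).max(axis=1)

spectrum = display_fft(np.random.randn(1_000_000))    # 1 Mpts in, 1000 display bins out
```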
The intensity-grading mechanism is one place where you can't get away with subsampling, though. At the same time, the transforms it requires are very simple and, more importantly, because the target of the transforms is the display, the target buffer can be relatively small, which means it can be very fast.
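Here's roughly what I have in mind for that: every sample gets folded into a display-sized hit-count buffer, so the per-sample work is a couple of index calculations and an increment, and the buffer being written to is tiny compared to the acquisition memory. The dimensions and voltage range below are purely illustrative.

```python
import numpy as np

def grade_intensity(samples: np.ndarray, width: int = 1000, height: int = 256,
                    v_min: float = -1.0, v_max: float = 1.0):
    """Accumulate hit counts into a width x height buffer (a crude digital phosphor)."""
    x = np.arange(len(samples)) * width // len(samples)              # sample index -> column
    y = np.clip(((samples - v_min) / (v_max - v_min) * (height - 1)).astype(int),
                0, height - 1)                                       # voltage -> row
    hits = np.zeros((height, width), dtype=np.uint32)
    np.add.at(hits, (y, x), 1)                                       # every sample counted
    return hits                                                      # brightness = hit count

phosphor = grade_intensity(np.random.randn(1_000_000))
```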
when you're on a long timebase then the update can only be as fast as the acquisition: a 1 s sweep can only provide a 1 s update.
Of course. Nobody is arguing otherwise here. But that's not the situation we're talking about, is it? We're talking about the situation where something in the processing pipeline is a bottleneck, i.e. one where if the processing pipeline were faster, then the display update rate would be faster.
What you miss is that, to get the information to the screen, most scopes stop the acquisition and have a substantial blind time where, although the screen may be flickering away quickly at high FPS, much of the information captured by the ADC is completely ignored.
Okay. And why is that? Why can't the scope continue to acquire samples unless the triggering mechanism can't keep up?
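And just so we're quantifying the same thing, here's the back-of-the-envelope arithmetic I'd use for that blind time, with made-up numbers (1 GS/s, 1 Mpts records, and a hypothetical 100 waveforms/s re-arm rate):

```python
sample_rate = 1e9          # 1 GS/s (assumed)
record_len = 1_000_000     # 1 Mpts per acquisition (assumed)
waveforms_per_sec = 100    # hypothetical re-arm / update rate

capture_time = record_len / sample_rate            # time actually digitised per sweep
live_fraction = capture_time * waveforms_per_sec   # fraction of real time captured
print(f"capturing {live_fraction:.0%} of the signal, blind for {1 - live_fraction:.0%}")
# -> capturing 10% of the signal, blind for 90%
```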
FFT is a good example, as the computational cost grows with N log N, so it's not just a matter of linear scaling to support deeper memory. The processing demands blow up quickly, unlike the display plotting, which is proportional to the memory depth multiplied by the output display resolution (where the low-latency RAM is the limiting factor).
So you wind up with a limited number of points in the FFT's baseline. That doesn't mean you have to sacrifice acquisition memory to get that. You can just as easily subsample the acquisition memory to build your FFT, right? The original data remains there to be examined in other ways if you want.
More memory gets you more flexibility. How is it ever a downside when, at worst, you can always pretend that you have less memory to work with?
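By "pretend you have less memory" I mean something like capping the number of points any display-side operation ever sees per frame, regardless of how deep the capture is. A crude sketch, with an arbitrary budget value (a real pipeline would presumably use peak-detect rather than a bare stride so that fast glitches stay visible):

```python
import numpy as np

MAX_PROCESS_POINTS = 100_000   # hypothetical per-frame processing budget

def view_for_processing(acq_buffer: np.ndarray) -> np.ndarray:
    """Hand display-side code a bounded number of points, however deep the capture."""
    if len(acq_buffer) <= MAX_PROCESS_POINTS:
        return acq_buffer
    step = len(acq_buffer) // MAX_PROCESS_POINTS
    return acq_buffer[::step]   # the full record is still there to zoom into later
```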
I'm really going to have to work up a block diagram of what I have in mind here. I can hardly believe someone else hasn't already thought of the overall architecture I have in mind.