Siglent scopes do not support variable timebase. Many scopes do not.
I know Keysight does on Megazooms and I think some Rigols do.
LeCroy, Micsig, and the R&S models I looked into do not. I don't think Tektronix supports it either, at least not on the scopes I remember, though given the various architectures they've shipped over the years, some models might support it and some might not.
For what it's worth, the scopes that do support it still sample at the same fixed 1-2-5 (or some other fixed) increments and just show part of the acquired data on screen. So 33 ns/div will be sampled as if it were 50 ns/div, and only the first 330 ns of the 500 ns record will be shown (if there are 10 divisions on screen). So there is no sampling benefit from being able to choose finely how the scope will sample; in fact, you get even more blind time this way.
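A minimal sketch of that behaviour (assuming a 10-division screen and rounding up to the next 1-2-5 step; the function name and numbers are illustrative, not from any particular scope's firmware):

```python
import math

def next_125_step(t_per_div):
    """Round a requested timebase (s/div) up to the next 1-2-5 value."""
    decade = 10 ** math.floor(math.log10(t_per_div))
    for mult in (1, 2, 5, 10):
        if mult * decade >= t_per_div:
            return mult * decade

DIVS = 10                            # horizontal divisions on screen
requested = 33e-9                    # user asks for 33 ns/div
acquired = next_125_step(requested)  # scope actually acquires at 50 ns/div

shown = requested * DIVS             # 330 ns displayed
captured = acquired * DIVS           # 500 ns acquired per record
print(f"acquire at {acquired * 1e9:.0f} ns/div, "
      f"show {shown * 1e9:.0f} ns of {captured * 1e9:.0f} ns "
      f"({shown / captured:.0%} of the record)")
```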
To me, the most useful application of a fine timebase is decoding serial protocols on my Keysight, but that is mostly because the screen is so small and you want to maximize the useful part you are looking at. With much larger screens (10" and above) I don't find it that important; I just make sure what I need is fully on screen and legible enough.
thank you!
About the scopes sampling at fixed increments and just displaying cropped data to accommodate odd timebases: I see the Analog Discovery will adjust its sample rate for every timebase selection WaveForms allows. So how do you know that the other scopes, the ones that at least display non-fixed-step timebases, are cropping the data rather than actually sampling at the rate an arbitrary timebase would call for? (I guess with the Keysight it's easy to figure out, since it changes sample rate only when you hit the fixed timebase intervals. But still, it only shows you the data that's on the display; it won't zoom out to what you'd expect based on the sample rate and the 1 Mpt of memory.)
thanks again!
The AD is not representative of how most scopes work. It is a slow ADC connected to an FPGA, and it streams raw data to a PC for everything else.
Only the triggering happens onboard; all waveform processing is done on the PC from the streamed data. For such systems it pays for the "sampling head" to aggressively decimate the data sent to the PC. It is only a 32-ksample buffer anyway.
On real scopes you have buffers of hundreds of megasamples; even the cheap SDS800xHD has a 100 Mpts buffer.
So you cannot brute-force it; the processing needs optimizations.
Scopes do not actually change the sample rate at all. Most ADCs only accept a limited range of clock rates, so the ADC samples all the time at its fastest rate. That stream is fed to the trigger engine, and after that (or sometimes in parallel) it is decimated to achieve the equivalent of a slower sample rate.
Decimation is usually done by simply discarding samples: keep every 10th sample of a 1 GS/s stream and you get the equivalent of 100 MS/s. You can only discard an integer number of samples, and usually the factor is something nicely divisible. The FPGA has to process huge amounts of data, and that is done by logic preset to some fixed values.
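A rough software sketch of that discard-style decimation (the FPGA does this in fixed hardware logic; the rates and factor here are illustrative):

```python
import numpy as np

ADC_RATE = 1_000_000_000  # the ADC always runs at its full 1 GS/s

def decimate(samples, factor):
    """Keep every `factor`-th sample; the equivalent sample rate
    becomes ADC_RATE / factor."""
    return samples[::factor]

raw = np.arange(1_000)      # stand-in for a chunk of the raw 1 GS/s stream
slower = decimate(raw, 10)  # equivalent of 100 MS/s
print(f"{len(raw)} -> {len(slower)} samples, "
      f"{ADC_RATE / 10 / 1e6:.0f} MS/s equivalent")
```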
Digital circuits and processors are very efficient when you round things to certain fixed values. Multiplying or dividing by 2 is achieved by simply shifting a binary number left or right, DSP blocks favour certain data sizes, and so on.
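For example (a trivial illustration, not scope firmware):

```python
x = 1000
print(x << 1)  # shift left one bit: multiply by 2 -> 2000
print(x >> 2)  # shift right two bits: integer divide by 4 -> 250
# This is one reason decimation factors and buffer sizes tend to be
# powers of two or otherwise "nice" values: the hardware can use
# shifts instead of full multiplier/divider circuits.
```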