OK, thanks. In other words, short pulses can be displayed for several screen update frames, so that they're detectable by the human eye?
It's really about how many waveforms per second it can capture. Things like measurements and decoding that can't be done in real time at the higher acquisition speeds create dead time before the trigger can be rearmed, since the scope can't overwrite the record while it's still being processed. If you're counting events, for instance, such dead time makes the result useless: you will end up miscounting back-to-back events. The wfm/s spec is typically stated with processing set to an absolute minimum, meaning no RMS or peak voltage measurements, no decoding, no mask testing, and probably no complex triggers. But a scope in this mode is rather useless, and where the rubber hits the road is the acquisition rate you can get while making measurements, like counting mask test misses.
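To put a number on that miscounting, here's a rough model (Python, assuming randomly timed, i.e. uncorrelated, events and purely hypothetical figures) of how little of the signal the scope actually sees:

    # Fraction of randomly timed events a scope actually catches when each
    # record of duration r is followed by dead time d. Numbers hypothetical.
    record_s = 1e-6    # 1 us acquisition record
    dead_s   = 19e-6   # 19 us of processing/rearm dead time

    live_fraction = record_s / (record_s + dead_s)
    print(f"events captured: {live_fraction:.1%}")  # 5.0% - a useless count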
Things like segmented acquisition also impose blind periods. This is a much more complex problem: some very high acquisition rate scopes have poor processing, or vice versa, and in reality the wfm/s spec is pointless since, as you point out, it's too fast to process with a Mk1 Eyeball anyway. The best way to determine the actually useful acquisition rate - which depends on many factors, including any auto-zero time for active probes and the measurement & decode (M&D) implementation - is to hook up a counter to the ext trig out on the scope. This is the ONLY way to determine what the blind time is, if any, and how various settings, like enabling measurements or the DVM, affect it. (Yes, even the free DVM every vendor throws in these days can impose a blind time.)
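As a sketch of what that counter reading gives you (Python again, all numbers hypothetical, and assuming the ext trig out pulses once per accepted acquisition - worth verifying on your particular scope):

    # Turn counter readings from the scope's ext trig out into blind-time
    # figures, e.g. baseline vs. with a measurement enabled.
    def blind_time(counter_hz, record_duration_s):
        return 1.0 / counter_hz - record_duration_s  # dead time per cycle

    record = 1e-6                            # 1 us record: 100 ns/div x 10 div
    baseline  = blind_time(50_000, record)   # counter with everything off
    measuring = blind_time(2_000, record)    # counter with an RMS measurement on

    print(f"baseline:  {baseline * 1e6:6.1f} us blind per cycle")   #  19.0 us
    print(f"measuring: {measuring * 1e6:6.1f} us blind per cycle")  # 499.0 us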
To be honest, I wish scopes would state on screen the blind time between the end of a record and the next trigger. This could either be straight-up measured against the timebase, or calculated from the processing time and record duration. (Maybe some scopes do provide this?)
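The arithmetic is trivial either way: blind time = trigger-to-trigger period - record duration, which with the hypothetical numbers above is 20 us - 1 us = 19 us per cycle, i.e. blind 95% of the time. That's exactly the figure I'd want on screen.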