It was harder than I thought to make objective tests of this. Strange that I never learn...
I expected the test frequency to affect the capture rate. That's why I chose a fairly high test frequency, so that its period was much shorter than the scope's minimum retrigger/re-arm time. What I didn't expect was that the capture rate would decrease when I increased the test frequency. The effect was consistent, going from 1 MHz up to 30 MHz in 1 MHz steps.
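For anyone who wants to repeat this, here is a rough pyvisa sketch of how the sweep could be automated. It assumes a SCPI-capable generator with 33500-series style commands, and the scope's trigger-out fed into a counter that answers the standard MEAS:FREQ? query (one trigger-out pulse per acquisition, so the counter reads the capture rate directly). The VISA addresses are placeholders, not my actual setup:

    import time
    import pyvisa

    rm = pyvisa.ResourceManager()
    # Placeholder VISA addresses -- substitute your own instruments.
    gen = rm.open_resource("TCPIP0::192.168.1.10::INSTR")      # signal generator
    counter = rm.open_resource("TCPIP0::192.168.1.11::INSTR")  # counter on the scope's trigger-out

    def capture_rate():
        """Read the trigger-out pulse rate (one pulse per acquisition)."""
        return float(counter.query("MEAS:FREQ?"))

    gen.write("SOUR1:VOLT 1.0")  # keep the amplitude fixed for this sweep
    gen.write("OUTP1 ON")

    # Step the test frequency from 1 MHz to 30 MHz in 1 MHz steps.
    for f_mhz in range(1, 31):
        gen.write(f"SOUR1:FREQ {f_mhz}e6")
        time.sleep(2)  # let the counter reading settle
        print(f"{f_mhz} MHz: {capture_rate():.0f} wfms/s")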
Then I noticed that even changing the output amplitude of the test signal affected the capture rate: higher amplitude lowered the capture rate. Next, I kept the amplitude the same but changed the scope's vertical scale, with the same effect: going from 2 V/div to 100 mV/div substantially decreased the capture rate! And yes, turning off all channels gave the maximum capture rate (other settings unchanged).
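The vertical-scale run can be scripted the same way. This reuses rm and capture_rate() from the sketch above; :CHANnel1:SCALe is the InfiniiVision-style command, and again the address is a placeholder:

    # Reuses rm, time and capture_rate() from the previous sketch.
    scope = rm.open_resource("TCPIP0::192.168.1.12::INSTR")  # placeholder address

    # Step the vertical scale from 2 V/div down to 100 mV/div.
    for vdiv in (2.0, 1.0, 0.5, 0.2, 0.1):
        scope.write(f":CHAN1:SCAL {vdiv}")
        time.sleep(2)  # let the counter reading settle
        print(f"{vdiv} V/div: {capture_rate():.0f} wfms/s")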
I assumed that only the amount of captured data would have an effect (the memory depth is set automatically on these Agilent scopes), but obviously that's not the case. I'm not sure whether the capture rate drops because the data processing time increases, or because substantially more data has to be transferred to the graphics processor.
The conclusion is that when comparing scope capture rates, it seems "fair" to use the Auto setup, since so many parameters come into play. However, the Auto setup result may not be (and typically won't be) the setting you would choose for an actual measurement, which can make the comparison "unfair" in its own way, e.g. when comparing scopes with different numbers of horizontal divisions.
Also, it might be worth noting that the signal content can affect the capture rate on DSOs, which is good to know when a high capture rate is important.
Some additions to the test in the first post (not tested with all settings, so there may be exceptions); a sketch for stepping through the acquisition modes follows the list:
Acquisition mode:
- Peak detect did not change the capture rate
- Average mode and high resolution mode lowered the capture rate
Display mode:
- Dot display mode did not change the capture rate
Horizontal mode:
- XY mode did not change the capture rate
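Cycling through the acquisition modes is just a short loop, again reusing scope and capture_rate() from the sketches above. The :ACQuire:TYPE values are the InfiniiVision X-series ones (average count left at its default):

    # Reuses scope, time and capture_rate() from the sketches above.
    for mode in ("NORM", "PEAK", "AVER", "HRES"):
        scope.write(f":ACQ:TYPE {mode}")
        time.sleep(2)  # let the counter reading settle
        print(f"{mode}: {capture_rate():.0f} wfms/s")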