Hi guys,
I have been playing around with my new Rigol oscilloscope and recently came across a behavior that seems odd to me. Specifically, the oddity revolves around the sample rate the oscilloscope is choosing. To get my point across, please take a minute to check out the attached screenshot.
If you look at that screenshot, you will notice that the memory depth selection is 12M (12,000,000 samples) and that the horizontal timebase is 50 ms per division (0.05 seconds), for a total horizontal window of 0.6 seconds (0.05 × 12 divisions = 0.6).
Now, if I take the 12,000,000 samples (memory depth) and divide that number by 0.6 seconds (total time being viewed), I should end up with the samples per second (sample rate) that the oscilloscope should capture for that range of time: 12,000,000 / 0.6 = 20,000,000 Sa/s (20 MSa/s).
Assuming that my math is correct, why is the oscilloscope choosing a sample rate of 10 MSa/s, as seen on the screenshot? It looks like rather than choosing the maximum sample rate of 20 MSa/s, it is choosing 10 MSa/s and doubling the amount of time being captured from 0.6 seconds to 1.2 seconds.
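In case it helps, here is my arithmetic as a quick Python sanity check (plain integer math, nothing scope-specific; the numbers are just the ones from the screenshot):

```python
# Numbers from the screenshot
memory_depth = 12_000_000          # samples (12M memory depth)
time_per_div_ms = 50               # 50 ms per horizontal division
divisions = 12                     # horizontal divisions across the screen

window_ms = time_per_div_ms * divisions        # 600 ms = 0.6 s visible
expected_rate = memory_depth * 1000 // window_ms
print(expected_rate)               # 20000000 -> the 20 MSa/s I expected

# What the scope actually reports
actual_rate = 10_000_000           # 10 MSa/s
captured_ms = memory_depth * 1000 // actual_rate
print(captured_ms)                 # 1200 ms -> 1.2 s, double the on-screen window
```

So the 10 MSa/s reading is consistent with the scope filling the full 12M memory over 1.2 s instead of over the 0.6 s that is actually displayed.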
Why would the oscilloscope do that? Why is it not maximizing the sample rate for my chosen time range?
Thanks.