EEVblog Electronics Community Forum
Products => Test Equipment => Topic started by: Rene on November 02, 2014, 04:13:28 am

Hi guys,
I have been playing around with my new Rigol oscilloscope and recently came across a behavior that seems awkward to me. Specifically, the awkwardness revolves around the sample rate the oscilloscope chooses. To get my point across, please take a minute to check out the attached screenshot.
If you look at the screenshot, you will notice that the memory depth is set to 12M (12,000,000 samples) and the horizontal timebase is 50 ms/div (0.05 s), for a total window of 0.6 seconds (0.05 x 12 = 0.6).
Now, if I take the 12,000,000 samples (memory depth) and divide by 0.6 seconds (the total time being viewed), I should get the sample rate the oscilloscope would need to fill that time range: 12,000,000 / 0.6 = 20,000,000 Sa/s (20 MSa/s).
Assuming my math is correct, why is the oscilloscope choosing a sample rate of 10 MSa/s, as seen in the screenshot? It looks like, rather than choosing the maximum sample rate of 20 MSa/s, it is choosing 10 MSa/s and doubling the amount of time being captured from 0.6 seconds to 1.2 seconds.
Why would the oscilloscope do that? Why is it not maximizing the sample rate for my chosen time range?
Thanks.

I have noticed this too.
My assumption is that they had a limited amount of room in the FPGA and so had to hard-code a fixed set of memory depth and sample rate combinations to suit their algorithmic and space constraints.
The sample rates seem to go up in a 1-2-5 sequence until you get beyond 10 MSa/s, after which the scheme appears to be chosen to fit some internal counter/divider arrangement:
100kSa/s
200kSa/s
500kSa/s
1MSa/s
2MSa/s
5MSa/s
10MSa/s
25MSa/s
50MSa/s
125MSa/s
250MSa/s
500MSa/s
1GSa/s
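If the scope really is limited to a fixed table of rates like the one above (my guess, not anything from Rigol's documentation), the 10 MSa/s result falls out naturally: the scope cannot run faster than the rate that exactly fills memory in the visible window, so it drops to the next available rate at or below that, and the record simply gets longer. A minimal sketch of that selection logic:

```python
# Hypothetical selection logic, assuming the scope is restricted to the
# fixed rate table listed above (an assumption, not confirmed behavior).
AVAILABLE_RATES = [100e3, 200e3, 500e3, 1e6, 2e6, 5e6,
                   10e6, 25e6, 50e6, 125e6, 250e6, 500e6, 1e9]

def pick_rate(mem_depth, time_per_div, divisions=12):
    """Return the highest available rate that fits the memory depth."""
    window = time_per_div * divisions        # visible time span on screen
    ideal = mem_depth / window               # e.g. 12e6 / 0.6 = 20 MSa/s
    # A higher rate would fill memory before the window ends, so take
    # the largest available rate at or below the ideal one.
    return max(r for r in AVAILABLE_RATES if r <= ideal)

rate = pick_rate(12e6, 50e-3)
print(rate)           # 10000000.0 -> 10 MSa/s, matching the screenshot
print(12e6 / rate)    # 1.2 s of captured record, double the 0.6 s window
```

With 20 MSa/s falling in the gap between 10 and 25 MSa/s, the scope ends up at 10 MSa/s and captures 1.2 s instead of 0.6 s, which matches what the OP is seeing.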

The TFT's visible area is 12 divisions, but the captured length is more. Stop the scope and zoom out and you will see the real sampling length. (I do not have this Rigol, so I do not know the exact number of divisions.) This is quite common practice with many scopes.