Consider the situation in Figure 1. Each period produces a result, either G or L. These results are analyzed over the averaging interval. If the probability of obtaining a G is p, then the probability of obtaining an L is 1 - p. For simplicity it is assumed that p = 1 - p = 0.5.

For measuring oscillator stability, the statistic of interest is not how many Gs or Ls appear in an averaging interval, but the difference between these counts. The process represented by an averaging interval is well known: it is a sequence of Bernoulli trials. The expected value of the difference between the number of Gs and Ls is 2mp - m = m(2p - 1) = 0. [Note: the referenced web page uses n as the number of trials, whereas here that value is m; n is used here to represent the number of averaging intervals. Also, the problem solved there is stated in terms of successes and failures. The logic is exactly the same: simply substitute L for success and G for failure.]

The variance of the difference between the two counts in a sequence of m Bernoulli trials (see above reference) is 4mp(1 - p), which equals m when p = 0.5. Notice that the variance depends on m. So, as the value of tau increases, so does the variance.
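As a quick sanity check, here is a minimal Python sketch (the block length m = 100 and the run count are illustrative values of my choosing) that simulates the difference #G - #L over many averaging intervals and compares the sample mean and variance to the formulas m(2p - 1) = 0 and 4mp(1 - p) = m:

```python
import random
import statistics

def difference_samples(m, p=0.5, n_runs=20_000, seed=1):
    """Simulate n_runs sequences of m Bernoulli trials and return
    the difference (#G - #L) for each sequence."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_runs):
        g = sum(1 for _ in range(m) if rng.random() < p)
        diffs.append(2 * g - m)  # #G - #L = g - (m - g)
    return diffs

m = 100
d = difference_samples(m)
print(statistics.mean(d))       # close to m*(2p - 1) = 0
print(statistics.variance(d))   # close to 4*m*p*(1 - p) = m = 100
```

Doubling m roughly doubles the sample variance, which is the tau-dependence described above.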

Your picture of an oscillator has evolved from a Gaussian distribution (in earlier posts) to a binomial distribution, but the distribution for any real oscillator is not purely one or the other. Oscillator noise is complicated -- it cannot, in general, be reduced to a simple Gaussian process whose "jitter" is described by a single parameter such as a standard deviation.

Given the capabilities of computers in the 1960s and 1970s, when the Allan Variance was developed, it was necessary to increase tau in order to obtain long-term measures of clock stability. Today, computers are much more powerful, so it would be interesting to determine the sample_time/tau ratio above which an analyst would be forced to increase tau in order to obtain practical clock-evaluation results. This would, of course, depend on the computer available. However, I would guess most desktop systems these days could analyze a very long data set in a practical amount of time.

Allan Variance calculations are very simple. They do not require much computational power, even when measuring long-term stability. Tau is often increased for long-term measurements because there are hardware advantages (reduced dead time, higher counter resolution, etc.) and it is computationally *convenient* to simply increase tau instead of increasing the number of data points. There are no serious disadvantages to increasing tau instead of increasing the number of data points.
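To illustrate how little computation is involved, here is a minimal sketch of a non-overlapping Allan deviation calculation in Python. The function name, and the assumption of evenly spaced, dead-time-free fractional-frequency data, are mine for illustration:

```python
import math
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at tau = m * tau0, given
    fractional-frequency samples y taken every tau0 seconds."""
    # average consecutive blocks of m samples -> frequency averaged over tau
    blocks = [sum(y[i * m:(i + 1) * m]) / m for i in range(len(y) // m)]
    # Allan variance is half the mean squared first difference
    d = [(blocks[i + 1] - blocks[i]) ** 2 for i in range(len(blocks) - 1)]
    return math.sqrt(sum(d) / (2 * len(d)))

# simulated white-FM noise: adev should fall as 1/sqrt(tau)
rng = random.Random(42)
y = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
print(allan_deviation(y, 1))    # ~ 1.0
print(allan_deviation(y, 100))  # ~ 0.1
```

The whole computation is a running average plus a first difference -- a few arithmetic operations per data point -- which is why raw computational cost is rarely the reason for increasing tau.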

An example: Let's say you want to measure the Allan Variance from tau = 1 s to tau = 10^5 s. To get a reliable Allan Variance at 10^5 s, you will need 10^6 s of data.

Method 1) Collect 10^6 counter readings with a gate time of 1 s. The total number of data points is 10^6, and the total acquisition time is about 278 hours.

Method 2) Collect 10

^{3} counter readings with a gate time of 1 s, then collect 10

^{4} counter readings with a gate time of 100 s. The total number of data points is about 100x less than method 1, the dead time is about 100x less than method 1, counter resolution is improved, but the total acquisition time is only about 17 minutes longer. Resulting Allan Variance will be consistent with that attained by method 1.
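Under idealized assumptions -- no dead time, a counter whose 100 s gate reports the true 100 s block-average frequency, simulated white-FM noise, and a scaled-down hypothetical data set rather than the one in the example -- a short Python sketch makes the consistency claim concrete: the tau = 100 s point computed from 1 s readings matches the one computed from 100 s readings.

```python
import math
import random

def adev(y, m):
    """Non-overlapping Allan deviation at tau = m * tau0 from
    fractional-frequency samples y taken every tau0 seconds."""
    blocks = [sum(y[i * m:(i + 1) * m]) / m for i in range(len(y) // m)]
    d = [(blocks[i + 1] - blocks[i]) ** 2 for i in range(len(blocks) - 1)]
    return math.sqrt(sum(d) / (2 * len(d)))

rng = random.Random(7)
# hypothetical white-FM oscillator, 1 s samples, 10^5 s of data (scaled down)
y1 = [rng.gauss(0.0, 1e-11) for _ in range(100_000)]

# Method 1: the tau = 100 s point computed directly from the 1 s readings
a1 = adev(y1, 100)

# Method 2: an ideal counter with a 100 s gate reports the block averages
y100 = [sum(y1[i * 100:(i + 1) * 100]) / 100 for i in range(len(y1) // 100)]
a2 = adev(y100, 1)

print(a1, a2)  # the two estimates agree
```

With an ideal averaging counter the two methods reduce to the same arithmetic; in practice, Method 2's longer gate also buys the resolution and dead-time advantages mentioned above.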