I finished reading section 8 of the Technical Report, which is the material germane to this thread. It is a good, clear explanation of the problem and approaches, and I found it very useful.
However, there is something missing from it, and parenthetically from all of the material I have read so far. Specifically, how do you use the Allan variance (or its square root, the Allan deviation) once you have computed it? I imagine someone setting up a system might want to know if a particular oscillator is suitable for use in that system. For example, a hobbyist may wish to measure signal characteristics of some radio transmission system he is building. He wants to use a 10 MHz reference clock, passed through a distribution amplifier, to synchronize the instruments he is using to test his system.
I understand from the reading that I have done that computing the traditional variance of fractional frequency data doesn't work because it doesn't converge as the sample size increases. That is one of the reasons Allan created his variance measure. But with the traditional variance (actually its square root, the standard deviation), if you assume a Gaussian distribution for the fractional frequency process, 99.7% of the values lie within a 3-sigma band around the mean. So, if the designer had a traditional variance to work with, he could look at the range of frequencies within the 3-sigma band and decide whether that sort of jitter was acceptable for testing his system.
So far, I have found nothing like this for the Allan variance. How do you use it in a practical situation?
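For anyone following along, here is a minimal sketch (my own, not from the Technical Report) of how the overlapping Allan deviation is computed from evenly spaced fractional-frequency samples. The data and noise level below are made up for illustration:

```python
import numpy as np

def allan_deviation(y, tau0, m):
    """Overlapping Allan deviation of fractional-frequency data y.

    y    : fractional frequency samples, measured every tau0 seconds
    tau0 : basic sample interval in seconds
    m    : averaging factor; the result is sigma_y(tau) at tau = m * tau0
    """
    y = np.asarray(y, dtype=float)
    # Average the data in overlapping blocks of m samples.
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    # Allan variance: half the mean squared difference of
    # adjacent (m-spaced) overlapping block averages.
    d = ybar[m:] - ybar[:-m]
    return np.sqrt(0.5 * np.mean(d**2))

# Synthetic white-FM noise at the 1e-11 level (made-up data):
rng = np.random.default_rng(0)
y = 1e-11 * rng.standard_normal(100_000)
for m in (1, 10, 100):
    print(m, allan_deviation(y, 1.0, m))
```

For pure white frequency noise, the printed values should fall off roughly as 1/sqrt(tau) as m grows; other noise types (flicker, random walk) produce different slopes, which is how the log-log ADEV plot identifies them.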
My "I read it on the internet" understanding of the Allan variance is that it simply treats the variance as a random variable. So one then has the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties attach slightly different meanings and scaling conventions to the same words.
https://fenix.tecnico.ulisboa.pt/downloadFile/3779572188799/Tn296.pdf "Characterization of Clocks & Oscillators"
I'm guessing a lot of people on this forum are familiar with GPS-disciplined oscillators, where a local oscillator is locked to a signal extracted from GPS satellites. The big advantage of these devices is the ability to impose the long-term stability of the GPS (Cesium) clocks onto a less expensive, local clock. But how does one marry the local clock to the GPS signal? In other words, how quickly or how tightly does one lock the local oscillator onto the GPS signal? The answer to that question depends on the characteristics of both the local oscillator and the GPS signal, and the Allan Variance gives you those characteristics. For instance, the Allan Variance of a simple TCXO might indicate it should be locked onto the GPS signal with a short time constant, whereas the Allan Variance of a Rb oscillator might indicate that it should be locked onto the GPS signal with a relatively long time constant.
This is all very reasonable, but doesn't solve the problem I posed. Let me try again.
I am a hobbyist who is building some circuit or system. I want to test that system using different pieces of equipment (e.g., oscilloscope, spectrum analyzer, frequency counter) simultaneously. I have two oscillators I can use to synchronize the equipment (through a distribution amp), say a rubidium oscillator and an OCXO. Which one do I use? From just playing around with my own rubidium and OCXO oscillators, it appears to me that the rubidium has good long-term stability, but not so good short-term stability. On the other hand, the OCXO has good short-term stability and not as good long-term stability.
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
This strikes me as an interesting application for a sparse L1 pursuit.
My "I read it on the internet" understanding of the Allan variance is that it simply treats the variance as a random variable. So one then has the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties attach slightly different meanings and scaling conventions to the same words.
No, you missed the mark on this one.
This is all very reasonable, but doesn't solve the problem I posed. Let me try again.
I am a hobbyist who is building some circuit or system. I want to test that system using different pieces of equipment (e.g., oscilloscope, spectrum analyzer, frequency counter) simultaneously. I have two oscillators I can use to synchronize the equipment (through a distribution amp), say a rubidium oscillator and an OCXO. Which one do I use? From just playing around with my own rubidium and OCXO oscillators, it appears to me that the rubidium has good long-term stability, but not so good short-term stability. On the other hand, the OCXO has good short-term stability and not as good long-term stability.
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
You have the information. Does your system require better short-term or long-term stability? Only you can answer that question.
I finished reading section 8 of the Technical Report, which is the material germane to this thread. It is a good, clear explanation of the problem and approaches, and I found it very useful. However, there is something missing from it, and parenthetically from all of the material I have read so far. Specifically, how do you use the Allan variance (or its square root, the Allan deviation) once you have computed it?
You seem to follow the "just try it and see what happens" school of design. That's fair. A lot of engineering, perhaps most, is done that way and I won't criticize it. But I am curious about the precise differences between various hobbyist oscillators; specifically their stability. That is why I am doing this project.
In order to understand those differences I want to understand how established measures of oscillator stability relate to practical questions, such as "if I use this oscillator, what are the probable bounds of its jitter? Is it likely that the frequency of this particular oscillator will vary by 10 Hz, 100 Hz, 1 kHz over a 2-hour period (given some parameters such as temperature, power line ripple, ...)?" Without understanding how Allan variance relates to this question, why should I be interested in it?
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
My "I read it on the internet" understanding of the Allan variance is that it simply treats the variance as a random variable. So one then has the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties attach slightly different meanings and scaling conventions to the same words.
No, you missed the mark on this one.
Which part? Perhaps you would be so kind as to explain in more detail.
... the Allan variance is that it simply treats the variance as a random variable.
One can take the variance and compute the FFT to look for periodicities in the variance.
I rather suspect that this is a lexical minefield where different specialties attach slightly different meanings and scaling conventions to the same words.
E.g., if you were looking to discipline a crystal oscillator with a rubidium standard, you might end up with a plot like this one. At taus below one second, the rubidium standard is noisier than the crystal oscillator. If you used a loop bandwidth much higher than 1 Hz, you would stabilize the crystal oscillator adequately but you would also lose its superior short-term noise performance. If you used a time constant much slower than that, though, there would be a big hump in the plot where the oscillator wanders around over intervals of a few seconds before being stabilized at longer taus. It will never be perfect, so your goal is to avoid corrupting the short-term performance while minimizing the hump. (You can see this optimization process at work in the plot of the rubidium standard by itself, in fact, since that's the exact problem its designers were faced with.)
One of the points Rubiola raises in his book is that the reason we use ADEV is because true frequency-domain analysis was computationally difficult back in the 1960s. You can go from an FFT to an ADEV plot, at least in theory, but not vice-versa. Ideally, the problem outlined above would be solved in the traditional phase-noise crossover sense.
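The crossover idea above can be sketched numerically. The power-law models below are hypothetical stand-ins for an OCXO and a rubidium standard, not measurements of any real unit:

```python
import numpy as np

# Hypothetical power-law ADEV models (not measured data): an OCXO that
# is quiet at short tau but drifts, and an Rb standard that is noisier
# short-term but far more stable long-term.
taus = np.logspace(-2, 4, 200)                      # 10 ms .. ~3 h
adev_ocxo = 1e-12 / np.sqrt(taus) + 1e-14 * taus    # white FM + drift
adev_rb   = 2e-11 / np.sqrt(taus) + 1e-15 * taus

# The crossover: the shortest tau at which the reference (Rb) is at
# least as stable as the oscillator being disciplined (OCXO).
cross = taus[np.argmax(adev_rb <= adev_ocxo)]
print(f"crossover near tau = {cross:.1f} s")
```

With these made-up coefficients the crossover lands at a few minutes; the rule of thumb is to place the loop time constant near that tau, since a faster loop imports the reference's short-term noise and a slower one lets the OCXO's drift show through as the hump described above.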
You seem to follow the "just try it and see what happens" school of design. That's fair. A lot of engineering, perhaps most, is done that way and I won't criticize it. But I am curious about the precise differences between various hobbyist oscillators; specifically their stability. That is why I am doing this project.
In order to understand those differences I want to understand how established measures of oscillator stability relate to practical questions, such as "if I use this oscillator, what are the probable bounds of its jitter? Is it likely that the frequency of this particular oscillator will vary by 10 Hz, 100 Hz, 1 kHz over a 2-hour period (given some parameters such as temperature, power line ripple, ...)?" Without understanding how Allan variance relates to this question, why should I be interested in it?
E.g., if you were looking to discipline a crystal oscillator with a rubidium standard, you might end up with a plot like this one. At taus below one second, the rubidium standard is noisier than the crystal oscillator. If you used a loop bandwidth much higher than 1 Hz, you would stabilize the crystal oscillator adequately but you would also lose its superior short-term noise performance. If you used a time constant much slower than that, though, there would be a big hump in the plot where the oscillator wanders around over intervals of a few seconds before being stabilized at longer taus. It will never be perfect, so your goal is to avoid corrupting the short-term performance while minimizing the hump. (You can see this optimization process at work in the plot of the rubidium standard by itself, in fact, since that's the exact problem its designers were faced with.)
One of the points Rubiola raises in his book is that the reason we use ADEV is because true frequency-domain analysis was computationally difficult back in the 1960s. You can go from an FFT to an ADEV plot, at least in theory, but not vice-versa. Ideally, the problem outlined above would be solved in the traditional phase-noise crossover sense.
You make many good points in this post, but you are focusing on oscillator design, not oscillator use. I wouldn't even think of designing an oscillator (other than, perhaps, a simple Colpitts oscillator for some throwaway project) because I am not an experienced oscillator designer and, more importantly, you can buy simple oscillator modules very cheaply (I just bought seven 10 MHz oscillator modules from Jameco for $10). My interest is in using existing oscillators. So, as an example, I have a 10 MHz rubidium oscillator (an FEI FE-5650) and two 10 MHz OCXOs (one using a Bliley module and the other an Isotemp module). I built the enclosures for them, and for one I designed a simple filter to turn the square wave into a sine wave (which doesn't work very well). However, the core oscillators are off-the-shelf.
I bought the core oscillators on eBay. All were rescued from obsolete equipment and are probably 20 years old. That means they have aged. I would like to know how they compare to new core modules. Do they conform to the aging parameters in their data sheets? Has their performance degraded in a way that dramatically affects their jitter characteristics? I would imagine others might like to know this as well, since hobbyists rarely buy new rubidium or ocxo modules. Most, I would imagine, got them from eBay as I did.
At this point, I have no idea what you're asking. Maybe you can be more specific about what you want to do and what you want to know.
At this point, I have no idea what you're asking. Maybe you can be more specific about what you want to do and what you want to know.
See my post to KE5FX:
"I bought the core oscillators on eBay. All were rescued from obsolete equipment and are probably 20 years old. That means they have aged. I would like to know how they compare to new core modules. Do they conform to the aging parameters in their data sheets? Has their performance degraded in a way that dramatically affects their jitter characteristics? I would imagine others might like to know this as well, since hobbyists rarely buy new rubidium or ocxo modules. Most, I would imagine, got them from eBay as I did."
Without continuous monitoring you can't say much about the aging process, but you can certainly characterize the oscillators' current performance at both short- and long-term intervals. Phase noise (PN) and ADEV are the core metrics needed for this.
Ideally, the manufacturers of your used/surplus oscillators will have specified the performance in terms of Allan deviation, phase noise, or both. So all you need is a reference with known performance to compare them to, and the necessary instrumentation to make the measurements. Now you have the classic man-with-two-clocks problem, of course. There is a reason why my forum avatar is a rabbit seen in infrared light, as might be encountered by a well-equipped explorer in the twisty passages of a deep, dark hole.
KE5FX gave you the answer ... measure the Allan Variance.
KE5FX gave you the answer ... measure the Allan Variance.
We are going in circles. How does the Allan Variance tell me anything practical about jitter?
KE5FX gave you the answer ... measure the Allan Variance.
We are going in circles. How does the Allan Variance tell me anything practical about jitter?
That is what it does.
Let's use a concrete example. The FEI FE-5650 spec gives an Allan Variance of 1.4×10^-11/sqrt(tau) when the unit is new. Using that number (if you need other information, the URL to the spec is in my post to KE5FX), tell me how to determine that the frequency of the unit will not vary by more than x% (you choose x) over a two-hour period with a probability of p (you choose p).
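To make the challenge concrete, here is my own back-of-the-envelope attempt, under assumptions I am not sure the spec justifies: that the 1/sqrt(tau) law means pure white frequency noise, for which the Allan deviation coincides with the classical standard deviation of the tau-averaged fractional frequency (so Gaussian 3-sigma reasoning applies), and that drift, aging, and environmental effects are ignored:

```python
import math

# Spec figure as I read it: sigma_y(tau) = 1.4e-11 / sqrt(tau),
# the signature of white frequency noise.
def sigma_y(tau):
    return 1.4e-11 / math.sqrt(tau)

f0 = 10e6        # nominal output frequency, Hz
tau = 2 * 3600   # observation interval, s

# For white FM, ADEV equals the classical standard deviation of the
# average fractional frequency over tau, so a 3-sigma band should
# hold ~99.7% of outcomes (under the stated assumptions only).
sigma_f = sigma_y(tau) * f0      # standard deviation in Hz
print(f"sigma_y({tau} s) = {sigma_y(tau):.2e}")
print(f"3-sigma band on the 2-hour average: +/- {3 * sigma_f * 1e6:.1f} microhertz")
```

If this reasoning is right, the random-noise contribution over two hours is in the microhertz range at 10 MHz, and the practically dominant effects would be the drift and aging terms the 1/sqrt(tau) law does not cover. I would welcome correction on any step.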
An oscillator is mathematically characterized as:
v(t) = [V0 + e(t)] * cos[w0*t + phi(t)], where V0 is the base oscillator amplitude, w0 is the base oscillator frequency (in radians/sec), and both e(t) and phi(t) are stochastic processes that respectively add amplitude noise and phase noise to the oscillator's output.
For any practical oscillator, the stochastic processes e(t) and phi(t) are cyclostationary, which means their moments (e.g., mean and variance) are not constant (as they would be for a stationary process) but periodic: they change over time, yet repeat over some timeframe.
My problem is how to properly sample cyclostationary processes such as e(t) and phi(t).
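For what it's worth, the model above can be simulated. The noise levels below are arbitrary placeholders, and phi(t) is taken as a simple random walk (integrated white FM) rather than a genuinely cyclostationary process, so this is only a starting point:

```python
import numpy as np

# Toy discretization of v(t) = [V0 + e(t)] * cos(w0*t + phi(t)),
# with e(t) white amplitude noise and phi(t) a phase random walk.
rng = np.random.default_rng(1)

V0, f0 = 1.0, 10e6          # volts, hertz
fs = 80e6                   # sample rate, Hz (8x oversampled)
n = 10_000
t = np.arange(n) / fs

e = 1e-3 * rng.standard_normal(n)                # amplitude noise (arbitrary level)
phi = np.cumsum(1e-4 * rng.standard_normal(n))   # phase random walk (arbitrary level)
v = (V0 + e) * np.cos(2 * np.pi * f0 * t + phi)
print(v[:4])
```

Sampling questions then become explicit: fs must satisfy Nyquist for f0 plus the noise sidebands of interest, and the record length sets the longest noise period you can hope to resolve.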