The variability of the phase is called phase noise, and related to this is what is called the Allan variance. You will find a lot of information on techniques for measuring phase noise in older HP documents.
If I understand correctly, you want to measure deviations in amplitude and phase for a nominally ideal 10MHz sinusoid?
If you have a stable (enough) 10MHz reference, why not just multiply the two signals together which will give you the sum and difference of the two (acting as a demodulator). The difference component contains information about the amplitude and phase differences. This concept is used for phase-sensitive detection (such as for lock-in amplifiers).
I'm not sure it is necessary to delve into stochastic processes etc.
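The multiply-and-low-pass idea can be sketched in a few lines. This is only a simulation, not a recipe: the sample rate, modulation depths, and filter length are all invented for illustration, and a real lock-in would use a proper low-pass filter rather than a moving average.

```python
import numpy as np

fs = 100e6                         # simulated sample rate, Hz (assumption)
f0 = 10e6                          # nominal carrier, Hz
t = np.arange(0, 1e-3, 1 / fs)

# Device under test: 10 MHz with small amplitude and phase deviations
am = 1.0 + 0.01 * np.sin(2 * np.pi * 1e3 * t)        # amplitude wobble
pm = 0.05 * np.sin(2 * np.pi * 2e3 * t)              # phase wobble, rad
sig = am * np.cos(2 * np.pi * f0 * t + pm)

# Multiply by quadrature copies of the reference (I/Q demodulation)
i = sig * np.cos(2 * np.pi * f0 * t)
q = -sig * np.sin(2 * np.pi * f0 * t)

# Crude low-pass (moving average) to discard the 2*f0 sum component
def lowpass(x, n=1000):
    return np.convolve(x, np.ones(n) / n, mode="valid")

i_lp, q_lp = lowpass(i), lowpass(q)
amplitude = 2 * np.hypot(i_lp, q_lp)   # recovered envelope, tracks `am`
phase = np.arctan2(q_lp, i_lp)         # recovered phase deviation, tracks `pm`
```

The difference (baseband) component survives the filter while the sum component at 2×10 MHz is rejected, which is exactly the phase-sensitive-detection trick described above.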
The variability of the phase is called phase noise, and related to this is what is called the Allan variance. You will find a lot of information on techniques for measuring phase noise in older HP documents.
In fact there are more modern versions of the Allan variance (e.g., the modified Allan variance, the Hadamard variance) that provide better confidence intervals than the Allan variance.
I have read quite a few NBS/NIST technical reports about processing the sample data of an oscillator once you have obtained it, but none of them (except perhaps the one referenced by Tomato above - I haven't completely read its relevant sections) discuss how to select the sample interval during which you make N measurements.
Thanks for your comment. However, you are describing a particular technique to make a measurement. I have read quite a bit about that topic and have a good handle on how to do it. However, in order to obtain statistically valid results you need to make multiple measurements and then process them. My question is about the interval over which these multiple measurements are made.
However, the stochastic processes e(t) and phi(t) are not stationary ... these processes normally have a weaker property known as cyclostationarity.
You're going to have to convince me of the validity of this as a general statement. The only non-stationary noise I've ever observed in an oscillator involved failing components or an improper test setup, and I've never observed cyclo-stationary noise in an oscillator.
Can you give an example of a cyclostationary noise process in an oscillator?
I'm just quoting what I have read here (http://www.designers-guide.org/theory/cyclo-preso.pdf) (see slide 4), here (https://books.google.com/books?id=3-sF7xF-92oC&pg=PA25&lpg=PA25&dq=oscillator+cyclostationarity&source=bl&ots=Z_ZqGW3HbM&sig=3NsBZMH1OQ5GwjJxzOWxex5nh_8&hl=en&sa=X&ved=0ahUKEwjg5KP6ucfbAhUEFXwKHencD48Q6AEIWzAG#v=onepage&q=oscillator%20cyclostationarity&f=false), here (https://kenkundert.com/docs/msicd99.pdf) (2nd paragraph in section 1), and here (https://chic.caltech.edu/wp-content/uploads/2013/05/general.pdf) (1st partial paragraph under figure 3).
Intuitively, it makes sense. Consider just one parameter that controls oscillator noise - temperature - and for argument's sake ignore all others. For a given temperature, the noise should be pretty much identical each time the environment cycles through that value.
The second and fourth links are discussing noise that is cyclic at the period of the oscillator, which would be 100 ns for a 10 MHz oscillator. I assume you will be making measurements on time scales many orders of magnitude larger than this, so the cyclo-stationary aspect is not important.
Following up, let me ask your advice on averaging and sampling times for short-term stability. To define short-term, I was thinking that most hobbyists use a time standard (e.g., OCXO or rubidium oscillator) to synchronize test equipment like frequency counters, oscilloscopes, spectrum analyzers, etc. when conducting measurement experiments. So, my first guess is they would be interested in short-term stability on the order of several minutes to several hours. So, following your advice in a previous post, I should make the averaging interval on the order of a minute and the sampling interval on the order of several hours. Have I understood you correctly?
Get a copy of "Random Data" by Bendat and Piersol. They treat the data analysis very thoroughly, including how to deal with non-stationary series. That's been my go-to for weird questions that walked into my office for 30 years and 3 editions. The 4th is the final one, as Piersol passed away.
Fundamentally you take long samples, window them, and average. The details depend upon what you want to characterize. In reflection seismology one is usually doing this to characterize attenuation, so one is averaging amplitude spectra.
The book you're looking for may be this one (https://www.amazon.com/Frequency-Stability-Oscillators-Cambridge-Engineering/dp/052115328X) by Rubiola.
NBS Monograph 140 (Time and Frequency: Theory and Fundamentals)
A few docs that may be of interest, if you haven't read them already:
http://tycho.usno.navy.mil/ptti/1985papers/Vol%2017_05.pdf "Characterization, Optimum Estimation, and Time Prediction of Precision Clocks"
https://fenix.tecnico.ulisboa.pt/downloadFile/3779572188799/Tn296.pdf "Characterization of Clocks & Oscillators" Covers portions of Monograph 140 mentioned above by Tomato. Co-authored by Allan.
http://www.photonics.umbc.edu/Menyuk/Phase-Noise/rutman_ProcIEEE_910601.pdf "Characterization of Frequency Stability in Precision Frequency Sources" by J. Rutman & F. L. Walls
I finished reading section 8 of the Technical Report, which is the material germane to this thread. It is a good clear explanation of the problem/approaches and I found it very useful.
However, there is something missing from it, and parenthetically from all of the material I have read so far. Specifically, how do you use the Allan variance (or its square root, the Allan deviation) once you have computed it? I imagine someone setting up a system might want to know if a particular oscillator is suitable for use in that system. For example, a hobbyist may wish to measure signal characteristics of some radio transmission system he is building. He wants to use a 10 MHz reference clock passed through a distribution amplifier to synchronize the instruments he is using to test his system.
I understand from the reading that I have done that computing the traditional variance of fractional frequency data doesn't work because it doesn't converge as the sample size increases. That is one of the reasons Allan created his variance measure. But with the traditional variance (actually its square root, the standard deviation), if you assume a Gaussian distribution of the fractional frequency process, 99.7% of the values lie within a 3-sigma band around the mean. So, if the designer had a traditional variance to work with, he could look at the range of frequencies within the 3-sigma band and decide whether that sort of jitter was acceptable for testing his system.
So far, I have found nothing like this for the Allan variance. How do you use it in a practical situation?
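The convergence problem described above is easy to demonstrate numerically. A minimal sketch, assuming the fractional-frequency data are dominated by random-walk FM noise (one of the divergent noise types found in real oscillators); the noise level and sample counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated fractional-frequency series with random-walk FM noise
y = np.cumsum(rng.normal(0.0, 1e-12, 1_000_000))

# The classical standard deviation keeps growing with the sample size...
for n in (10_000, 100_000, 1_000_000):
    print(n, np.std(y[:n]))

# ...while the two-sample (Allan) deviation at a fixed tau stays put
def adev1(y):
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

for n in (10_000, 100_000, 1_000_000):
    print(n, adev1(y[:n]))
```

The first loop's numbers grow roughly as the square root of the record length, so no "3-sigma band" can be quoted; the second loop's numbers settle to a stable value, which is the point of the two-sample statistic.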
My "I read it on the internet" understanding of the Allan variance is it simply is treating the variance as a random variable. So one has then the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties define slightly different meaning and scaling conventions to the same words.
https://fenix.tecnico.ulisboa.pt/downloadFile/3779572188799/Tn296.pdf "Characterization of Clocks & Oscillators"
I'm guessing a lot of people on this forum are familiar with GPS-disciplined oscillators, where a local oscillator is locked to a signal extracted from GPS satellites. The big advantage of these devices is the ability to impose the long term stability of the GPS (Cesium) clocks onto a less expensive, local clock. But, how does one marry the local clock to the GPS signal? In other words, how quickly or how tightly does one lock the local oscillator onto the GPS signal? The answer to that question depends on the characteristics of both the local oscillator and the GPS signal, and the Allan Variance gives you those characteristics. For instance, the Allan Variance of a simple TCXO might indicate it should be locked onto the GPS signal with a short time constant, whereas the Allan Variance of a Rb oscillator might indicate that it should be locked onto the GPS signal with a relatively long time constant.
This is all very reasonable, but doesn't solve the problem I posed. Let me try again.
I am a hobbyist who is building some circuit or system. I want to test that system using different pieces of equipment (e.g., oscilloscope, spectrum analyzer, frequency counter) simultaneously. I have two oscillators I can use to synchronize the equipment (through a distribution amp), say a rubidium oscillator and an OCXO. Which one do I use? From just playing around with my own rubidium and OCXO oscillators, it appears to me that the rubidium has good long-term stability, but not so good short-term stability. On the other hand, the OCXO has good short-term stability and not as good long-term stability.
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
This strikes me as an interesting application for a sparse L1 pursuit.
My "I read it on the internet" understanding of the Allan variance is it simply is treating the variance as a random variable. So one has then the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties define slightly different meaning and scaling conventions to the same words.
No, you missed the mark on this one.
This is all very reasonable, but doesn't solve the problem I posed. Let me try again.
I am a hobbyist who is building some circuit or system. I want to test that system using different pieces of equipment (e.g., oscilloscope, spectrum analyzer, frequency counter) simultaneously. I have two oscillators I can use to synchronize the equipment (through a distribution amp), say a rubidium oscillator and an OCXO. Which one do I use? From just playing around with my own rubidium and OCXO oscillators, it appears to me that the rubidium has good long-term stability, but not so good short-term stability. On the other hand, the OCXO has good short-term stability and not as good long-term stability.
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
You have the information. Does your system require better short-term or long-term stability? Only you can answer that question.
I finished reading section 8 of the Technical Report, which is the material germane to this thread. It is a good, clear explanation of the problem/approaches and I found it very useful. However, there is something missing from it, and parenthetically from all of the material I have read so far. Specifically, how do you use the Allan variance (or its square root, the Allan deviation) once you have computed it?
You seem to follow the "just try it and see what happens" school of design. That's fair. A lot of engineering, perhaps most, is done that way and I won't criticize it. But I am curious about the precise differences between various hobbyist oscillators; specifically their stability. That is why I am doing this project.
In order to understand those differences I want to understand how established measures of oscillator stability relate to practical questions, such as "if I use this oscillator, what are the probable bounds of its jitter? Is it likely that the frequency of this particular oscillator will vary by 10 Hz, 100 Hz, or 1 kHz over a 2-hour period (given some parameters such as temperature, power line ripple, ...)?" Without understanding how the Allan variance relates to this question, why should I be interested in it?
What I need is information on the stability of these two oscillators that will help me choose which one to use. I am not designing oscillators, I am using them. Perhaps naively, I presumed that the stability measures now in use would provide information so I can make an informed choice. Did I presume incorrectly?
My "I read it on the internet" understanding of the Allan variance is it simply is treating the variance as a random variable. So one has then the mean and variance of the variance. One can take the variance and compute the FFT to look for periodicities in the variance. I *think* that would be the cyclostationarity, but I've never encountered that term before. So I'm just guessing.
I rather suspect that this is a lexical minefield where different specialties define slightly different meaning and scaling conventions to the same words.
No, you missed the mark on this one.
Which part? Perhaps you would be so kind as to explain in more detail.
... the Allan variance is it simply is treating the variance as a random variable.
One can take the variance and compute the FFT to look for periodicities in the variance.
I rather suspect that this is a lexical minefield where different specialties define slightly different meaning and scaling conventions to the same words.
E.g., if you were looking to discipline a crystal oscillator with a rubidium standard, you might end up with a plot like this one. At taus below one second, the rubidium standard is noisier than the crystal oscillator. If you used a loop bandwidth much higher than 1 Hz, you would stabilize the crystal oscillator adequately but you would also lose its superior short-term noise performance. If you used a time constant much slower than that, though, there would be a big hump in the plot where the oscillator wanders around over intervals of a few seconds before being stabilized at longer taus. It will never be perfect, so your goal is to avoid corrupting the short-term performance while minimizing the hump. (You can see this optimization process at work in the plot of the rubidium standard by itself, in fact, since that's the exact problem its designers were faced with.)
One of the points Rubiola raises in his book is that the reason we use ADEV is because true frequency-domain analysis was computationally difficult back in the 1960s. You can go from an FFT to an ADEV plot, at least in theory, but not vice-versa. Ideally, the problem outlined above would be solved in the traditional phase-noise crossover sense.
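The FFT-to-ADEV direction mentioned above goes through the standard transfer-function relation sigma_y^2(tau) = 2 * integral of S_y(f) * sin^4(pi*f*tau)/(pi*f*tau)^2 df. A rough numerical sketch (the grid limits, the noise level h0, and the white-FM test case are arbitrary choices made only to check the code against the known closed form):

```python
import numpy as np

def adev_from_psd(f, S_y, tau):
    """Allan deviation from a one-sided fractional-frequency PSD S_y(f), via
    sigma_y^2(tau) = 2 * integral S_y(f) * sin(pi f tau)^4 / (pi f tau)^2 df."""
    x = np.pi * f * tau
    kernel = np.sin(x) ** 4 / x ** 2
    df = f[1] - f[0]                      # assumes a uniform frequency grid
    return np.sqrt(2.0 * np.sum(S_y * kernel) * df)

# Sanity check against the white-FM closed form: sigma_y(tau) = sqrt(h0/(2*tau))
h0 = 1e-22                                # arbitrary white-FM noise level
f = np.linspace(1e-3, 1e4, 2_000_000)
for tau in (1.0, 10.0):
    print(tau, adev_from_psd(f, h0 * np.ones_like(f), tau), np.sqrt(h0 / (2 * tau)))
```

Going the other way, from an ADEV plot back to a PSD, is the direction that is not possible in general, which is Rubiola's point.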
You seem to follow the "just try it and see what happens" school of design. That's fair. A lot of engineering, perhaps most, is done that way and I won't criticize it. But I am curious about the precise differences between various hobbyist oscillators; specifically their stability. That is why I am doing this project.
In order to understand those differences I want to understand how established measures of oscillator stability relate to practical questions, such as "if I use this oscillator, what are the probable bounds of its jitter? Is it likely that the frequency of this particular oscillator will vary by 10 Hz, 100 Hz, or 1 kHz over a 2-hour period (given some parameters such as temperature, power line ripple, ...)?" Without understanding how the Allan variance relates to this question, why should I be interested in it?
E.g., if you were looking to discipline a crystal oscillator with a rubidium standard, you might end up with a plot like this one. At taus below one second, the rubidium standard is noisier than the crystal oscillator. If you used a loop bandwidth much higher than 1 Hz, you would stabilize the crystal oscillator adequately but you would also lose its superior short-term noise performance. If you used a time constant much slower than that, though, there would be a big hump in the plot where the oscillator wanders around over intervals of a few seconds before being stabilized at longer taus. It will never be perfect, so your goal is to avoid corrupting the short-term performance while minimizing the hump. (You can see this optimization process at work in the plot of the rubidium standard by itself, in fact, since that's the exact problem its designers were faced with.)
One of the points Rubiola raises in his book is that the reason we use ADEV is because true frequency-domain analysis was computationally difficult back in the 1960s. You can go from an FFT to an ADEV plot, at least in theory, but not vice-versa. Ideally, the problem outlined above would be solved in the traditional phase-noise crossover sense.
You make many good points in this post, but you are focusing on oscillator design, not oscillator use. I wouldn't even think of designing an oscillator (other than, perhaps, a simple Colpitts oscillator for some throwaway project) because I am not an experienced oscillator designer and, more importantly, you can buy simple oscillator modules very cheaply (I just bought seven 10 MHz oscillator modules from Jameco for $10). My interest is using existing oscillators. So, as an example, I have both a 10 MHz rubidium oscillator (an FEI FE-5650) and two 10 MHz OCXOs (one using a Bliley module and the other an Isotemp module). I built the enclosures for them, and for one I designed a simple filter to turn the square wave into a sine wave (which doesn't work very well). However, the core oscillators are off-the-shelf.
I bought the core oscillators on eBay. All were rescued from obsolete equipment and are probably 20 years old. That means they have aged. I would like to know how they compare to new core modules. Do they conform to the aging parameters in their data sheets? Has their performance degraded in a way that dramatically affects their jitter characteristics? I would imagine others might like to know this as well, since hobbyists rarely buy new rubidium or ocxo modules. Most, I would imagine, got them from eBay as I did.
At this point, I have no idea what you're asking. Maybe you can be more specific about what you want to do and what you want to know.
At this point, I have no idea what you're asking. Maybe you can be more specific about what you want to do and what you want to know.
See my post to KE5FX:
"I bought the core oscillators on eBay. All were rescued from obsolete equipment and are probably 20 years old. That means they have aged. I would like to know how they compare to new core modules. Do they conform to the aging parameters in their data sheets? Has their performance degraded in a way that dramatically affects their jitter characteristics? I would imagine others might like to know this as well, since hobbyists rarely buy new rubidium or ocxo modules. Most, I would imagine, got them from eBay as I did."
Without continuous monitoring you can't say much about the aging process, but you can certainly characterize the oscillators' current performance at both short- and long-term intervals. PN and ADEV are the core metrics needed for this.
Ideally, the manufacturers of your used/surplus oscillators will have specified the performance in terms of Allan deviation, phase noise, or both. So all you need is a reference with known performance to compare them to, and the necessary instrumentation to make the measurements. Now you have the classic man-with-two-clocks problem, of course. There is a reason why my forum avatar is a rabbit seen in infrared light, as might be encountered by a well-equipped explorer in the twisty passages of a deep, dark hole. :scared:
KE5FX gave you the answer ... measure the Allan Variance.
KE5FX gave you the answer ... measure the Allan Variance.
We are going in circles. How does the Allan Variance tell me anything practical about jitter?
KE5FX gave you the answer ... measure the Allan Variance.
We are going in circles. How does the Allan Variance tell me anything practical about jitter?
That is what it does.
Let's use a concrete example. The FEI FE-5650 spec gives an Allan Variance of 1.4×10^-11/sqrt(tau) when the unit is new. Using that number (if you need other information, the URL to the spec is in my post to KE5FX), tell me how to determine that the frequency of the unit will not vary by more than x% (you choose x) over a two hour period with a probability of p (you choose p).
An oscillator is mathematically characterized as:
v(t) = [V0 + e(t)] * cos[w0*t + phi(t)], where V0 is the base oscillator amplitude, w0 is the base oscillator frequency (in radians/sec), and both e(t) and phi(t) are stochastic processes that respectively add amplitude noise and phase noise to the oscillator's output.
For any practical oscillator, the stochastic processes e(t) and phi(t) are cyclostationary, which means their moments (e.g., mean and variance) are normally not constant (which would be true for a stationary process), but periodic. That means over time they change in value, but are periodic over some timeframe.
My problem is how to properly sample cyclostationary processes such as e(t) and phi(t).
Let's use a concrete example. The FEI FE-5650 spec gives an Allan Variance of 1.4×10^-11/sqrt(tau) when the unit is new. Using that number (if you need other information, the URL to the spec is in my post to KE5FX), tell me how to determine that the frequency of the unit will not vary by more than x% (you choose x) over a two hour period with a probability of p (you choose p).
It can't be determined from their specifications, because they do not state the range over which 1.4×10^-11/sqrt(tau) is valid.
Those are very broad statements, especially when the time frame can be from ms to years. Also, there are some assumptions that might be irrelevant, or simply wrong, depending on the situation.
You mentioned surplus oscillators, synchronize multiple instruments, square to sin conversion of 10 MHz, rubidium clock, and so on.
What are you after? What exactly are you trying to do, or to achieve?
What is your measuring setup? What exactly do you plan to measure with the given setup?
Let's use a concrete example. The FEI FE-5650 spec gives an Allan Variance of 1.4×10^-11/sqrt(tau) when the unit is new. Using that number (if you need other information, the URL to the spec is in my post to KE5FX), tell me how to determine that the frequency of the unit will not vary by more than x% (you choose x) over a two hour period with a probability of p (you choose p).
It can't be determined from their specifications, because they do not state the range over which 1.4×10^-11/sqrt(tau) is valid.
Make an assumption and specify how to do the computation. Right now the process is more important than the answer.
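Taking that invitation literally: purely as an illustration of the process, assume the stated 1.4×10^-11/sqrt(tau) law remains valid out to tau = 7200 s (the spec does not promise this) and that the difference between adjacent 2-hour frequency averages is zero-mean Gaussian with a standard deviation equal to the Allan deviation (true only for white FM noise). Then:

```python
import math

f0 = 10e6                              # nominal frequency, Hz
tau = 2 * 3600                         # 2-hour averaging interval, s
adev = 1.4e-11 / math.sqrt(tau)        # spec law, extrapolated (assumption!)

# Under the Gaussian/white-FM assumption, a ~99.7% ("3-sigma") bound on the
# change between adjacent 2-hour frequency averages:
bound_fractional = 3 * adev
bound_hz = bound_fractional * f0
print(bound_hz)                        # roughly 5e-6 Hz at 10 MHz
```

Note what this does and does not say: it bounds only the random change between adjacent averaging intervals, only under those assumptions, and says nothing about deterministic drift or about the tau range over which the spec law actually holds.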
The plot above actually came from an eBay'ed FE-5680 with a ton of hours on it, so you can probably expect yours to perform about the same. As far as the error bars on "probably," that would be left as an exercise for the reader.
My understanding re. ADEV is that it calculates the standard deviation of the oscillator's signal over different time intervals and plots those deviations as a function of the time interval.
My understanding re. ADEV is that it calculates the standard deviation of the oscillator's signal over different time intervals and plots those deviations as a function of the time interval.
No, it's not calculating standard deviations.
Even the man himself says it (http://www.allanstime.com/AllanVariance):

Quote: Brief Explanation
Allan variance equation:
sigma_y^2(tau) = <(y_{i+1} - y_i)^2> / 2   (equation image: http://www.allanstime.com/images/Equations/avar2.gif)
where the variance is taken on the variable y. Each value of y in a set has been averaged over an interval tau, and the ys are taken in an adjacent series, i.e. no delay between the measurements of each. The brackets <> denote the expectation value. For a finite data set, it is taken as the average value of the quantity enclosed in the brackets. The Δy denotes the first finite difference of the measures of y; i.e. if i denotes the ith measurement of y, then Δy = y_{i+1} - y_i. In total, each adjacent finite difference of y is squared, these are then averaged over the data set, and the result is divided by 2. The divide-by-two causes this variance to equal the classical variance if the ys are taken from a random and uncorrelated set, i.e. white noise.
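For a finite data set, that recipe is only a few lines of code. Here is a minimal non-overlapped sketch (the function name and the white-FM sanity check are my own; production tools normally use overlapping estimators for tighter confidence intervals):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapped Allan deviation of fractional-frequency data y,
    where each point is first averaged over m adjacent samples (tau = m*tau0)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)   # block averages over tau
    # sigma_y^2(tau) = 0.5 * <(y_{i+1} - y_i)^2>
    return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

# White-FM sanity check: ADEV should fall as 1/sqrt(tau), i.e. halve
# when the averaging factor quadruples
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1e-11, 100_000)
print(allan_deviation(y, 1), allan_deviation(y, 4))
```

Evaluating this over a range of m and plotting on log-log axes gives the familiar ADEV curve whose slopes identify the dominant noise types.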
My understanding re. ADEV is that it calculates the standard deviation of the oscillator's signal over different time intervals and plots those deviations as a function of the time interval.
No, it's not calculating standard deviations.
Look at the equation. LHS is sigma squared.
The angle brackets are all important. You can't just do the 2-point difference once. It has to be done many times (hundreds or more) to reach the expectation value that Allan specifies. If you perform the calculation enough times then you'll get sigma^2 or sigma (which he calls the deviation); i.e., you are calculating a variance/standard deviation. The only difference between the two is that there'll be a factor-of-2 difference in the gradient on the log-log plot.
The reason you have to perform the calculation over many t (i.e., (y(t+tau)-y(t))^2) is because of the pseudo-cyclostochastic nature of the signal (noise).
My understanding re. ADEV is that it calculates the standard deviation of the oscillator's signal over different time intervals and plots those deviations as a function of the time interval.
No, it's not calculating standard deviations.
One reason Allan came up with the Allan Variance is that the traditional standard deviation of real oscillators diverges as the sample size increases.
I began thinking about how to convert an Allan Variance/Deviation into a probabilistic bounds on frequency during a particular interval. However, it quickly became apparent that this is not a simple problem.
The Allan Variance uses the average frequency, f_i (measured in radians/sec), over an averaging interval tau. These are normalized by the nominal frequency to ensure the Allan Variances of oscillators with different frequencies, w0, are comparable. This normalization produces what is called fractional frequency data, ff_i = f_i/w0. Suppose these samples are generated by a stationary process. Then the standard variance is simple to compute.
However, the Allan Variance is a function of the differenced fractional frequency data: a_i = ff_{i+1} - ff_i. It sums the square of these values and averages the sum (dividing by 2). The time series a_i is autocorrelated. Now, it is possible for a stationary process to produce an autocorrelated series, but this is generally not the case. So, it is possible (likely) that a_i represents samples from a non-stationary process. (Someone who is more knowledgeable than I can correct me on this.) If so, the Allan Variance will not have the same properties as a standard variance. In particular, you can't use the Allan Deviation as you would a standard deviation from some pdf, defining probabilistic bounds based on it.
However, I am not an expert on Allan Variance/Deviation, so maybe there is some way to use it to compute the desired bounds. This is what I have been asking for someone to explain in recent posts.
You're starting to get it. You just need to abandon the idea of computing probabilistic bounds ...
Look at the equation. LHS is sigma squared.
The angle brackets are all important. You can't just do the 2-point difference once. It has to be done many times (hundreds or more) to reach the expectation value that Allan specifies. If you perform the calculation enough times then you'll get sigma^2 or sigma (which he calls the deviation); i.e., you are calculating a variance/standard deviation. The only difference between the two is that there'll be a factor-of-2 difference in the gradient on the log-log plot.
The reason you have to perform the calculation over many t (i.e., (y(t+tau)-y(t))^2) is because of the pseudo-cyclostochastic nature of the signal (noise).
There is no other way to say this -- you are wrong. The Allan Variance is not standard deviations calculated at different times. In simple terms, the standard deviation is calculated from the differences between data points and the mean of the data, whereas the Allan Variance is calculated from differences between data points separated in time. Those calculations are very different. Read one of the cited papers.
You're starting to get it. You just need to abandon the idea of computing probabilistic bounds ...
So far all I get is that the Allan Variance/Deviation is for oscillator designers not oscillator users.
There is no other way to say this -- you are wrong. The Allan Variance is not standard deviations calculated at different times. In simple terms, the standard deviation is calculated from the differences between data points and the mean of the data, whereas the Allan Variance is calculated from differences between data points separated in time. Those calculations are very different. Read one of the cited papers.
That's EXACTLY the point I'm making, and it is borne out in the equations I included and my emphasis on expectation values. I CLEARLY state that you calculate the difference between two points in time, (y(t+tau)-y(t))^2. Certainly not the standard deviation of all the data points between t and t+tau. I also state that you have to do this many times. That's what the < > expressly mean.
The algorithm to achieve this is trivial and actually faster than having to calculate standard deviations across many data points. It's identical to autocorrelation except that the latter uses the product of two points instead of the difference.
What I am really struggling with is the OP says (I think - it's not clear) that they want to use whatever the standard way is to compare the stability of two oscillators and understand why they are different. Allan variance is that way, isn't it? Well, I know how to construct the Allan variance function from raw data in an extremely efficient way (I've been doing it for a long time for autocorrelation which uses the same basic algorithm).
I CLEARLY state that you calculate the difference between two points in time, (y(t+tau)-y(t))^2. Certainly not the standard deviation of all the data points between t and t+tau. I also state that you have to do this many times. That's what the < > expressly mean.
The angle brackets are all important. You can't just do the 2-point difference once. It has to be done many times (hundreds or more) to reach the expectation value that Allan specifies. If you perform the calculation enough times then you'll get sigma^2 or sigma (which he calls the deviation), i.e., you are calculating a variance/standard deviation. The only difference between the two is that there'll be a factor of 2 difference in the gradient on the log-log plot.
The reason you have to perform the calculation over many t (i.e., many evaluations of (y(t+tau)-y(t))^2) is because of the pseudo-cyclostochastic nature of the signal (noise).
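For anyone who wants to try it, here is a minimal numpy sketch of the two-point-difference calculation described above (non-overlapped Allan variance; the function and variable names are my own):

```python
import numpy as np

def avar(y, m=1):
    """Non-overlapped Allan variance of fractional-frequency data y at
    averaging factor m (tau = m * tau0), following the two-point form
    discussed above: sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    n = len(y) // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)  # block averages over tau
    d = np.diff(ybar)                            # ybar_{k+1} - ybar_k
    return 0.5 * np.mean(d * d)                  # expectation over many pairs

# Toy data: white FM noise with sigma = 1e-11
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-11, 100_000)
adev1 = np.sqrt(avar(y, 1))    # ~1e-11
adev10 = np.sqrt(avar(y, 10))  # ~1e-11/sqrt(10): white FM falls as tau^-1/2
```

Note how the loop structure is the same as an autocorrelation: you slide a lag across the series, only with a difference instead of a product at each lag.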
I'd like to suggest a data acquisition arrangement amenable to simple equipment.
On the 4 oscillators to be compared, attach 10 fast comparators (e.g., ADCMP581): four referenced to zero and six pairwise among all the pairings. Multiply the GPSDO 10 MHz output to clock a fast ARM processor. The pairwise comparators will need to have the gain adjusted so that the amplitudes are as closely matched as possible. If matching proves problematic, use two comparators per pair referencing the average of the two signals.
At each sampling clock tick, read the comparators, use the result as the address of a counter in memory, and increment that counter. At the 1 PPS tick, increment the base address of the array of 64 counters.
A potentially useful embellishment would be to add a comparator tracking a noise source and collect a second set of counts when the sample clock tick value of the noise source comparator is positive. That has the virtue that the random sampling precludes aliasing of harmonics produced by the oscillators but without requiring an antialias filter.
Bendat & Piersol and Octave will take care of things from there with ease.
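A toy simulation of that counting loop; the comparator read here is a random stand-in for the real hardware, and the tick count is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def read_comparators():
    # Placeholder for the hardware read: 6 pairwise comparator bits.
    return rng.integers(0, 2, size=6)

counts = np.zeros(64, dtype=np.int64)  # one counter per 6-bit state
for _ in range(10_000):                # sample clock ticks (toy number)
    bits = read_comparators()
    addr = int(np.dot(bits, 1 << np.arange(6)))  # pack bits into 0..63
    counts[addr] += 1
# counts[] now histograms the joint comparator states for this interval;
# at the 1 PPS tick you would move to the next block of 64 counters.
```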
After looking at the papers on cyclostationarity, I got the impression that's more a model for synthesizing noise in Spice than a model for analyzing clock data.
It's a general tool for characterizing oscillators. It's every bit as useful to users as it is to designers.
It's a general tool for characterizing oscillators. It's every bit as useful to users as it is to designers.
You keep making generalized statements without any supporting evidence.
Here is a concrete example of oscillator use that illustrates why Allan Variance is probably not very interesting to, at least some, oscillator users. This example focuses on doppler radar.
An amateur use of doppler radar might be to track model drones in a drone air race. Doppler radar sends out signals at a specific frequency and receives reflected signals in which that frequency is shifted. The frequency shifts are processed and turned into estimates of the drones' velocities. It is important that the frequency source is stable, otherwise the velocity estimates will be erroneous. More to the point, the designer of the doppler radar system wants to know the bounds on the frequency jitter of the source oscillator. From those bounds (which are probabilistic in nature, e.g., 99.7% of the oscillator frequency variation is between w0-b0 and w0+b1), he can produce error bounds on the computed velocities.
The designer couldn't care less what the Allan Variance of the oscillator or the power law exponents of the component noise sources are. He wants to know the jitter bounds. If you can't get the jitter bounds from the Allan Variance, then it has no value in this particular application.
I began thinking about how to convert an Allan Variance/Deviation into a probabilistic bounds on frequency during a particular interval. However, it quickly became apparent that this is not a simple problem.
I fail to see how my description is inconsistent with the equation Allan presents:
(http://www.allanstime.com/images/Equations/avar2.gif)
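For anyone who can't load the image, the equation at that link is the standard two-sample (Allan) variance; in LaTeX form:

```latex
\sigma_y^2(\tau) = \tfrac{1}{2}\,\big\langle\,(\bar{y}_{k+1} - \bar{y}_k)^2\,\big\rangle
```

where ybar_k is the fractional frequency averaged over the k-th interval of length tau, and the angle brackets denote the expectation (an average over many such pairs).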
How are these statements inconsistent with the equation?

Quote
I CLEARLY state that you calculate the difference between two points in time, (y(t+tau)-y(t))^2. Certainly not the standard deviation of all the data points between t and t+tau. I also state that you have to do this many times. That's what the < > expressly mean.

Quote
The angle brackets are all important. You can't just do the 2-point difference once. It has to be done many times (hundreds or more) to reach the expectation value that Allan specifies. If you perform the calculation enough times then you'll get sigma^2 or sigma (which he calls the deviation), i.e., you are calculating a variance/standard deviation. The only difference between the two is that there'll be a factor of 2 difference in the gradient on the log-log plot.
The reason you have to perform the calculation over many t (i.e., many evaluations of (y(t+tau)-y(t))^2) is because of the pseudo-cyclostochastic nature of the signal (noise).
Either the equation he put in his own article is wrong or we are talking about two different things.
I don't have access to that reference. Can you not share what it says?
I don't have access to that reference. Can you not share what it says?
The link was in an earlier post (#21?) by GerryBags.
It seems to me you are trying to determine how much "off frequency " you are going to be based on the ADEV of the oscillator.
So far you have not mentioned (or I missed it) what your target frequency is.
A 1x10^-6 stable oscillator will be "off" 10 Hz @ 10 MHz and 10 kHz "off" @ 10 GHz; is that what you are looking for?
It seems to me you are trying to determine how much "off frequency " you are going to be based on the ADEV of the oscillator.
So far you have not mentioned (or I missed it) what your target frequency is.
A 1x10^-6 stable oscillator will be "off" 10 Hz @ 10 MHz and 10 kHz "off" @ 10 GHz; is that what you are looking for?
The "target frequency", if I understand your question, is 10 MHz. I have a bunch of 10 MHz oscillators that I want to compare.
I don't think a 10 MHz oscillator with an Allan Variance of 1x10^-6 will be "off" 10 Hz. That isn't the nature of this measure of stability.
The reason I have not walked away from this discussion as it increasingly goes on walkabout is I am building a poor man's time lab. I intend to measure the performance of the oscillators alluded to above, but I need to know what data to gather. Without understanding this, I am likely to measure attributes of the oscillators that have no practical value (I am beginning to think Allan Variance falls into this category). Also, if I have no idea how to analyze the data after gathering it, what the heck am I going to do with it? So, I will keep reading and keep asking questions until someone provides useful advice (I'm not saying some haven't done this already; they have. But there is a lot of chaff in this thread).
Have you read any of Bill Riley's work yet? Spend some time with the Stable32 manual and see what you think. (Stable32 is actually free now, and is worth becoming familiar with.)
FWIW, to test AllanTools there's a Kasdin & Walter noise-generator that generates phase-noise with different power-law coefficients, and one can then plot ADEV, MDEV, phase-PSD and frequency-PSD like so:
https://github.com/jleute/colorednoise (https://github.com/jleute/colorednoise)
the example-code that generates that figure contains the relations between phase-PSD, frequency-PSD, ADEV, and MDEV. Your patches for e.g. HDEV etc are welcome ;)
https://github.com/jleute/colorednoise/blob/master/example_noise_slopes.py (https://github.com/jleute/colorednoise/blob/master/example_noise_slopes.py)
Some of the theoretical expressions for ADEV/MDEV are in the IEEE-1139 standard (but not all IIRC).
A simulation with suitable power-law noise components and possibly some deterministic drift added should allow you to explore a lot of scenarios..
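If you just want power-law test noise without installing anything, a crude spectral-shaping generator works too. To be clear, this is not the Kasdin & Walter fractional-difference algorithm used in the linked repo, just a quick numpy stand-in:

```python
import numpy as np

def powerlaw_noise(n, alpha, rng):
    """Generate n samples with one-sided PSD ~ f^alpha by shaping white
    Gaussian noise in the frequency domain (crude stand-in, see note)."""
    W = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]                    # avoid divide-by-zero at DC
    W *= f ** (alpha / 2.0)        # amplitude scales as sqrt(PSD)
    return np.fft.irfft(W, n)

rng = np.random.default_rng(2)
white = powerlaw_noise(4096, 0.0, rng)     # alpha = 0: white noise
flicker = powerlaw_noise(4096, -1.0, rng)  # alpha = -1: flicker noise
```

Feeding the output into an ADEV routine should reproduce the familiar power-law slopes on a log-log plot.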
For the Arduino stuff a reasonable start is the TICC https://www.tapr.org/kits_ticc.html (https://www.tapr.org/kits_ticc.html)
an alternative could be the digilent analog discovery which was used in a recent "sine-wave fitting ADEV" paper https://arxiv.org/abs/1711.07917 (https://arxiv.org/abs/1711.07917)
cheerio,
A
The formulae for AVAR, MVAR and TVAR in terms of spectral transfer functions can be found on pp 104-106 of:
https://tf.nist.gov/general/pdf/1168.pdf
Even though I am unconvinced of the practical usefulness of Allan Variance and its derivatives, I appreciate the links.
The formulae for AVAR, MVAR and TVAR in terms of spectral transfer functions can be found on pp 104-106 of:
https://tf.nist.gov/general/pdf/1168.pdf
Even more reading to do :P. Nevertheless, thanks for the link.
The designer couldn't care less what the Allan Variance of the oscillator or the power law exponents of the component noise sources are. He wants to know the jitter bounds. If you can't get the jitter bounds from the Allan Variance, then it has no value in this particular application.
Some may criticize this example, pointing out that I know very little about doppler radar. That is absolutely correct. So, if there are any out there reading this thread who have experience in either professional or amateur doppler radar, I welcome their comments.
Radar people don't tend to use Allan deviation. They care about phase noise at offsets close to the carrier -- which, again, is the same basic measurement as ADEV, but without the ambiguity in the frequency domain. ("Jitter" is just another way of saying "phase noise" within specified limits of integration. "Jitter bounds" isn't a recognized technical term.)
Specifically, radar people don't care about ADEV because the long-term stability of the reference is not of interest. Ordinary frequency drift is disregarded by the signal-processing math simply because radar is inherently a residual measurement, where the returned echo is compared to the transmitter output.
Time-oriented folks are more likely to care about ADEV and related metrics. Need to know which oscillator keeps better time over intervals ranging from minutes to months? Measure the ADEV. Need to know which oscillator keeps better time from microseconds to seconds? Measure the PN.
Hard to see how to make it much more clear than this... but speaking as someone who occasionally needs to write user manuals and tutorials on the subject, I'm always open to suggestions. :)
Radar people don't tend to use Allan deviation. They care about phase noise at offsets close to the carrier -- which, again, is the same basic measurement as ADEV, but without the ambiguity in the frequency domain. ("Jitter" is just another way of saying "phase noise" within specified limits of integration. "Jitter bounds" isn't a recognized technical term.)
At some point you have to convert phase noise to frequency bounds (what I called "jitter bounds", which, I admit, is a term I made up in an attempt to get my point across - what would be the recognized technical term?). Take the example I gave of doppler radar. Unless I completely misunderstand how it works (a real possibility), if you want to specify the error bounds on object velocity, you have to factor in the error in the frequency source - how its output varies in frequency around the desired carrier frequency. Phase noise is normally specified as dBc/Hz at several narrow sidebands of the carrier. If you know how to convert that into errors in velocity estimates I would be extremely interested in learning about it (either by explaining it or pointing me to an appropriate reference).
I don't know what are the objectives of what you call "Time-oriented folks", but my guess is they are interested in keeping time, not using it in an application. Or, perhaps more accurately, keeping time is the application. My interests are different. I want to know what makes one oscillator better than another when used in an application.
Unfortunately I don't think anyone here has the faintest idea of the distinction you're trying to make. :( Definitely read everything you can find by Bill Riley, though. He can be considered a primary source for this stuff.
Take a look at this article (https://testworld.com/wp-content/uploads/2014/08/Phase-Noise-and-its-Changing-Role-in-Radar-Design-and-Test.pdf) for some example numbers. In practice, a radar designer would look at the area under the phase noise curve between selected integration limits, based on the performance range of interest. The result of that integration can be expressed in RMS seconds of jitter, and the term "jitter bounds" would most likely refer to the limits of integration used to calculate it.
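A sketch of that integration in numpy; the offsets and dBc/Hz values below are purely made up for illustration:

```python
import numpy as np

# Hypothetical SSB phase-noise curve L(f) in dBc/Hz vs offset from carrier
offsets = np.array([10.0, 100.0, 1e3, 1e4, 1e5])          # Hz
L_dbc = np.array([-70.0, -90.0, -110.0, -125.0, -140.0])  # dBc/Hz

# Integrate S_phi(f) = 2 * 10^(L/10) between the chosen limits
f = np.logspace(1, 5, 2000)                        # 10 Hz .. 100 kHz grid
L_i = np.interp(np.log10(f), np.log10(offsets), L_dbc)
S_phi = 2.0 * 10.0 ** (L_i / 10.0)                 # rad^2/Hz
phase_var = np.sum(0.5 * (S_phi[1:] + S_phi[:-1]) * np.diff(f))  # trapezoid
phase_rms = np.sqrt(phase_var)                     # RMS phase jitter, rad

f0 = 10e6                                          # 10 MHz carrier
jitter_s = phase_rms / (2 * np.pi * f0)            # RMS jitter in seconds
```

Changing the integration limits changes the answer, which is exactly why the limits have to be quoted along with any RMS jitter number.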
With regard to stationarity and cyclostationarity, current noise and thermal noise are non-stationary processes in the context of oscillator design where you are attempting to model performance over infinitesimal increments of time. However, in the context of evaluating oscillator performance, it's not really relevant as you correctly concluded. Over observational periods of many cycles the process is stationary.
These look pretty good:
https://www.keysight.com/upload/cmc_upload/All/PhaseNoise_webcast_19Jul12.pdf (https://www.keysight.com/upload/cmc_upload/All/PhaseNoise_webcast_19Jul12.pdf)
https://publications.npl.co.uk/npl_web/pdf/mgpg68.pdf (https://publications.npl.co.uk/npl_web/pdf/mgpg68.pdf)
https://tf.nist.gov/general/tn1337/Tn190.pdf (https://tf.nist.gov/general/tn1337/Tn190.pdf)
Measuring phase noise is an interesting problem. I noticed in the NIST update to 140 that improvements in practice must await better phase measurements.
If one gets good time domain data, one can do essentially the same analysis numerically: start with a Hilbert transformation of some kind to get phase data. This can include a mixing step (I/Q like) to also go to a lower frequency domain - this is kind of an easy way to do the Hilbert transformation. So one will get phase and amplitude data on a somewhat slower time scale, which is usually sufficient and a nice reduction in data rate, without losing significant information.
The first question is: do practical oscillators represent ergodic processes? I have seen it stated in several places that their associated processes are stationary, but I have not seen anywhere that they are ergodic.
The second question centers on the relationship between instantaneous frequency and instantaneous phase, w(t) = d/dt[phi(t)] (in radians/sec). What does this mean when phi(t=ti) is a random variable? Not clear. In order to compute the derivative, you have to take the limit as h->0 of [phi(t+h)-phi(t)]/h. But this usually presumes phi(t) is a continuous function in the vicinity of t. Random variables are not functions in this sense. They return different values each time they are "accessed", so I don't know how to compute this derivative.
The fourth question is, even if the last equation is true, how is phi(t) obtained?
The phase noise measurement described in the NIST link is using a mixer and reference signal of some kind to convert the signal before sending it to the spectrum analyzer. If done in a way to mainly get the quadrature signal, the signal is rather insensitive to AM and mainly reflects phase modulation / phase noise.
Having the mixer part before the analysis helps in that the part behind the mixer can be considerably lower frequency and thus less critical with respect to sampling frequency stability. If one gets good time domain data, one can do essentially the same analysis numerically: start with a Hilbert transformation of some kind to get phase data. This can include a mixing step (I/Q like) to also go to a lower frequency domain - this is kind of an easy way to do the Hilbert transformation. So one will get phase and amplitude data on a somewhat slower time scale, which is usually sufficient and a nice reduction in data rate, without losing significant information.
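As a concrete sketch of that numeric route, the analytic signal can be built directly with an FFT (a plain-numpy stand-in for a Hilbert transform; the 10 MHz tone below is synthetic):

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT: zero the negative frequencies and double
    the positive ones (plain-numpy stand-in for scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # keep the Nyquist bin as-is
    return np.fft.ifft(X * h)

fs = 1e8                                   # 100 MS/s (illustrative)
t = np.arange(4000) / fs                   # exactly 400 cycles of 10 MHz
x = np.cos(2 * np.pi * 10e6 * t + 0.3)     # 10 MHz tone, 0.3 rad offset
z = analytic(x)
amplitude = np.abs(z)                                  # instantaneous amplitude
phase = np.unwrap(np.angle(z)) - 2 * np.pi * 10e6 * t  # residual phase, ~0.3 rad
```

Subtracting the nominal carrier phase, as in the last line, is the numeric equivalent of the I/Q mixing step: what remains is the slow phase (and amplitude) record you actually want to analyze.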
In general, oscillators are not ergodic. They *are* if and only if they are constant amplitude. c.f. Bendat & Piersol example 5.11 pp 144-145 3rd ed.
Random processes are continuous except perhaps for some pathological examples.
I've attached the reference from B & P.
If over a sampling period, the variation is rapid and random, then *over a period at least as long* it may be considered ergodic. One can, however, get into trouble if the sampling interval is correlated with the amplitude variations. That's why I have repeatedly suggested sampling randomly.
In practice I have never encountered an instance where ordinary calculus was not sufficient. Yes, it is true that the conditions of ordinary calculus may not be met, but it doesn't matter. So far as I can see, all the exotic integrals are of necessity backward compatible with the traditional integral of Newton and Leibniz.
Judging by the radar problem and the time sampling/counting approach mentioned before, I think you need to look up the random walk problem, as in: we have a 10 MHz oscillator with its (white) phase noise, and want to know the probability distribution of the 10-millionth rising edge.
How would the 10-millionth rising edge be distributed around the ideal 1 second mark?
Is this what you are looking for?
Quote
what oscillator parameters are important when considering the selection of such an oscillator?
You might just have to contact the guy at match.com. They claim to be experts at doing this exact thing.
Restating the basic equation for a real oscillator:
v(t) = [V0 + e(t)] * cos[w0*t + phi(t)]
Quote
Restating the basic equation for a real oscillator:
v(t) = [V0 + e(t)] * cos[w0*t + phi(t)]
There are 4 unknowns in this equation. If one compares the voltages of 8 oscillators in a pairwise fashion for all 32 permutations one has an even-determined system of equations. I *think* that by counting the number of times each permutation exists at random times over periods of several cycles one can solve for all the variables without having to assume the availability of a perfect reference.
The random sampling is important as it precludes aliasing taking place. Mathematically this is rather exotic, at least for me, so there may be complications I've not spotted yet. It would be *very* interesting to know if there is anything in the professional literature related to this. I only became aware of the anti-aliasing properties of random sampling recently. In many applications one still needs a precise reference clock to record the time of the samples. But in this case, I think simply counting states will suffice.
For a sanity check, put a scope in XY mode and display the Lissajous figure. I was doing that at 10 MHz with an 8648C with the OCXO option and a 33622A yesterday. Also measure both frequencies with the same counter.
I don't think one can say that an oscillator is *best* without stating an application.
For a sanity check, put a scope in XY mode and display the Lissajous figure. I was doing that at 10 MHz with an 8648C with the OCXO option and a 33622A yesterday. Also measure both frequencies with the same counter.
I tried this, but the Lissajous figure wouldn't slow to a point that it was recognizable.
I decided to get some experience measuring phase delay between two oscillator signals. The oscillators I chose were my FEI FE-5650 and Rigol DG1022 set to 10 MHz. I connected these two oscillators to the AD8302 evaluation board I purchased and looked at the VPHS signal it generates. This output ranges from 0 to 1.8V and is interpreted as follows: 1) phase difference of 180 degrees - 30 mV; 2) phase difference of +/- 90 degrees - 900 mV; 3) phase difference of 0 degrees - 1.8V. VPHS tracks the phase differences with an advertised bandwidth of 30 MHz...
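For the record, here is the conversion I'm working from for VPHS voltage back to phase, assuming the nominal ~10 mV/degree slope implied by the endpoints above (my own helper, not code from the datasheet):

```python
def vphs_to_phase_deg(v):
    """Convert AD8302 VPHS output (volts) to |phase difference| in
    degrees, assuming the nominal mapping quoted above: 1.8 V at 0
    degrees, falling ~10 mV/degree toward ~30 mV at 180 degrees.
    Note the sign ambiguity: VPHS reports magnitude of phase only."""
    return (1.8 - v) / 0.010

quadrature = vphs_to_phase_deg(0.9)  # about 90 degrees
```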
To answer your question: best stability and phase noise for a given working voltage and power consumption, in a size that fits and doesn't cost an arm and a leg. Depending on what you are building and who is paying for it.
To answer your question: best stability and phase noise for a given working voltage and power consumption, in a size that fits and doesn't cost an arm and a leg. Depending on what you are building and who is paying for it.
Best stability measured by what metric?
When going window shopping I look at published specs.
This one is 0.1ppb and it's just under $1800 (so just an arm, you get to keep the leg ), and if you need more than three, you gotta wait 17 weeks :)
https://www.digikey.com/product-detail/en/abracon-llc/AOCJY6-10.000MHZ-1/535-11919-ND/3641391 (https://www.digikey.com/product-detail/en/abracon-llc/AOCJY6-10.000MHZ-1/535-11919-ND/3641391)
I've got my 8648C and 33622A within 0.01 Hz, which is as close as I can adjust the 33622A. So it takes a minute or two to go through a complete cycle. I can see the Lissajous figure "breathe" as it rotates slowly. So phase noise close in is clearly visible. The "breathing" is at 0.1-1 Hz. However, I have no way of knowing if it's real or an artifact of the DSO timebase.
Looks pretty good. But where's the Lissajous?
I don't think stating the obvious in a project like this is bad. I think it needs to be repeated regularly. Otherwise there is a great tendency to run off into the ditch.
Looks pretty good. But where's the Lissajous?
I don't think stating the obvious in a project like this is bad. I think it needs to be repeated regularly. Otherwise there is a great tendency to run off into the ditch.
I finally got the Lissajous to work, but on my scope it was basically useless. I could get it to almost stop, but the lines defining it were so thick that you couldn't really see any "breathing".
As a perfect reference does not exist, the only way to do that is to set up a system of equations with as many equations as there are unknowns. We can do that with 8 oscillators by comparing phase pairwise for all pairs. However, that leads to 32 equations.
The preceding is a national lab level measurement. We need to determine if it has been tried or studied and found to have a fatal flaw other than the problem of output data volume. I don't think it sensible to move on implementing it without having completed a thorough analysis.
I am rather surprised by your assertion that this has not been investigated. While the problem was for a long time computationally difficult, modern computers can handle it with ease.
Do you know the name of anyone at NIST who works on such things?
The correct equation is N*(N-1)/2 = 4*N. So one needs 9 oscillators to produce 36 equations with 36 unknowns.
v(t) = [V0 + e(t)] * cos[w0*t + phi(t)], where V0 is the base oscillator amplitude, w0 is the base oscillator frequency (in radians/sec), and both e(t) and phi(t) are stochastic processes that respectively add amplitude noise and phase noise to the oscillator's output.
So, when does it become necessary to consider the Allan Variance/Deviation of a clock for a particular application? I hypothesize in order to start discussion that it has something to do with the knee shown in Figure 3 of the paper referenced above. As long as the local variance and Allan variance are for all practical purposes equal, then traditional oscillator stability measures (e.g., parts-per-whatever/(minute, hour, day) frequency error rates, phase noise values/plots in rad2/Hz) are sufficient. Applications needing clocks to operate for sufficiently long periods of time (that period being related to the knee in Figure 3), on the other hand, probably should consider the Allan Variance/Deviation of the clocks they select for use.
So, when does it become necessary to consider the Allan Variance/Deviation of a clock for a particular application? I hypothesize in order to start discussion that it has something to do with the knee shown in Figure 3 of the paper referenced above. As long as the local variance and Allan variance are for all practical purposes equal, then traditional oscillator stability measures (e.g., parts-per-whatever/(minute, hour, day) frequency error rates, phase noise values/plots in rad2/Hz) are sufficient. Applications needing clocks to operate for sufficiently long periods of time (that period being related to the knee in Figure 3), on the other hand, probably should consider the Allan Variance/Deviation of the clocks they select for use.
Question: How do you know where the knee is?
Answer: Allan Variance
General:
1. There is nothing about the Allan Variance that restricts its use to the "long term."
2. There is no information that can be derived from the standard deviation that cannot be derived in a more unambiguous way from the Allan Variance.
3. The Allan Variance tells you when the standard deviation is a useful parameter.
Perhaps you would be so good as to produce a concrete mathematical example.
So far all you have done is thump your chest claiming great authority and expertise.
Anyone who actually understands anything can explain it to a 12 year old. So how about explaining it to PhDs. That should be even easier.
Someone providing janitorial services to NIST has NIST as a customer. But they're still just a janitor.
"An Allan deviation of 1.3×10^-9 at observation time 1 s (i.e. tau = 1 s) should be interpreted as there being an instability in frequency between two observations 1 second apart with a relative root mean square (RMS) value of 1.3×10^-9. For a 10 MHz clock, this would be equivalent to 13 mHz RMS movement. If the phase stability of an oscillator is needed, then the time deviation variants should be consulted and used."
It is not entirely clear, at least to me, what this means. Variance is a measure related to a probability distribution function (pdf), specifically that distribution's second moment. The statement in the above quote seems to suggest a deterministic "movement" in the signal's frequency. This is more apparent in the example, where it is suggested that for a 10 MHz clock, the movement would be equivalent to "13 mHz RMS movement". I think the problem is the idea of "movement" is left undefined. Given that Allan Variance is a variance, I would have expected an interpretation that referenced a probability bounds on the signal's frequency at the end of the 1 s period.
I would like to make one other point. When the article states, "If the phase stability of an oscillator is needed, then the time deviation variants should be consulted and used.", does this mean that the traditional variance of frequency fluctuations is the appropriate measure to use when computing phase stability? If someone knows the answer to this question, would they respond?
That's a reasonable explanation, although note that conventional RMS is 'global' i.e. you compare each data-point to the mean of all data and compute the root-mean-square, while ADEV is 'local' in the sense that you only take consecutive frequency-points/pairs from the time series in order to build the sum.
....
maybe the wikipedia use of 'movement' is not the best here - I don't think any deterministic movement should be understood.
In the example if you take a time-series of frequency-points, each averaged for 1s, and histogram the difference between consecutive points, you should get some (not necessarily known..) distribution with a width of 1.3e-9 in relative units (13 mHz if the time-series is in Hz).
....
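To make that histogram picture concrete, here is a toy numeric version. The series is synthetic white-FM data so the distribution is known in advance; real oscillator data need not be Gaussian:

```python
import numpy as np

# Toy time series of 1 s frequency averages, relative sigma = 1.3e-9
rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.3e-9, 50_000)

d = np.diff(y)                           # consecutive-pair differences
adev_1s = np.sqrt(0.5 * np.mean(d * d))  # ADEV at tau = 1 s, ~1.3e-9
# Note: the RMS width of the difference histogram itself is
# sqrt(2) * ADEV, hence the factor 0.5 inside the square root.
```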
TVAR is just MVAR scaled with the averaging-time (usually 'tau'), and thus TDEV has units of time (seconds). It predicts how much variance in phase (in units of time) to expect (in an RMS-sense) from one phase point to the next (where the spacing between points is tau).
In practice there are technical problems with measuring a (gap-free!) frequency time-series and then predicting (integrating) phase from that - not recommended. For timekeeping measure phase with a time-interval counter.
...
TVAR is just MVAR scaled with the averaging-time (usually 'tau'), and thus TDEV has units of time (seconds). It predicts how much variance in phase (in units of time) to expect (in an RMS-sense) from one phase point to the next (where the spacing between points is tau).
In practice there are technical problems with measuring a (gap-free!) frequency time-series and then predicting (integrating) phase from that - not recommended. For timekeeping measure phase with a time-interval counter.
...The classical M-sample variance of frequency was analysed by David Allan[3] along with an initial bias function. That article tackles the issues of dead-time between measurements and analyses the case of M frequency samples (called N in the article) and variance estimators...
maybe the wikipedia use of 'movement' is not the best here - I don't think any deterministic movement should be understood.
In the example if you take a time-series of frequency-points, each averaged for 1s, and histogram the difference between consecutive points, you should get some (not necessarily known..) distribution with a width of 1.3e-9 in relative units (13 mHz if the time-series is in Hz).
Consider the situation in Figure 1. Each period produces a result - either G or L. These results are analyzed over the averaging interval. If the probability of obtaining G is p, then the probability of obtaining an L is 1-p. For simplicity it is assumed that p = 1-p = 0.5.
For measuring oscillator stability the statistic of interest is not how many Gs or Ls appear in an averaging interval, but the difference between these values. The process represented by an averaging interval is well-known and is called a Bernoulli trial. The expected value of the difference between the number of Gs and Ls is presented here (https://stats.stackexchange.com/questions/258011/expected-value-and-variance-of-number-of-successes-minus-number-of-failures), specifically: 2mp - m = m(2p-1) = 0. [Note: the referenced web page uses n as the number of trials, whereas here that value is m. The value n is used here to represent the number of averaging intervals. Also, the problem solved there is stated in terms of successes and failures. The logic is exactly the same. Simply substitute L for success and G for failure.]
The variance of the difference between the two random variables in a Bernoulli trial (see above reference) is: 4mp(1-p), which equals m when p = 0.5. Notice (!) that the variance depends on m. So, as the value of tau increases, so does the variance.
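A quick simulation bears this out (the values of m and the interval count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_intervals = 1000, 20_000
g = rng.binomial(m, 0.5, size=n_intervals)  # number of G's per interval
diff = 2 * g - m                            # (#G - #L) = 2*#G - m
var = diff.var()                            # ~4*m*p*(1-p) = m for p = 0.5
```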
Given the capabilities of computers in the 1960s and 1970s, when Allan Variance was developed, it was necessary to increase tau in order to obtain long-term measures of clock stability. Today, computers are much more powerful. So, it would be interesting to determine the sample_time/tau ratio above which an analyst would be forced to increase tau in order to obtain practical clock evaluation results. This would, of course, depend on the computer available. However, I would guess most desktop systems these days could analyze a very long data set in a practical amount of time.
In particular, it leads to the formulation of a generalized Wiener process with continuous time and continuous variable. Believe me, this is not something an amateur wants to get anywhere near.
QuoteIn particular, it leads to the formulation of a generalized Wiener process with continuous time and continuous variable. Believe me, this is not something an amateur wants to get anywhere near.
I don't think it's all that bad, but then I've been doing it for 30 years. But, yes, they had good reason to call the classified version of "Extrapolation, Interpolation and Smoothing of Stationary Time Series" the "yellow peril" in reference to the yellow covers indicating a classified document.
I'll comment further later, but I did an interesting experiment today. Using a Rohde & Schwarz RTM3104 I measured the period of my GPSDO at an output frequency of 10 MHz. I also measured the output of my 33622A at 10 MHz. I don't know the jitter spec for Leo Bodnar's dual output GPSDO, but Keysight claims less than 1 ps for the 33622A and less than 0.5 ps with the OCXO option, which I don't think I have. The statistics function of the RTM3K gave a standard deviation of ~26 ps for both sources. At present I have to attribute that to the RTM3K time base.
Given this background, the confusing result is immediate. The tau interval frequency is the sum of the average period frequencies divided by the number of periods (m, where tau = m * oscillator period length). The central limit theorem (see Central Limit Theorem (https://www.probabilitycourse.com/chapter7/7_1_2_central_limit_theorem.php)) stipulates that the average of any set of i.i.d. random variables, whatever their underlying probability distribution, converges to a random variable with a normal distribution with mean mu (the expected value of each period distribution) and variance sigma^2/m (where sigma^2 is the variance of the period distribution). Consequently, the variance of the tau interval frequency should decrease with increasing tau, since when tau increases, so does m.
Can anyone figure out where I have gone wrong? Might it be that I have assumed non-random effects are first removed before analyzing the data? If so, then Allan Variance has little attraction, since regression analysis has improved considerably since the 1960s and 1970s. The sample time average is the average of the tau averages, so the central limit theorem applies to the sample average as well. One could utilize the 3-sigma rule on the standard deviation of the sample time distribution to compute a probability bound on clock jitter.
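For what it's worth, the sigma^2/m behavior invoked above is easy to check numerically. A quick sketch (Python, illustrative only): average m i.i.d. samples, repeat many times, and compare the variance of those averages against sigma^2/m:

```python
import random

def variance_of_mean(m, trials=20000, sigma=1.0, seed=2):
    """Empirical variance of the average of m i.i.d. Gaussian samples."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, sigma) for _ in range(m)) / m
             for _ in range(trials)]
    mu = sum(means) / trials
    return sum((x - mu) ** 2 for x in means) / trials

# CLT: the variance of the average shrinks as sigma^2/m.
print(variance_of_mean(4))    # close to 1/4
print(variance_of_mean(16))   # close to 1/16
```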
.. This means it slips ~1.15 cycles per hour or ~27.7 cycles per day. At 100 ns per cycle, this means it is slipping 2.77 msec per day (27.7 cycles*(100 ns/cycle)). Comparing that to the FEI FE-5650 spec of 2*10^-11/day for drift, my oscillator is over 1000 times worse. For example, see this list (https://www.meinbergglobal.com/english/specs/gpsopt.htm) where it also specifies a drift figure of 2*10^-11/day and equates this to gaining/losing +/- 1.1 usec per day.
...
27.7 * 100E-9 s per day = 2.77E-6 s per day
relative error = 2.77E-6 s per day / 8.64E4 s per day = 3.2E-11
So your Rubidium reference is a little out of spec., but not way out. May I suggest leaving it powered, uninterrupted, for a week, and checking the drift once per day. You might find it comes back into spec. The good news is that the servo loop that drives the Rb resonance cell seems to be working.
Usually, Rb references come with a C-field adjustment that you can fine-tune, once you are satisfied the drift rate is stable. See for example the C-field section in:
http://www.wriley.com/Rubidium%20Frequency%20Standard%20Primer%20102211.pdf
Could the quality/strength of the GPSDO reception be causing this ?
There's a host of simple sanity checks you should do. The first is to connect one oscillator to both inputs, with one arm delayed by a long (20-50 ft.) piece of coax.
Good thought. Anything else?
There's a host of simple sanity checks you should do. The first is to connect one oscillator to both inputs, with one arm delayed by a long (20-50 ft.) piece of coax.
% "randg" is the captured sample matrix; columns 2 and 3 hold the two channels
r=randg(:,2);
g=randg(:,3);
% analytic signals via the Hilbert transform
r_h=hilbert(r);
g_h=hilbert(g);
% instantaneous phase difference between the channels, in radians
phase_rad = angle(r_h ./ g_h);
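For anyone without Octave, the same analytic-signal trick can be sketched in Python with a hand-rolled FFT Hilbert transform. The synthetic 30-degree offset below is purely illustrative; with an exact integer number of cycles in the window the recovered phase is essentially exact:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT, same idea as Octave's hilbert()."""
    n = len(x)              # assumes even n for simplicity
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0  # keep DC and Nyquist as-is
    h[1:n // 2] = 2.0       # double positive frequencies, zero negatives
    return np.fft.ifft(X * h)

# Two 10 MHz tones at 500 MSa/s, 100 full cycles, known 30 degree offset
fs, f0, n = 500e6, 10e6, 5000
t = np.arange(n) / fs
r = np.sin(2 * np.pi * f0 * t)
g = np.sin(2 * np.pi * f0 * t - np.deg2rad(30.0))
phase_deg = np.rad2deg(np.angle(analytic(r) / analytic(g)))
print(phase_deg.mean())   # recovers ~30 degrees
```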
Following this suggestion, I performed the indicated delay check on both the Rubidium and GPSDO oscillators. Instead of using a 20-50 foot coax, I used an 83 foot RG-58 coax I had lying around. The delay for this coax (using the established signal delay for RG-58 of 1.541 ns/foot) is ~128 ns. When I eye-balled the cursors to the peaks of the signal and its delayed twin, I measured 29.2 ns. So, the delayed signal slipped a period and represented 129.2 ns of delay...
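The period-slip arithmetic above is worth making explicit: at 10 MHz the period is 100 ns, so a cursor measurement only pins down the delay modulo one period. A quick check (Python, using the 1.541 ns/ft figure quoted above):

```python
f0 = 10e6                  # carrier frequency, Hz
period_ns = 1e9 / f0       # 100 ns per cycle at 10 MHz
delay_ns = 83 * 1.541      # 83 ft of RG-58 at 1.541 ns/ft -> ~127.9 ns
apparent_ns = delay_ns % period_ns   # what the scope cursors actually show
print(round(delay_ns, 1), round(apparent_ns, 1))   # 127.9 27.9
```

The ~27.9 ns predicted here matches the ~29.2 ns eyeballed on the scope to within cursor accuracy, once the slipped period is added back to get ~129 ns of true delay.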
At this point, I am beginning to believe the large phase difference result as an actual phenomenon, rather than a measurement blunder. However, I am not confident of this conclusion and think it is now time to ask others (in the spirit of scientific enquiry) who have a Rubidium oscillator or GPSDO oscillator (of the same kind I have) if they would run tests on their units to see if they confirm or repudiate these results. It is unnecessary to use an AD8302 in this process. Any method that indicates the phase differences of a delayed GPSDO, delayed Rubidium or Rubidium versus a GPSDO would suffice.
Here's the sanity check: You are seeing phase measurements that range from 16 degrees to 41 degrees, when you measure from one cycle of an oscillator to the next cycle ... does that seem reasonable?
If you really want to make this unambiguous, cut your cable down to 20' and repeat the measurement.
I did a quick measurement (with a long cable delay in one arm) using a time interval counter. I could see 0.01 deg variations using a good oscillator, which is likely limited by the 4ps resolution of the counter. A lower quality oscillator gave variations of about 0.1 deg. The only way I could get the variations to approach 1 deg was by attenuating the oscillator amplitude, thereby decreasing the rise time and introducing discriminator errors.
I do not understand your point. What do you mean by "when you measure from one cycle of an oscillator to the next cycle"? The total variation is observed over 120,000 cycles.
I specifically asked you whether delaying by more than one period would be important. You did not answer. If you think this is an important point, would you answer it now and explain why it is important?
Also, the total variation is not a per period statistic. It represents the maximum phase difference minus the minimum phase difference over the whole interval of 120,000 cycles. For example, taking the GPSDO delayed case, one period during the interval had a phase shift of 90.9 degrees, while another, probably distant from the first period, had a phase shift of 49.3 degrees. It is extremely unlikely that the change from maximum to minimum phase shift occurred over a single period.
Would you identify the oscillators you tested? My observations were for specific oscillator types (specifically, a cheap eBay GPSDO and a FEI FE-5650A Rubidium oscillator). I am not claiming the results apply to all oscillators.
But more importantly, I don't understand your test set-up. Would you elaborate? Does your statistic of 0.1 degree variation represent the maximum phase fluctuation over a large number of cycles?
Thinking about this, it is possible there is non-random drift in the data. If you observed 0.1 degree variation between consecutive periods, then if this were a systematic drift in one direction, over 120,000 cycles the phase difference would have changed by 12,000 degrees (obviously, this is an extreme example, which I make only to suggest how your results and mine might be harmonized). I will look to see if there are any such systematic factors in the data.
But, even if there are, the changes I reported seem large considering the measurement interval was only 12 msec long. If these results stand up, I would say using either of these oscillators as the distribution signal to synchronize instruments in a lab would be problematic for most (or at least many) tests.
FFS there's an easy way to quantify this.
Did you not follow my link in reply #163 or not understand its content ?
I had a lot of trouble understanding it, since the poster doesn't seem to be a native English speaker.
All the info and clues you need to perform the same measurement are contained in the screenshot.
Please study it again in depth......every little snippet of info.
Clue: look at the Stats box and the # count = 1000 s, so with infinite persistence the jitter of the 10 MHz ref signal (ch4) WRT the 1 PPS (ch1) is under 4 ns over 1000 s!
Very clever use of standard features in a DSO. ;)
You've got infinite persistence, cursors and a stopwatch haven't you ? ;)
I looked at the Rigol 1104Z manual, but could find no way to measure the skew between channels. So, I don't think I can perform the procedure suggested by the screen shot.
Not that I think it important, but to get this red herring off the table, I found two 10' coax patch cables and attached them to each other to create a 20' length. I then ran the GPSDO delayed test. The results are not surprising. Here are the maximum and minimum phase differences.
max=57.6 degrees
min=35.4 degrees
variation = 22.2 degrees
Consequently, reducing the length of the delaying coax provides no insight into the problem.
If the phase difference between the signal from the same source traveling down two pieces of coax is not constant, then the experimental setup is flawed.
That is what I thought. But, when I studied the data, another possibility arose, which I think is the real answer. (And it has nothing to do with the length of coax used for the delayed signal)
I couldn't figure out how the phase difference between a signal and its constantly delayed image could vary as much as the data indicated. Then I noticed an artifact. (see Figure 1)
Figure 1 - (https://www.eevblog.com/forum/metrology/an-advanced-question-sampling-an-oscillators-signal-for-analysis/?action=dlattach;attach=473108)
Figure 1 is an image produced by plotting the (first 20,000 data points in the) result of the following Octave code (where "p" holds the GPSDO delayed phase difference data):
Fc=10000000;                % low-pass cutoff: 10 MHz
Fsam=500000000;             % sample rate: 500 MSa/s
Fnyq=Fsam/2;                % Nyquist frequency
[b,a]=butter(6, Fc/Fnyq);   % 6th-order Butterworth low-pass
output=filter(b,a,p);
pf=output;
pfn=pf(:,2);                % keep the second column
pfn=pfn.-mean(pfn);         % subtract the mean (Octave's element-wise .-)
This code conditions the data by first applying a 6th-order 10 MHz low-pass Butterworth filter (to eliminate the 20 MHz superimposed signal) and then normalizing it by subtracting the mean from each element. I have marked prominent spikes in the data with red lines.
A free running oscillator would not have such spikes, but neither the Rubidium oscillator nor the GPSDO are free running oscillators. They are disciplined oscillators comprising a crystal oscillator that is periodically corrected by a reference signal. The periodicity of this correction (technically, its reciprocal) is commonly referred to as the servo loop bandwidth. My current hypothesis is the spikes represent periodic corrections to the frequency/phase of the crystal oscillator.
The effect of this is the crystal oscillator free runs for a while and then experiences a movement in frequency/phase. Sometimes this movement is significant, which appears as a large change in the phase difference between the signal and its delayed image.
One question that presented itself is how could the free running oscillator drift so far in frequency as to require a significant correction? One possibility is a previous change overcorrected the error, which then requires a significant movement in the opposite direction. That is speculation, but it is at least plausible.
I eye-balled the distance between two spikes and it was about 1200 points apart. At 2 ns between data points, this represents about 2.4 usec of separation. That would imply a servo loop bandwidth of ~417 kHz.
Since I do not have access to the circuit diagrams and design information for the GPSDO, this is still a working hypothesis. However, it is a plausible explanation for the significant differences in the phase difference data. I don't know any engineers who have designed either a GPSDO or a Rubidium oscillator, so I cannot ask them whether this hypothesis makes sense. If anyone reading this thread is such an engineer or knows someone with such experience, comments from them would be appreciated.
Feed the signals from the two different lengths of coax to your DSO. Place cursors at the zero crossings. Let it run for as long as you like. The phase relationship should not change unless the coax is bad or you have a reflection problem. Until you can get a reliable signal to the instrument there is no point in speculating about possible causes of artifacts in the phase measurements.
You will get apparent jitter in a stable signal because the scope is interpolating the trigger point and the measurement points. The 40 ps pulser is far less stable than the 33622A or GPSDO, but it *appears* to have less jitter because it has a very fast edge. I don't know the jitter spec for Leo's GPSDO, but the 33622A is specified at less than 1 ps, yet the RTM3K indicated ~24 ps standard deviation for the time period. That's not real. It's a DSO artifact.
7042 shows the 33622A hooked up with what should be a good piece of coax, but there is an obvious mismatch. There is not 30 pS of jitter in the 33622A. The GPSDO has a faster rise time so the step is more pronounced as seen in 7037, but when I connected the GPSDO directly the step went away. So my "To Do" list got testing and culling BNC cables added.
Tomato is absolutely correct, though not very clear. The first requirement is to verify that you can get accurate signals to the test device. There is no reason to assume that the inputs to the AD board are actually 50 ohms. Or that anything else is 50 ohms. 10 MHz is not all that high, but it is still RF and can be confusing because of the speed. I made the mistake of buying 10 Chinese BNC cables. They make great 50 MHz notch filters, but are useless for anything else.
I suggest you start by sweeping your cables on the spectrum analyzer. If at all possible do the cal with a known high quality N cable. My "To Do" list already has testing a bunch of Chinese adaptors of which I know at least one is bad and I suspect there are others.
I think it is best at this point to document the test setup and seek constructive criticism of it.
1) You've got some termination issues. You can't just connect your signal to the AD chip with BNC tees, because the AD inputs are terminated with 51Ω resistors. You need to connect via splitters or directional couplers.
2) Why in the world do you have 30dB attenuators on the inputs of the AD chip?
1) You've got some termination issues. You can't just connect your signal to the AD chip with BNC tees, because the AD inputs are terminated with 51Ω resistors. You need to connect via splitters or directional couplers.
A mild understatement.
1) Tee + terminator != thru terminator. That little stub rings like mad, but because it's short you can't see it on the Rigol. Can't see it on my 200 MHz Instek either. But it's there.
#6 40 ps pulser to 50 ohm thru
#7 same but Tee + terminator
#8 Tee+terminator but with a short BNC cable between the Tee and the terminator
The white reference trace in #7 & #8 is the trace in #6
#9 pulser feeding a Tee and BNCs w/ thru terminators. One cable is a couple of inches longer. Again, the reference trace is #6. Note the apparent increase in gain as we now have approximately 25 ohms terminating the pulser rather than the 50 it needs.
To summarize: You cannot make meaningful measurements with things connected the way you have them. I suggest a quick review of transmission lines.
Thank you for your comments and question. I will address them in reverse order.
Since you are asking about attenuators on the AD8302 inputs, I presume you have read the device data sheet. If that presumption is correct, then you know the input power range is 0 dBm to -60 dBm (with respect to a 50 ohm load).
I have several oscillators I want to characterize using the test set-up. These are (among a larger set) the GPSDO, which outputs a 1.25V P-P sine wave, the Rubidium, which outputs a 1V P-P sine wave, and an OCXO, which outputs a 50% duty cycle square wave from 0 to 3.5V. The RMS voltage of a 50% duty cycle non-negative square wave is VP-P/sqrt(2). So, the RMS voltage of the OCXO output is ~2.47V. Looking into a 50 ohm load, its power is Vrms^2/R ~= 6.1/50 = 0.122 watt =~ 20.9 dBm. So, a 20 dB attenuator just misses the mark, which implies the next common attenuator value of 30 dB. That is why I put them in front of the AD8302 inputs.
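The dBm arithmetic above generalizes to the other sources, so here is a small sketch of it (Python, illustrative only):

```python
import math

def square_wave_dbm(v_high, duty=0.5, r_load=50.0):
    """Power of a 0-to-v_high square wave into r_load, expressed in dBm."""
    v_rms = v_high * math.sqrt(duty)          # RMS of a 0..V square wave
    p_watts = v_rms ** 2 / r_load
    return 10.0 * math.log10(p_watts / 1e-3)  # dBm: dB relative to 1 mW

print(round(square_wave_dbm(3.5), 1))   # OCXO output: ~20.9 dBm into 50 ohms
```

Since the AD8302 wants 0 dBm or less, a 20 dB pad leaves the OCXO slightly hot, hence the step up to 30 dB.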
I wanted to address the pad issue first ... now is a good time to investigate how the input circuit might affect the results I seek
This leads me to believe the phase difference data should be unaffected by the termination issues you raise. I am, of course, open to clear arguments that suggest otherwise.
You're making things too complicated again. A properly designed attenuator terminated by 50Ω will appear as 50Ω at its input.
The problem is that your signal sees 25Ω at the BNC tee, because it is split into two paths that are both 50Ω. You need a splitter or directional coupler instead of the BNC tee.
OK. I need some help finding a splitter that satisfies the requirements you think important. Will this one (https://www.ebay.com/itm/ZFSC-2-2A-POWER-SPLITTER-FREQ-10MHZ-TO-1000-MHZ-50-OHMS-BNC-NEW-OLD-STOCK/291026518542?hash=item43c288420e%3Ag%3AYWUAAOxyzi9SlmL5&_sacat=0&_nkw=50+ohm+10+MHz+bnc+splitter&_from=R40&rt=nc&_trksid=m570.l1313) work?
Oscillator | Lower (MHz) | Upper (MHz) | Bandwidth |
GPSDO | 9.589 | 10.402 | 813 kHz |
Rubidium | 9.669 | 10.328 | 659 kHz |
Rigol | 9.688 | 10.309 | 621 kHz |
I think you're measuring the phase noise of the spectrum analyzer; a good oscillator is below -150 dBc/Hz at 1 kHz offset.
My intention was to explore the phase noise bandwidth issue, not to obtain definitive measurement values.
Setting your markers to where the curve falls off the bottom of the screen is not a valid way to measure the bandwidth. Adjust your vertical scale and measure the actual width of the curve.
You can read "Choosing a Phase Noise Measurement Technique" from HP or Agilent.
One problem I have is using the techniques described in these (and your) references requires an existing test setup. The one I have been using has the disadvantage that the data capture device is my Rigol 1104Z oscilloscope. The lowest sample rate I can select is 25 Msa/s and since I have only 6 Mpts of memory depth, the longest sample I can capture is 240 msec. While I can process the data from such a sample using software filters and FFT based spectral analysis, I am worried that this short sample limitation may not allow me to get an estimate of the noise bandwidth I will need to handle when I start looking at longer sample intervals. I need that estimate to determine the rate of sampling I need to support, which influences the backend data storage system design.
Are you sure about that? It's impossible to believe any scope would be that hamstrung.
2) will the phase noise bandwidth in these sub-minute samples accurately represent the phase noise bandwidth in longer intervals?
Read Bendat & Piersol. The time window controls your RBW. The sample rate controls the Nyquist BW.
Why do you think it is necessary to sample the phase noise for ~minutes to determine the bandwidth of the phase noise?
Right now I am attempting to design a test setup that I can use to explore oscillator properties in the face of any eventualities. Some of what I have read suggests some components of oscillator phase noise are cyclostationary. Other authors dispute this, but indicate that the noise sources are correlated in such a way that provides the appearance of cyclostationarity (see Cyclostationary Noise in RF Circuits (https://kenkundert.com/docs/msicd99.pdf)).
I don't want to build the test setup under the assumption that all oscillator noise sources are i.i.d., since somewhere down the road I may find out this is not true. Limiting samples to a short time period could hide properties (like non-stationarity or cyclostationarity) that may turn out to be important. Getting a handle on phase noise bandwidth defined over reasonably long sample periods will allow me to design the test system to handle whatever turns up. Actually, I don't need the exact phase noise bandwidth observed over long periods; I need an upper bound on phase noise bandwidth in order to properly design the data acquisition system.
If you can make a convincing argument (not just a proof by emphatic assertion) that 50 seconds of data at 20 Ksa/s is sufficient to develop the upper bound I am looking for, I would be deeply grateful.
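For concreteness, here is what Bendat & Piersol's two knobs give for exactly that 50 s / 20 kSa/s capture (Python, trivial but illustrative):

```python
t_window = 50.0   # capture length in seconds
fs = 20e3         # sample rate in Sa/s

rbw = 1.0 / t_window             # resolution bandwidth set by the time window
nyquist = fs / 2.0               # highest observable Fourier frequency
n_bins = int(fs * t_window / 2)  # one-sided FFT bins covering DC..Nyquist

print(rbw, nyquist, n_bins)   # 0.02 Hz resolution, out to 10 kHz, 500000 bins
```

So a 50 s capture already resolves features down to 0.02 Hz from the carrier; the open question is only whether 10 kHz of Nyquist bandwidth is a safe upper bound on the phase noise bandwidth.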
You don't seem to have a lot of experience in this field, and I'm just trying to save you some effort.
I'm sorry, but I don't have time to write lengthy posts "proving" anything. I will continue to make suggestions, but it doesn't offend me if you ignore them.
Read Bendat & Piersol. The time window controls your RBW. The sample rate controls the Nyquist BW.
I plan on getting back to Bendat & Piersol when I start getting some data that isn't corrupted by poor measurement techniques. Right now I am focusing on getting the test setup and test procedures properly designed.
Until you have read B&P cover to cover at least once, and in your case probably twice, you will not be able to acquire usable data. To design the experiment you have to be able to write out and solve the equations which describe any experimental setup you are considering.
If you want to investigate cyclostationarity, you've got a lot of math to master. I offered a design using comparators and multiple oscillators which should work. But you rejected that.
You are fundamentally limited by the phase noise of the instrument you use to make the measurements whether you use a DSO or an SA. There are ways to address that, but until you have a good bit of experience with things like Wiener prediction error filters and can look at the equation for a time domain signal and immediately write out the Fourier transform you're not going to get anywhere.
Wiener prediction error filters have a habit of blowing up, so designing them is a rather ticklish and typically iterative process. This is why I suggested the comparator arrangement.
In the above quote, you address the data analysis problem. Once the type of signal (e.g., phase difference, zero-crossing count per unit time), the sample rate, error bounds, and precision of the data are known (there may be other factors, but these are the major ones), the data analysis problem need not consider how the data was captured.
As I recall, you have a Siglent SSA3021X. Did you connect the 10 MHz ref in to the GPSDO or any of your other reference oscillators?
No matter how you acquire data, you are always going to have the convolution of a reference oscillator and the DUT to deal with.
Why don't you connect one of the reference oscillators to the ref in and set the instrument to use that? Then examine the other oscillators with that as the reference. You're attempting to evaluate reference oscillators without using them. If you're not going to use one, why bother having one?
I'm afraid I don't see how you can measure the phase noise of a 10 MHz oscillator with a delay line. I think I understand how to do it at several GHz, but it seems physically impractical at HF. And I'm not entirely sure you can measure phase noise with a delay line at any frequency. I sort of *think* it might be possible with a variable delay line at several GHz, but I've not convinced myself it's true.
But if you can do it, why would you do anything else?
Whether you sample with a DSO or heterodyne in an SA, you are doing a multiplication in time. So the spectra of the two oscillators are convolved with each other. While it was not stated in those terms, that was the point of the comment that you were observing the phase noise of the SA, not the oscillators.
I am not sure why you think you cannot use the delay line technique on a 10 MHz signal. The wavelength of 10 MHz in RG-58 coax is 64.9 feet (see this post (https://www.eevblog.com/forum/metrology/an-advanced-question-sampling-an-oscillators-signal-for-analysis/msg1667915/#msg1667915)). I have 83 feet of coax and have just received another 100 feet. That is a total of 183 feet. That is ~2.8 wavelengths. I am in the process of building a selectable delay device that will give me between 1 and 100 ns (i.e., up to another wavelength) of delay. So, with this equipment, I can get the delayed signal 3.8 wavelengths away. I will use the selectable delay to put the original and delayed signal in or close to quadrature to get the most precise measurement from the AD8302.
The delay line method is a perfectly good way to make measurements, but you will want to buy a giant spool of coax if you want to do it. Your (relatively) short delay line will only allow you to see higher frequency phase noise. A much longer delay line is needed if you want to measure phase noise near the carrier.
Good point. According to Phase Noise and AM noise measurement in the Frequency Domain (https://tf.nist.gov/general/tn1337/Tn190.pdf) page TN-222, in the paragraph following equation 85, in a power limited system (which is true for my setup, i.e., the power of the oscillator under test is fixed), the optimal coax length occurs when the attenuation it induces is 8.686 dB. The attenuation for RG-58 is 1.4 dB/100 feet (see this coax attenuation chart for 10 MHz (http://www.w4rp.com/ref/coax.html)), which means the maximum length of coax I need is (8.686/1.4)*100 =~ 620 feet. I already have 183 feet, so I need another 440 feet. Actually, I will be using RG-174 to implement the selectable delay device and it has an attenuation of 3.3 dB per 100 feet, so I can probably get away with another 400 feet of RG-58. That will cost about $56.
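That length estimate is easy to re-check and to redo for other cable types (Python sketch; the attenuation figures are the ones quoted above, and the 8.686 dB optimum is the NIST TN-1337 result):

```python
def optimal_delay_line_ft(atten_db_per_100ft, target_db=8.686):
    """Cable length (feet) at which total loss reaches the 8.686 dB optimum."""
    return 100.0 * target_db / atten_db_per_100ft

print(round(optimal_delay_line_ft(1.4)))   # RG-58 at 10 MHz: ~620 ft
print(round(optimal_delay_line_ft(3.3)))   # RG-174 at 10 MHz: ~263 ft
```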
Keep in mind, that calculation is about keeping cable losses under control. If you use the "optimum" cable length, you still will not be able to measure phase noise near the carrier. That can be a serious limitation of the delay line method. You will have to decide if it is a deal-breaker.
Understood. The objective of the delay line approach is to get an estimate of phase noise and phase noise bandwidth for each oscillator. I need to know if the short-term phase noise of the GPSDO (for which I have no specification) is sufficiently smaller than the other oscillators in order to use it as the reference in a two oscillator configuration. While I may not get phase noise close to the carrier for each oscillator, I should get enough information to reasonably conjecture that the GPSDO has (or does not have) sufficiently lower phase noise than the other oscillators to use it as a reference. The reason I am worried about this (at least for short-term stability characterization) is the GPSDO has an OCXO as the base oscillator that is corrected by the GPS signal periodically. Short-term its stability may be no better than the other OCXOs I have.
You're using the phrase "short term" a lot, yet it has different meanings in different contexts. Short term as it relates to a GPS disciplined oscillator is orders of magnitude longer than short term as it relates to delay line measurements.
I think you may find that none of the phase noise of the OCXO is "short term", as defined by the delay line. In other words, all the phase noise is close to the carrier and, therefore, not readily measured by the delay line method.
In regards to the delay line measurements, it is not my objective to establish the complete phase noise characterization of each oscillator using this technique. I just want to see if the GPSDO can be used as a reference in each type of experiment (i.e., short-term (seconds), medium-term (minutes) and long-term (hours)). While I may not be able to measure phase noise close to the carrier using the delay line approach for any oscillator, if the GPSDO has less phase noise than another oscillator away from the carrier, then I can conjecture it will have less phase noise near the carrier than the other oscillator.
For example, suppose I measure the phase noise for the GPSDO using the delay line technique and come up with -100 dBc @ 10 Hz, -125 dBc @ 100 Hz and -145 dBc @ 1 kHz. I then measure another oscillator using the delay line technique and come up with -90 dBc @ 10 Hz, -110 dBc @ 100 Hz, and -120 dBc @ 1 kHz. It is then likely that the phase noise of the GPSDO will be better than the other oscillator at Fourier frequencies nearer the carrier. This will give me confidence that I can use the GPSDO as the reference oscillator in the two oscillator test setup.
In the context of delay line measurements, 10 Hz - 10kHz from the carrier is near the carrier. You will not be able to measure this with a delay line that is a few hundred feet long.
The delay line will be almost 600 feet long.
Yes, I know. You might want to do a quick calculation of how much the amplitude of a 10 Hz sideband is attenuated when measured with a 600' delay line.
Here's another quick sanity check: Add 1 kHz sidebands to a 10MHz oscillator. Make a direct measurement of the modulation index with your spectrum analyzer. Then set up a delay line measurement and measure the apparent modulation index at various delay line lengths. Repeat the experiment with 10 kHz, 100 kHz, and 1 MHz sidebands and look for a trend.
For the first calculation, do you mean add a 10 Hz sideband in the frequency domain?
Sorry, my choice of the term attenuation was unfortunate, as it led to confusion. I was referring to attenuation of the signal due to the measurement technique, not due to cable losses.
1) Here is the calculation: If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line. (You can ignore cable losses for the calculation.) You don't actually have to do an exact calculation; a "back of the envelope" calculation will still be enlightening.
2) The sanity check is to actually measure some sidebands using both the spectrum analyzer and a delay line setup. For convenience, modulate the 10 MHz carrier at 1 kHz, 10 kHz, 100 kHz, and 1 MHz. The spectrum analyzer will easily resolve these sidebands and give you a confirmation of the modulation amplitude. Compare these results to those measured with the delay line.
I spent yesterday evening and this morning thinking about this and for the life of me, I cannot understand what you are getting at.
I plan to start out with a one oscillator test set up (using the delay line approach), to get an estimate of phase noise of each oscillator...
The delay line method is a perfectly good way to make measurements, but you will want to buy a giant spool of coax if you want to do it. Your (relatively) short delay line will only allow you to see higher frequency phase noise. A much longer delay line is needed if you want to measure phase noise near the carrier.
I think you may find that none of the phase noise of the OCXO is "short term", as defined by the delay line. In other words, all the phase noise is close to the carrier and, therefore, not readily measured by the delay line method.
In the context of delay line measurements, 10 Hz - 10kHz from the carrier is near the carrier. You will not be able to measure this with a delay line that is a few hundred feet long.
You might want to do a quick calculation of how much the amplitude of a 10 Hz sideband is attenuated when measured with a 600' delay line.
Here is the calculation: If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line. (You can ignore cable losses for the calculation.)
Follow the ball ...
1) Here is the calculation: If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line. (You can ignore cable losses for the calculation.) You don't actually have to do an exact calculation; a "back of the envelope" calculation will still be enlightening.
If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line.
I think the ball you have launched bounces all over the place and I am having trouble following it. You write: 1) Here is the calculation: If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line. (You can ignore cable losses for the calculation.) You don't actually have to do an exact calculation; a "back of the envelope" calculation will still be enlightening.
You suggest calculating the strength of a 10 Hz signal that modulates a 10 MHz carrier after transiting 600 feet of coax, but I am to ignore cable losses. What property of the coax am I to use to carry out this calculation? What affects the diminution of the modulating signal's strength over a coax other than attenuation due to its lumped elements (fundamentally, its resistance per unit length)?
I measured the diminution of a 1 kHz modulating signal on a 10 MHz carrier over 183 feet of coax and found that it loses strength at the same rate as the carrier. What makes 10 Hz modulating 10 MHz over 600 feet different?
What are you trying to get at in your proposal: If you have a 10 MHz carrier phase modulated at 10 Hz, calculate how large the detected (10 Hz) signal will be if you measure phase with a delay line setup and a 600' delay line.
If you mean the delayed signal amplitude will be lower than the non-delayed signal, that is pretty obvious (and is a result of cable losses you suggest I ignore). However, the AD8302 uses logarithmic amplifiers to ensure the two signals are at roughly the same amplitude before presenting them to the phase detector circuit. If you mean something else, just state it. Stop trying to mimic Aristotle.
So far, I am unconvinced that the delay line approach has any problem that the two oscillator approach doesn't have, other than a higher noise floor. In addition, I think digging the noise signal out of the modulated oscillator signal is by far the hardest problem to solve. This is true whether one uses the one or two oscillator setup.
Not to put too sharp a point on it, but:
sin(w*t) - sin(w*(t+dt)) = -2*cos(w*(2*t+dt)/2)*sin(w*dt/2)
So the result is a carrier-frequency cosine whose amplitude, sin(w*dt/2), goes to *zero* at certain delays.
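The sin(w*dt/2) factor above is exactly what makes a short delay line deaf to low-offset sidebands: the discriminator's response to a modulation sideband at offset f scales as |2*sin(pi*f*tau)|, where tau is the line delay. A back-of-the-envelope sketch for a 600' line, assuming a velocity factor of 0.66 (typical solid-PE coax such as RG-58; check your cable's data sheet):

```python
import math

C = 299_792_458.0          # speed of light, m/s
FEET_TO_M = 0.3048

def delay_line_sensitivity_db(f_offset_hz, length_ft, velocity_factor=0.66):
    """Relative response |2*sin(pi*f*tau)| of a delay-line discriminator, in dB.

    velocity_factor=0.66 is an assumption (typical solid-PE coax);
    cable losses are ignored, as in the discussion above.
    """
    tau = length_ft * FEET_TO_M / (velocity_factor * C)   # one-way delay, s
    return 20.0 * math.log10(abs(2.0 * math.sin(math.pi * f_offset_hz * tau)))

for f in (10.0, 100.0, 1_000.0, 10_000.0):
    print(f"{f:>8.0f} Hz offset: {delay_line_sensitivity_db(f, 600):6.1f} dB")
```

With these assumptions the delay is under a microsecond, so a 10 Hz sideband is detected roughly 85 dB down, and each decade closer to the carrier costs another 20 dB of sensitivity. This is the trend the sideband sanity-check experiment should reveal.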
In the context of delay line measurements, 10 Hz - 10 kHz from the carrier is near the carrier. You will not be able to measure this with a delay line that is a few hundred feet long.
I am going to go out on a limb and say you are wrong. There is no structural reason why the delay line (aka one oscillator) set up cannot measure phase noise arbitrarily close to the carrier.
According to Phase Noise and AM noise measurement in the Frequency Domain (https://tf.nist.gov/general/tn1337/Tn190.pdf), the noise floor of the one oscillator setup is reduced compared to the two oscillator approach. The graph given in support of this shows the noise floor rising as the Fourier frequency of the phase noise approaches that of the carrier. Unfortunately, the justification for this graph is another paper that I have not been able to acquire. So, there is no way to check the argument that led to that graph. However, the text makes no mention of a "structural problem" that leads to the result.
There are plenty of practical problems with measuring phase noise close to the carrier. However, these are not specific to the one oscillator set up. They apply equally to the two oscillator set up. I will describe them in a separate post.
So, on to the argument that the delay line/one oscillator measurement setup is not structurally deficient as a measurement technique. This argument follows your lead in assuming transmission lines are perfect (lossless and linear), and it assumes all electronic circuits are perfect (e.g., filters have cutoff frequencies that are exact - they do not roll off over a range of frequencies). In this regard, the argument assumes a bandpass filter that passes only the carrier frequency and the carrier frequency plus 1 Hz. This filter is placed on the oscillator output before the signal enters one side of the mixer and the delay line. So, the signal presented to the double balanced mixer on both sides comprises a narrow band limited to the carrier frequency and the carrier frequency plus 1 Hz. Since the delay line is perfect, the amplitudes of the generated and delayed signal are exactly equal.
If you disagree or find fault with this argument, I welcome you to provide a counter-argument or refutation. However, I am not interested in playing 20 questions with you. So, if you follow your recent habit of patronizing discourse, I probably will not respond.
But, in order to learn, I have to understand what I am doing and why... I don't want to make the same mistakes others have turned into knowledge.
... my point is that just telling someone new to the field to do something is useful, but limited. It is better to explain why they should do it - what is the experience on which the advice is based.
While I don't believe there are structural reasons why the single oscillator setup could not achieve this, there are plenty of practical reasons why this objective is outside the capabilities of a hobbyist, whether the one or two oscillator setup is used.
YMMV with cheaper two-channel SDRs (like red-pitaya or similar). Noise floor should scale with bit-depth, so if you are just into frequency comparisons of Rb-clocks/GPSDOs an 8-bit two-channel SDR might be enough. If H-masers is more of a thing for you then look at 14-bit or 16-bit SDRs. It would be good to come up with common gnu-radio and UI software for this, so that time-nuts worldwide could evaluate and compare the bang-for-buck of different SDR setups.
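The "noise floor should scale with bit-depth" point can be made concrete with the ideal ADC quantization-noise formula, SNR = 6.02*N + 1.76 dB, spread over the Nyquist bandwidth. The sample rates below are assumptions for illustration (125 MS/s is the red-pitaya's nominal rate), and real converters do worse than this ideal since ENOB is below the nominal bit count:

```python
import math

def ideal_noise_floor_dbc_hz(bits, sample_rate_hz):
    """Ideal quantization-noise floor relative to a full-scale carrier, dBc/Hz.

    SNR_fs = 6.02*bits + 1.76 dB for an ideal ADC; the noise is spread over
    the Nyquist bandwidth fs/2 (processing gain 10*log10(fs/2)).
    Best case only: real SDR front ends have ENOB < bits.
    """
    snr_db = 6.02 * bits + 1.76
    return -(snr_db + 10.0 * math.log10(sample_rate_hz / 2.0))

# Assumed sample rates, for illustration only
print(f" 8-bit @ 125 MS/s: {ideal_noise_floor_dbc_hz(8, 125e6):.1f} dBc/Hz")
print(f"14-bit @ 125 MS/s: {ideal_noise_floor_dbc_hz(14, 125e6):.1f} dBc/Hz")
print(f"16-bit @  10 MS/s: {ideal_noise_floor_dbc_hz(16, 10e6):.1f} dBc/Hz")
```

Even in the ideal case, each extra bit buys about 6 dB of floor, which is why Rb/GPSDO comparisons can get by with 8 bits while H-maser work wants 14-16.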
Stumbled on this thread about a Fluke counter that just might give to the info you need for oscillator analysis:
https://www.eevblog.com/forum/testgear/fluke-pm6690-12-digits-frequency-counter/
Lots of examples of what it can do later in the thread.
The HP 5335A has 0.1 deg phase resolution. It is 30 Hz - 1 MHz, though.
Great job documenting all of that.
I'm unclear on how you can tell that your measurement is above the _phase noise_ floor of the SSA?
I thought that you need a low frequency SA which itself has better phase noise performance than the DUT to do a PN measurement using a delay line.
The only way I know of to measure below the PN floor of the SA is to use NFE to subtract a pre-measured PN floor. I'm not an expert by any means so correct me if I'm wrong.
Thanks!
Reading up about it in the PDF you linked - note there is an extra character at the start and end of your link to this PDF.
http://hpmemoryproject.org/an/pdf/pn11729C-2.pdf
• There is a problem with my measurement methodology or its execution.
• There is a problem with the published results.
• The FE-5680A and FE-5650A are not identical except for packaging. The published results do not apply to the FE-5650A.
• The published results are for a freshly minted FE-5680A, whereas my experimental results are for a 15 year-old FE-5650A. Aging has deteriorated the phase noise performance of the latter.
Maybe try a phase-locked measurement instead of a frequency discriminator measurement. The calibration process for that should rule out any gain/loss problems in your test signal path.
Frequency Offset | dBc/Hz
1 Hz | -105
10 Hz | -130
100 Hz | -145
1 kHz | -150
10 kHz | -155
Frequency Offset | dBc/Hz
10 Hz | -100
100 Hz | -125
1 kHz | -145
I have an MV89 I got from China years ago and it seems to work fine (though I don't have anything that will measure its phase noise).
The main issue with them I seem to remember for Time Nuts postings is that the 10MHz devices are frequency doubled 5MHz devices:
https://www.mail-archive.com/time-nuts@febo.com/msg58269.html
The reliability issue is to do with a capacitor going bad but that shows up as a low level output I think.
It will be interesting to see what your measurements show.
The reason that the case of an MV89 is quite hot is that it has a large outer oven very close to the case. There's a picture of one opened up with the outer oven top removed on this page:
http://www.rbarrios.com/projects/MV89A/
So definitely no heatsink required.
The temperature control on an oven oscillator can fail, and then it can get really hot. That's one of the reasons that you have to be careful if you ever cover them.
Hi,
The frequency control pin - 'Uin' in the data sheet - on the MV89A I have is biased to half the reference voltage 'Uref'. In this case it's 4.96V for Uref and 2.48V on the Uin pin if nothing is connected to it.
Connecting Uin to 0V with a 1K resistor pulled the voltage on it down to 64mV (so 64uA current), and connecting it to Uref with the 1K takes it up to Uref less 63mV. Like most frequency control inputs on oscillators, it's quite high resistance so is not hard to drive.
The frequency change was -3.53Hz and +3.47Hz. Data sheet spec is >+/-2.5Hz, so most counters should see it fine.
[snip]
Connecting Uin to 0V with a 1K resistor pulled the voltage on it down to 64mV (so 64uA current), and connecting it to Uref with the 1K takes it up to Uref less 63mV. Like most frequency control inputs on oscillators, it's quite high resistance so is not hard to drive.
The frequency change was -3.53Hz and +3.47Hz. Data sheet spec is >+/-2.5Hz, so most counters should see it fine.
R_out_test (Ω) | V_out_test (V) | Output Impedance (Ω)
200.55 | 3.35 | 104.16 |
399.9 | 4.02 | 106.44 |
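The Output Impedance column is consistent with the usual loaded-voltage-divider formula, Z_out = R*(V_open/V_loaded - 1), if the unloaded output voltage was about 5.09 V (my inference from the numbers; the post does not state it). A sketch under that assumption:

```python
def output_impedance(r_load_ohm, v_loaded, v_open):
    """Source output impedance from the loaded voltage divider:
    V_loaded = V_open * R / (R + Z_out)  =>  Z_out = R * (V_open/V_loaded - 1).
    """
    return r_load_ohm * (v_open / v_loaded - 1.0)

V_OPEN = 5.09  # assumed unloaded output voltage (inferred, not stated in the post)

print(output_impedance(200.55, 3.35, V_OPEN))  # ~104 ohm, matching the table
print(output_impedance(399.9, 4.02, V_OPEN))   # ~106 ohm, matching the table
```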
R_Input_Test (Ω) | V2 (V) | V1 (V) | Iin | Zin (Ω)
1.0001K | 3.0095 | 2.9945 | 15uA | 199633 |
10.255K | 3.0095 | 2.9479 | 6.024uA | 489320 |
100.39K | 3.0095 | 2.7225 | 2.856uA | 952305 |
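The Zin column follows from Ohm's law on the series test resistor: Iin = (V2 - V1)/R, then Zin = V1/Iin. A sketch reproducing the table rows (results agree with the table to within its rounding of Iin):

```python
def input_impedance(r_series_ohm, v_source, v_pin):
    """DUT input impedance measured through a series resistor:
    I_in = (V_source - V_pin) / R, then Z_in = V_pin / I_in.
    """
    i_in = (v_source - v_pin) / r_series_ohm
    return v_pin / i_in

# Rows from the table above (R in ohms, voltages in volts)
for r, v2, v1 in [(1000.1, 3.0095, 2.9945),
                  (10255.0, 3.0095, 2.9479),
                  (100390.0, 3.0095, 2.7225)]:
    print(f"R={r:>9.1f} ohm -> Zin ~ {input_impedance(r, v2, v1):,.0f} ohm")
```

Note that the apparent Zin rises with the test resistor value, so the input is not a simple linear resistance over this range.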
Voltage Source | Frequency (Hz)
5V | 9999999.61 |
2.5V | 9999999.58 |
0V | 9999999.53 |
Adjust Pin Voltage | Frequency (Hz)
open | 10,000,151 |
0V | 10,000,147 |
2.5V | 10,000,151 |
5V | 10,000,154 |
Adjust Pin Voltage | Frequency (Hz)
open | 10,000,153 |
0V | 10,000,149 |
2.5V | 10,000,153 |
5V | 10,000,156 |
Adjust Pin Voltage | Frequency (Hz)
open | 10,000,153 |
0V | 10,000,150 |
2.5V | 10,000,153 |
5V | 10,000,157 |
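From the three repeated runs above, the tuning sensitivity can be estimated by averaging the end-to-end slope of each run (a rough sketch; it ignores any nonlinearity between the 0 V and 5 V endpoints):

```python
# Three repeated runs from the tables above: frequency (Hz) at adjust-pin
# voltages of 0 V, 2.5 V and 5 V (the "open" rows are omitted).
volts = [0.0, 2.5, 5.0]
runs = [
    [10_000_147, 10_000_151, 10_000_154],
    [10_000_149, 10_000_153, 10_000_156],
    [10_000_150, 10_000_153, 10_000_157],
]

# End-to-end tuning slope per run, then averaged
slopes = [(f[-1] - f[0]) / (volts[-1] - volts[0]) for f in runs]
avg_slope = sum(slopes) / len(slopes)
print(f"average tuning sensitivity: {avg_slope:.2f} Hz/V "
      f"(total pull {avg_slope * 5:.1f} Hz over 0-5 V)")
```

That works out to about 1.4 Hz/V, roughly ±3.5 Hz around the midpoint, consistent with the >±2.5 Hz pull range quoted from the data sheet earlier.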
When I first turn on the oscillator, my frequency counter shows 9,999,995Hz or thereabouts. As the oscillator warms up it reaches 9,999,999.5Hz.
When I first turn on the oscillator, my frequency counter shows 9,999,995Hz or thereabouts. As the oscillator warms up it reaches 9,999,999.5Hz.
Oops, I didn't pay enough attention to what you said there. The frequency change during warm up should be in the hundreds of hertz (it's 310Hz low when cold at 20°C ambient on my one), so that was a giveaway.
The HP5335A frequency error sounds like a fault rather than a calibration issue.
As you mentioned, the first thing to try would be an external reference.
I think that the HP5335A normally used a 10811 OCXO, which has a hole in the top for the frequency adjustment trimmer capacitor, but this will not adjust it by 150Hz.
The 10811 that I have is about 210Hz low when cold at 20°C ambient, so if your one is 150Hz low it might be worth checking if the oven is heating up - if it's there at all!
If you get to looking at the oscillator, there are various versions of the manual and other info online, such as:
http://ftb.ko4bb.com/getsimple/index.php?id=manuals&dir=HP_Agilent/HP_10811_Crystal_Oven_Oscillator
The thermal fuse and its connections often cause problems, so if the power is getting to it that might be a good place to start.
Sorry, I've only just seen this and now it is rather too late to reply!
Thanks for the info, jpb.
Given your experience with the MV89, I have a question. When I ran one of the MV89s for an hour or so, I noticed it became quite hot. I am still able to pick it up and hold it in my hand, but it is on the borderline of that. When you were working with yours, did you have a heatsink on it? If so, how did you attach it (as there are no screw holes for this purpose on the top)?
I'm looking forward to seeing how your phase noise measurements go dnessett.
I am thinking of getting a 16-bit Picoscope myself but I'm torn between this and an audio interface with word clock. Are you still happy with the Picoscope?
I wish it had 4 inputs instead of 2. I want to use it for ADEV measurements but for three-cornered-hat measurements I really need 3 or 4 inputs.
Also it would be nice to have an external clock reference option - I can't work out if this matters or not if one input is used for a reference.
I only use my Picoscope as a spectrum analyzer. It has the advantage of analyzing down to 1 Hz, whereas my Siglent 3032X only goes down to 9 kHz. For phase noise measurements, the Picoscope is crucial. Also, the 16 bits of the 4262 is necessary to get enough precision to capture low power phase noise values. I'm sorry to hear about your mother. I wish her a speedy recovery.
My mother broke her hip and I have had to dial back my work on this project in order to interact with doctors and help her with her rehabilitation. However, I hope to have some results in the next week or two.