Author Topic: Measuring periodic phase noise by phase structure and difference functions


Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
The first question will have to wait until tomorrow morning.

The second is easy.  There are 32 comparator outputs, each of which can be 0 or 1.  So the set of possible states among the comparator outputs can be represented as a 32-bit unsigned integer.

I built 3D models of rock properties over a 600 x 300 mile expanse of the Gulf of Mexico using 135 GB of unvetted wireline data from 19,000 wells,  drilling mudweights from 42,000 wells, directional surveys from 37,000 wells, initial temperature and pressure from 12,000 reservoirs and all the NOAA bathymetry and thermosonde data.  I estimated overburden stress, pore pressure, fracture pressure, temperature, effective stress, velocity, density and a bunch of other stuff.  The company research guys said it couldn't be done, but that they could do a better job.  They would have required 1900 man days of skilled labor that was not available at any price.  I wrote all the software and delivered it in 9 months to a brick-wall deadline.  I had 1 person working with me to do the QC.  It was a huge success.  I delivered a slew of related products no one had asked for because it was a matter of a day or two of work.

I was going to tease you after a post you made that said, "It can't be done."

I happen to think that a synthetic "golden" reference is possible.  I think the mathematics will be a considerable amount of work, but I think it can be done.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
This is really very interesting.

An additional comparator tracks the zero mean output of a physical noise source which has been low pass filtered so that it cannot change faster than the time required to read the word, fetch the count at that address, increment it and store it.  At every clock tick on the GPIO bus, you read the random noise comparator.  If it is high, you read the phase word and increment the appropriate counter.

So, in effect, you end up sampling on average at half the GPIO clock rate with a lot of what I'd call jitter or spread, right?


The simple answer is yes, except it would be at the full GPIO clock rate, not half.  I covered much of this in the other thread earlier this morning.
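
To make the bookkeeping concrete, here is a minimal sketch of that acquisition loop in Python (read_noise_comparator() and read_phase_word() are hypothetical stand-ins for the actual hardware access, not real driver calls):

Code: [Select]
from collections import Counter

def acquire(n_ticks, read_noise_comparator, read_phase_word):
    """On each GPIO clock tick, sample the noise comparator; when it is high,
    read the 32-bit phase word and increment the count at that address."""
    histogram = Counter()                      # sparse: a dense 2^32 table would need ~16 GB
    for _ in range(n_ticks):
        if read_noise_comparator():            # randomly gated sampling
            histogram[read_phase_word()] += 1  # one counter per observed phase word
    return histogram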

The data rate and volume are problematic.  There is also the question of resolving variations in phase noise over a cycle as described by the term cyclostationarity with which this all began.

It seems to me that if we multiply one of the 8 oscillators up to a high order harmonic and then use that with a counter to select narrow sampling windows over the course of a cycle we can resolve statistics with very fine granularity. However, absent an analysis of that operation I cannot say if that adds any new variables to our system of equations.  If it does, then the computational complexity goes up a good bit.

There are alternate implementations which need to be considered.  For example, using a quadrature mixer to generate IQ streams at baseband.   A possibility would be to have a 1:4 frequency multiplier (more variables) and then pairwise use one oscillator multiplied by 4x as the sampling clock for a Tayloe mixer (more variables) and another oscillator as the signal.

In the implementation I outlined in the other thread using ADCMP581s I neglected to account for jitter in the comparator responses.  This suggests to me that we might need a switch matrix to get enough equations to account for the comparator error contribution.

To restate the fundamental questions:

Is there a physical configuration which will resolve all the errors in a reference oscillator without requiring a golden reference?

Has such a configuration been discovered yet?
 

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
Here's a short example of the demodulated IQ from my light scattering apparatus:

https://youtu.be/PtoO8QIEndI

The dominant noise is due to random diffusion of 150nm diameter particles. The line broadening in the Doppler spectrum is of the order of 10Hz. Buried in there is a periodic phase change of amplitude 10mrad and frequency ~500Hz. The phase difference function averages the noise to zero allowing the amplitude of the periodic phase change to be determined with high confidence and fidelity.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6146
  • Country: ro
The principal figure of merit for a clock is that at various times in the future it is correct.  This requires in turn  a high degree of frequency stability.  So long as the phase noise is zero mean over the shortest period you wish to measure with the clock and this holds true over the full range from the shortest to the longest, the phase noise is irrelevant to time keeping.

That makes sense.



The claim in bold is false.

In a clock, the oscillator's phase noise will accumulate over time as an imprecision in time keeping, even if the average value of the phase noise is zero.

I did not make the statement in bold.   My statement was a sentence with an important constraint which you wish to elide and then claim I am wrong.   I included that constraint precisely because the statement in bold is false.

Would you please show mathematically why a clock meeting the constraint I imposed would lead to an accumulation of errors?  I shall be very interested in your argument (aka mathematical proof).

By clock, I will understand a time keeping device.

As an example, let's consider a clock made from an oscillator followed by a counter.  The counter will count the number of oscillations.  By reading the counter, we can measure time.

To simplify this thought experiment, let's consider a square wave oscillator followed by a digital counter that counts the number of rising edges.

By phase noise, I will understand that the next rising edge of the oscillator arrives at the counter slightly earlier, or slightly later, than expected from an ideal oscillator (ideal as in constant frequency and no phase noise).


For the first case, let's consider the clock has an ideal oscillator: constant known frequency, and no phase noise.  We can measure the time by reading the counter, and compare the number with another reference clock.  The maximum error will be +/-1 count.  This error will never increase with time, because we assumed our oscillator is ideal, and the reference clock is also ideal.

Now, let's add some phase noise.  Let's say we have a true random number generator that generates only -1 and +1, with 50/50 chances.  The average of many random numbers will converge to zero.  We will generate a random number for each of the oscillator's periods.  If the random number is +1, we artificially move the rising edge of the oscillator to the right (let's say +1us).  If the random number is -1, we move the edge to the left (-1us).

On average, the period of the oscillator stays the same, because our random +/-1 has exactly 50/50 chances.

At first look, we will be tempted to say that our clock will not be affected, because the average frequency of the oscillator stays unchanged, but this is NOT true: the clock will be affected.  The later we read our clock (counter), the more erroneous the readings we get.  The phase noise accumulates with time.

To understand why, note that shifting one edge also shifts every edge that follows it, so the per-period errors add up.  Now consider the worst possible scenarios.  Let's say we read the counter after 100 rising edges.  The worst possible error will be +100us or -100us.  If we read the counter after 1000 edges, then the worst possible error will be +1000us or -1000us.

For an oscillator with white phase noise, the errors will have a Gaussian distribution.  The bell of errors is always centered on zero, but the shape of the bell gets wider and wider with time, and that's because of that zero mean phase noise.

In conclusion, measurement errors caused by phase noise increase with time.

This will affect the time keeping (for either long or short time), as well as any phase measurement of the oscillator.  The later we measure, the bigger the errors.
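
Here is a quick numerical sketch of that thought experiment in Python (just an illustration, with the same +/-1us per period and made-up run counts):

Code: [Select]
import numpy as np

# Each period is stretched or shrunk by 1 us at random; the clock reading error
# is the running sum of those per-period errors, so its spread widens with the
# number of edges counted (worst case ~N, typical ~sqrt(N)).
rng = np.random.default_rng(0)
n_runs, n_edges = 10000, 1000
steps = rng.choice([-1.0, 1.0], size=(n_runs, n_edges))   # +/-1 us per period
error = np.cumsum(steps, axis=1)                          # accumulated clock error, us

for n in (100, 1000):
    print(f"after {n} edges: spread (std) = {error[:, n - 1].std():.1f} us, "
          f"worst case = +/-{n} us")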



Here is an experiment to check the above conclusion:
- We have a DDS generator (Rigol DG4102) and an oscilloscope (Rigol DS1054Z), each with their own internal oscillator, and their own phase noise
- the DDS generates 10ns pulses every 500ms
- the oscilloscope visualizes the pulses in 3 situations:
1. First pulse, at the trigger moment (video between minute 00:07 and 00:10)
2. The second pulse, at 500ms after the trigger moment (video between minute 01:57 and 03:10)
3. Third pulse, at 1s after the trigger moment (video between minute 00:35 and 01:43)

The blue, orange and red spikes are just fixed markers, for reference only. Ideally they would be a single spike in the center of the grid, but they sit at different positions because of a small frequency difference between the DDS's and the oscilloscope's oscillators.

The useful signal (the 10ns pulse) is the green trace.

The video is unedited, so please look only at the specified moments, and ignore the periods where I was changing the oscilloscope's settings.

In case 1, the pulse is stable; in case 2, the pulse WIGGLES around the 500ms mark; in case 3, the pulse wiggles EVEN MORE around the 1 second mark. The errors increase with time.





Of course, this error accumulation caused by the phase noise can be alleviated by averaging repeated measurements, but repeating the measurement is not always possible.  Even if it were, the effect of phase noise is still important in deciding how many measurements we need to average.

Now, all these are nothing more than my own intuitive explanation, using only the time domain and common sense about probabilities. I still haven't provided a mathematical demonstration.  That would probably amount to demonstrating the Central Limit Theorem, but then I would just be copy/pasting, and it would still make no sense without all of the above.
 
The following users thanked this post: dnessett

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
For an oscillator with white phase noise, the errors will have a Gaussian distribution.  The bell of errors is always centered on zero, but the shape of the bell gets wider and wider with time, and that's because of that zero mean phase noise.

In conclusion, measurement errors caused by phase noise increase with time.

I cannot agree with this. The descriptive statistics, such as the mean and standard deviation, for a truly stochastic process are temporally invariant.

The video in my preceding comment shows phase noise with a Gaussian distribution. Whether I measure that distribution for 1s, 1min or 1hr, I get the same standard deviation. There's a whole industry of nanoparticle sizing that exploits that.

Perhaps this difference is down to the relative timescales.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
See attached
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6146
  • Country: ro
JohnnyMalaria, my long answer was only because of rhb's request to justify why I don't agree with his statements. Maybe I misunderstood the context.

In my experiment, the phase noise is also constant in time, but measurements of absolute time (or phase) become more and more spread around the expected mean value as time passes; the phase noise leads to a jitter that increases with time. The experiment was replicated twice, with more precise instruments (a WaveRunner HRO 64Zi with a 33220 generator, and a Tektronix MDO3104 with its internal probe compensation oscillator), so I think the increasing jitter seen was not just an artifact of my cheap Rigol instruments.

I looked at the PDF you attached, but only briefly browsed it, so I couldn't say I understood the meaning of your oscilloscope video.

Anyway, if what I was talking about does not affect your type of measurement, then sorry for the offtopic.


« Last Edit: June 20, 2018, 09:15:35 pm by RoGeorge »
 

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
I looked at the PDF you attached, but only briefly browsed it, so I couldn't say I understood the meaning of your oscilloscope video.

Anyway, if what I was talking about does not affect your type of measurement, then sorry for the offtopic.

It's not a problem - this is very interesting for me since it takes something I'm used to in one discipline and compares it to something very similar in another, but obviously with very different applications and challenges.

Briefly, the phase noise in the video clip is due to randomly diffusing particles in a liquid scattering laser light. The phase of the scattered light depends on the location of the particles resulting in the 2D random walk on the scope (the X and Y scope channels are the I and Q demodulated signals from the photodetector). The particles are electrically charged and subject to an alternating electric field. This motion contributes a sinusoidal phase change which you would call periodic phase noise (?). However, it is very small and impossible to discern by eye but the data processing algorithm pulls it out very readily.

I assume that for what you would consider clocks or oscillators, Gaussian noise isn't as dominant over periodic phase noise as it is in my case?
 

Offline tomato

  • Regular Contributor
  • *
  • Posts: 206
  • Country: us

Briefly, the phase noise in the video clip is due to randomly diffusing particles in a liquid scattering laser light. The phase of the scattered light depends on the location of the particles resulting in the 2D random walk on the scope (the X and Y scope channels are the I and Q demodulated signals from the photodetector). The particles are electrically charged and subject to an alternating electric field. This motion contributes a sinusoidal phase change which you would call periodic phase noise (?). However, it is very small and impossible to discern by eye but the data processing algorithm pulls it out very readily.

I assume that for what you would consider clocks or oscillators, Gaussian noise isn't as dominant over periodic phase noise as it is in my case?

Your technique is very interesting, but I don't understand why you refer to "periodic phase noise."  It isn't noise; it's a signal arising from the application of an AC electric field, and its frequency and amplitude are largely determined by the experimenter.  It isn't really appropriate to equate it to the underlying phase noise of an oscillator.
 

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
Your technique is very interesting, but I don't understand why you refer to "periodic phase noise."  It isn't noise; it's a signal arising from the application of an AC electric field, and its frequency and amplitude are largely determined by the experimenter.  It isn't really appropriate to equate it to the underlying phase noise of an oscillator.

To be honest, the terminology is a mess - part of the joys of interdisciplinary discussion :)

I have usually referred to it as periodic phase variation/variability but I keep seeing it referred to as periodic phase noise, too. I'm still trying to get a grip on the terminology used in your area.

What I really call the different contributions to the signal are collective oscillatory motion and random motion. There is a third, collective linear motion, too, due to phenomena such as settling or convection. There are a few million particles, each contributing to the phase. It is assumed that they all move with the same velocity in the applied field, hence collective. Each particle contributes individually to the random noise.

The phase difference function, f(tau), simply calculates the phase difference (duh!) across one cycle of the electric field, many times, starting at a fixed point on the field, t0, i.e., f(tau) = <phi(t0 + tau) - phi(t0)>. In the original version of this technique, the phase difference is weighted by the amplitude of the signal, too. This is to compensate for when the amplitude goes to zero, at which point phi is indeterminate. Today I don't bother, since for my experiments I get less noisy phase difference functions without the amplitude weighting. The signal very rarely approaches zero amplitude.

The phase structure function is f(tau) = <[phi(t + tau) - phi(t)]^2> (i.e., the second moment of the phase difference). It isn't synchronized with the field, and the random noise does contribute. Because it isn't synchronized, you don't need a priori knowledge of the frequency (which you do need for the difference function) and, hence, you can determine the frequency as long as the random noise isn't too dominant. The structure function can be constructed synchronously, but the equation is a bit more complicated.
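
For anyone who wants to try this on their own data, here is a rough numpy sketch of the two estimators (my own translation of the definitions above, not production code; with I and Q channels, phi would come from np.unwrap(np.arctan2(Q, I)) first):

Code: [Select]
import numpy as np

def phase_difference_function(phi, period_samples):
    """<phi(t0 + tau) - phi(t0)>: average phase change across one field cycle,
    starting from the same fixed point t0 in every cycle."""
    n_cycles = len(phi) // period_samples - 1
    t0 = np.arange(n_cycles) * period_samples             # start of each cycle
    taus = np.arange(period_samples)                       # lags within one cycle
    return (phi[t0[:, None] + taus] - phi[t0[:, None]]).mean(axis=0)

def phase_structure_function(phi, max_lag):
    """<[phi(t + tau) - phi(t)]^2>: second moment of the phase increment,
    averaged over all start times (not synchronized to the field)."""
    return np.array([np.mean((phi[lag:] - phi[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])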
« Last Edit: June 21, 2018, 12:07:54 am by JohnnyMalaria »
 

Offline dnessett

  • Regular Contributor
  • *
  • Posts: 242
  • Country: us

By clock, I will understand a time keeping device.

As an example, let's consider a clock made from an oscillator followed by a counter.  The counter will count the number of oscillations.  By reading the counter, we can measure time.

<Elided explanatory text>


Your explanation was concise, clear, coherent and very helpful. However, I have a couple of questions.

1. In Figure 3 of Rutman and Walls' paper, Characterization of Frequency Stability in Precision Frequency Sources, which compares the traditional standard deviation of fractional frequency data with the "performance of practical frequency sources", there is a knee, below which the traditional SD and the "practical performance" are identical. They diverge after log(tau) increases past a particular value, not indicated on the figure. From your explanation, one would surmise that the traditional SD and the "practical" clock SD would diverge rather quickly. Do you have any insight into why this doesn't appear to be true in that figure?

2. One problem I have with Allan Variance is no one has yet clearly articulated what you do with it. Here is a fictional, and I freely confess satirical, story that illustrates my concern. I am offering this not to offend anyone, but rather in an attempt to get my point across in a way that I have not yet succeeded using conventional arguments.

Suppose I have a next-door neighbor who just bought a new car. I see him in his front yard and say,

"Hi neighbor, I see you just bought a new car."

He replies, "Yep, its a beaut. I looked at quite a few models before selecting this one and I am very happy with my choice."

"Yes, I see that. But tell me, what convinced you to select this one?"

"Oh, there were many reasons, but the principle one was that it has a BiddleyBoop rating of 1.4*10^-11."

I am a bit puzzled and say, "A BiddleyBoop rating, huh? What exactly is that?"

He smiles and says, "It is a way to characterize the superiority of one car over another."

I am even more puzzled and say, "Uh huh, Uh huh. But, how does it relate to your driving experience?"

He frowns and says, "Well, the other models I looked at had BiddleyBoop ratings on the order of 10^-10, so the driving experience they deliver is inferior."

I am now completely confused and say, "But, how does a BiddleyBoop rating on the order of 10^-10 mean that those cars provide an inferior driving experience when compared to the model you bought?"

My neighbor looks at me like I am somewhat retarded and says, "Isn't it obvious? The model I bought is over 10 times better than the other models as measured by their BiddleyBoop ratings."
« Last Edit: June 21, 2018, 08:26:50 pm by dnessett »
 

Offline tomato

  • Regular Contributor
  • *
  • Posts: 206
  • Country: us

1. In Figure 3 of Rutman and Walls' paper, Characterization of Frequency Stability in Precision Frequency Sources, which compares the traditional standard deviation of fractional frequency data with the "performance of practical frequency sources", there is a knee, below which the traditional SD and the "practical performance" are identical. They diverge after log(tau) increases past a particular value, not indicated on the figure. From your explanation, one would surmise that the traditional SD and the "practical" clock SD would diverge rather quickly. Do you have any insight into why this doesn't appear to be true in that figure?

  Figure 3 does show them diverging.

Quote
2. One problem I have with Allan Variance is no one has yet clearly articulated what you do with it. Here is a fictional, and I freely confess satirical, story that illustrates my concern. I am offering this not to offend anyone, but rather in an attempt to get my point across in a way that I have not yet succeeded using conventional arguments.

Suppose I have a next-door neighbor who just bought a new car. I see him in his front yard and say,

"Hi neighbor, I see you just bought a new car."

He replies, "Yep, its a beaut. I looked at quite a few models before selecting this one and I am very happy with my choice."

"Yes, I see that. But tell me, what convinced you to select this one?"

"Oh, there were many reasons, but the principle one was that it has a BiddleyBoop rating of 1.4*10^-11."

I am a bit puzzled and say, "A BiddleyBoop rating, huh? What exactly is that?"

He smiles and says, "It is a way to characterize the superiority of one car over another."

I am even more puzzled and say, "Uh huh, Uh huh. But, how does it relate to your driving experience?"

He frowns and says, "Well, the other models I looked at had BiddleyBoop ratings on the order of 10^-10, so the driving experience they deliver is inferior."

I am now completely confused and say, "But, how does a BiddleyBoop rating on the order of 10^-10 mean that those cars provide an inferior driving experience when compared to the model you bought?"

My neighbor looks at me like I am somewhat retarded and says, "Isn't it obvious? The model I bought is over 10 times better than the other models as measured by their BiddleyBoop ratings."

You left out the part where your neighbor handed you multiple articles about BiddleyBoop ratings, including several written by Mr. Biddley himself.
 

Offline thermistor-guy

  • Frequent Contributor
  • **
  • Posts: 365
  • Country: au
...
You left out the part where your neighbor handed you multiple articles about BiddleyBoop ratings, including several written by Mr. Biddley himself.

As a timenut novice, I found the BiddleyBoop article on Wikipedia helpful, particularly section 2 on interpretation of value.
https://en.wikipedia.org/wiki/Allan_variance
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6146
  • Country: ro
I assume that for what you would consider clocks or oscillators, Gaussian noise isn't as dominant over periodic phase noise as it is in my case?

Indeed, terminology can vary widely. I can see now why you used "phase noise" in the title, but before, I thought you meant something else. So now I'm not sure I understand your question. Also, you seem to need the waveform from the photodetector in order to find both the phase and the amplitude of each spectral component, yet you are saying you use a Spectrum Analyzer to read the photodetector. By Spectrum Analyzer I understand a device that is not aware of the signal's waveform: an SA does not know the phase of each spectral component, it knows only the amplitude information. An SA cannot give I and Q.

To avoid all this, I found another way of describing the accumulation of errors we didn't agree on before. This time in just one small paragraph, without specialized terminology:

Imagine you want to walk down an alley. Straight line, one direction only, one step at a time. Your step is about one foot. To be more precise, 1 foot +/- 1 inch. Now, start walking: 120 steps. How far are you now? On average, you will be 120 feet away from the starting point, but because of those +/- 1 inch at each step, you could be anywhere between 110 and 130 feet. The error is +/- 120 inches (+/- 10 feet). But if you walk 1200 steps, then the error could be +/- 1200 inches (+/- 100 feet). That is the accumulation I was talking about.

1 foot would be the equivalent of the average frequency (for my oscillator)
+/- 1 inch would be the equivalent of the phase noise of my oscillator
120 feet, for my clock (note that a clock is not the same as an oscillator: a clock is made from an oscillator and a counter, so a time keeping device), would be the value found in my counter (i.e., what time does your clock show?)
110...130 feet would be the real time (the time indicated by an ideal clock)

Does this accumulation affect your measurement? I don't know, you tell me.  ^-^
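
For concreteness, the numbers for the analogy (a small Python check of my own, assuming the +/- 1 inch errors are independent from step to step):

Code: [Select]
import math

# Worst case grows like N (every error in the same direction); the typical
# (RMS) spread of independent +/-1 inch errors grows like sqrt(N).
for n_steps in (120, 1200):
    worst = n_steps * 1
    rms = math.sqrt(n_steps) * 1
    print(f"{n_steps} steps: worst case +/-{worst} in (+/-{worst // 12} ft), "
          f"typical spread ~{rms:.0f} in")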

In the meantime I spent about one hour reading your PDF thesis (I am only at 3.5.5. now - about half of the document). Very interesting reading. For now, the various techniques you described seem to converge to a sort of Laser Doppler Vibrometry scanning, and you want to study small moving particles during an electrophoresis process (in contrast with the study of a macroscopic vibrating object, like in vibrometry).

From now on, expect more questions than answers from me.  ^-^

So far, the device looks like an interferometer to me. Since the paths for both rays of light are roughly the same length, I guess you won't care much about the phase noise in the laser source. In the walking analogy, that would be like taking no steps at all (because ideally the interferometer arms are identical). So, with no sample to analyze, you should see no signal. In reality, the arms are not exactly the same, so you might see some noise because of the laser phase noise (accumulated because of the difference in the lengths of the two light paths). Since I have no experience with lasers, I have no idea how big this noise would be for your setup.

About electrophoresis, I know basically nothing, so I will assume your setup is with some calibrated gel and a DC current passing through. In my understanding, this setup is for separating particles (or DNA chunks) by their sizes (the smaller particles move faster through the gel, so smaller chunks of DNA will travel farther in a given time). Please correct me where I'm wrong.

The first question is: why do you use AC instead of DC in the electrophoresis process? What advantages does AC bring you? Do the particles still migrate (with time) in one direction only, like in DC gel electrophoresis?

I'm not sure yet what the photodetector reads: does it read only the beating effect between the reference light and the Doppler-shifted light (you mentioned somewhere that the non-linearity of the photo sensor acts as the mixer in a heterodyne), or does it read the integral of the interference fringes that are moving over the surface of the detector, or both?

« Last Edit: June 22, 2018, 10:12:57 am by RoGeorge »
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6146
  • Country: ro
I have a couple of questions.

Glad you liked the explanation, thank you. I don't know the paper you linked, and I have no time for it right now. Maybe it seems a contradiction because I didn't explain it well enough. In the meantime, I found a simpler way to say why the error range increases, and in what situation (see the paragraph with "1 step = 1 foot +/- 1 inch" in my previous post).

About the Allan variance. I never used it, but I'm sure there might be a reason for its existence, so don't assume it's useless.

Offline dnessett

  • Regular Contributor
  • *
  • Posts: 242
  • Country: us
About the Allan variance. I never used it, but I'm sure there might be a reason for its existence, so don't assume it's useless.

For someone who hasn't used it, you certainly did a better job explaining its necessity than those who, at least by the tone of their posts, presume themselves to be experts.

With your indulgence, I would like to summarize the situation in broad terms and welcome your comments. Oscillators can be used in a number of applications, some of which are time-keeping, Doppler radar, and spread spectrum communications, to name a very few. Allan variance is important in time-keeping because the errors inherent in oscillator frequency fluctuations accumulate over time when the oscillator drives a counter, implementing a clock. However, in applications such as Doppler radar and spread spectrum communications, this is not the case. Using your analogy of an alley, for these applications you take a few steps, use the measurements you make in that interval, and then return to the starting point for the next round. Long-term oscillator errors don't accumulate, since you are not integrating the frequency fluctuation averages over a long period of time.
 

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
I assume that for what you would consider clocks or oscillators, Gaussian noise isn't as dominant over periodic phase noise as it is in my case?

Indeed, terminology can vary widely. I can see now why you used "phase noise" in the title, but before, I thought you meant something else. So now I'm not sure I understand your question. Also, you seem to need the waveform from the photodetector in order to find both the phase and the amplitude of each spectral component, yet you are saying you use a Spectrum Analyzer to read the photodetector. By Spectrum Analyzer I understand a device that is not aware of the signal's waveform: an SA does not know the phase of each spectral component, it knows only the amplitude information. An SA cannot give I and Q.


When I started my PhD, I inherited a spectrum analyzer along with the laser + optics mounted on a rail and balanced on motorbike inner tubes. Really. After months of trying to get anything meaningful, I tried a different approach which involved an expensive lock-in amplifier to demodulate the detector signal and, hence, get at the amplitude and phase. Today, I actually do both the spectral analysis and the phase analysis simultaneously on the same detector signal. Each data analysis method has its pros and cons, and together they can give a lot of insight into the properties of my sample.

Quote
To avoid all this, I found another way of describing the accumulation of errors we didn't agree on before. This time in just one small paragraph, without specialized terminology:

Imagine you want to walk down an alley. Straight line, one direction only, one step at a time. Your step is about one foot. To be more precise, 1 foot +/- 1 inch. Now, start walking: 120 steps. How far are you now? On average, you will be 120 feet away from the starting point, but because of those +/- 1 inch at each step, you could be anywhere between 110 and 130 feet. The error is +/- 120 inches (+/- 10 feet). But if you walk 1200 steps, then the error could be +/- 1200 inches (+/- 100 feet). That is the accumulation I was talking about.

1 foot would be the equivalent of the average frequency (for my oscillator)
+/- 1 inch would be the equivalent of the phase noise of my oscillator
120 feet, for my clock (note that a clock is not the same as an oscillator: a clock is made from an oscillator and a counter, so a time keeping device), would be the value found in my counter (i.e., what time does your clock show?)
110...130 feet would be the real time (the time indicated by an ideal clock)

Does this accumulation affect your measurement? I don't know, you tell me.  ^-^

My signal is quite similar to an audio signal. There is the modulation frequency due to the frequency difference between the laser "arms" - imagine a few kHz steady tone. The random diffusion is hiss and the phase oscillation is a weak tremolo. I typically sample at a few kHz over a 1 second window and do this repeatedly for perhaps a minute. I only need, say, one part in a thousand accuracy, which is quite a different realm from the RF clock/oscillator world.
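
A rough synthetic model of that kind of record (my own sketch, with illustrative numbers loosely matching the description above: a tone, a diffusion random walk on the phase, and a ~10 mrad periodic term). The difference/structure functions sketched earlier, applied to np.unwrap(np.arctan2(q_sig, i_sig)) with the carrier ramp removed, are then one way to go after the 10 mrad term:

Code: [Select]
import numpy as np

fs = 4000.0                                    # sample rate, Hz
t = np.arange(int(fs * 1.0)) / fs              # one 1 second window
f_mod, f_field = 1000.0, 500.0                 # modulation tone and field frequency, Hz

rng = np.random.default_rng(1)
phi_diffusion = np.cumsum(rng.normal(0.0, 0.02, t.size))    # random-walk phase ("hiss"), rad
phi_periodic = 0.010 * np.sin(2 * np.pi * f_field * t)      # ~10 mrad periodic term ("tremolo")

phase = 2 * np.pi * f_mod * t + phi_diffusion + phi_periodic
i_sig, q_sig = np.cos(phase), np.sin(phase)                  # demodulated I and Q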

Quote
In the meantime I spent about one hour reading your PDF thesis (I am only at 3.5.5. now - about half of the document). Very interesting reading. For now, the various techniques you described seem to converge to a sort of Laser Doppler Vibrometry scanning, and you want to study small moving particles during an electrophoresis process (in contrast with the study of a macroscopic vibrating object, like in vibrometry).


Exactly. There are a lot of techniques derived from the same fundamental principle.

Quote
From now on, expect from me more question than answers.  ^-^

So far, the device looks like an interferometer to me. Since the paths for both rays of light are roughly the same length, I guess you won't care much about the phase noise in the laser source.

That's right. Purists say you need to calculate the phase coherence of the beam to figure out things like where to place focusing lenses etc. I don't bother. I just eyeball it and it works very well :)

Quote
About electrophoresis, I know basically nothing, so I will assume your setup is with some calibrated gel and a DC current passing through. In my understanding, this setup is for separating particles (or DNA chunks) by their sizes (the smaller particles move faster through the gel, so smaller chunks of DNA will travel farther in a given time). Please correct me where I'm wrong.

Electrophoresis is a general term that just means migration of charged particles/droplets in a fluid toward an electrode of opposite charge. For me, it is microelectrophoresis - just particles in a liquid, e.g., milk, paint etc.

Quote
The first question is: why do you use AC instead of DC in the electrophoresis process? What advantages does AC bring you? Do the particles still migrate (with time) in one direction only, like in DC gel electrophoresis?


Gel electrophoresis requires a DC field since the goal is to separate the different species. For my method, there is a small volume that the detector observes, perhaps 1 mm^3. With a DC field, the particles would migrate in one direction and eventually deplete the volume that the detector is looking at. Also, in the presence of salts, electrolysis can occur at the surface of the electrodes, causing highly undesirable chemical reactions, heating, turbulence, etc. Using an AC field helps reduce these effects. The higher the frequency, the better the reduction, but at a price. What I've built overcomes what many have been saying for decades is impossible. I can't go into the details about that in a public forum, though (IP and all that).

Quote
I'm not sure yet what the photodetector reads: does it read only the beating effect between the reference light and the Doppler-shifted light (you mentioned somewhere that the non-linearity of the photo sensor acts as the mixer in a heterodyne), or does it read the integral of the interference fringes that are moving over the surface of the detector, or both?

So there are two optical geometries that can be used but are mathematically equivalent. One resembles holography in a way. Light is scattered by the moving particles and picked up by the detector. A diffuse second 'reference' source of light (from the same laser) is directed to the detector. Hence, both heterodyne to give a signal containing all the Doppler shifts from each particle. However, this doesn't yield the direction of the motion. Hence, one of the sources of light is frequency-shifted relative to the other, so that a stationary particle appears as a particle moving with a Doppler shift equal to the frequency offset between the two light sources. This is basically modulating the signal just as you would do with a lock-in amplifier, so that subsequent demodulation allows you to recover weak signals buried in a lot of noise. So, the detector signal is basically a Gaussian (due to diffusion) centered around whatever the modulation frequency is, plus whatever motion is occurring due to the electrophoresis. The other optical arrangement is conceptually easier to understand and is the one I use. The two light sources intersect in the sample, forming interference fringes. Due to the frequency shift, the fringes move. They "sweep" past the particles and the light from the particles goes to the detector.

My question/interest

Hopefully, it's evident that the magnitudes of the frequencies, noise, etc. are quite different for my experiments than in the world of RF oscillators. I'd like to know if IQ data generated using the various methods I've read about on other threads could be processed with my analysis methods and used to differentiate between different oscillators. If so, would it help pragmatic selection given a choice of oscillators? Or maybe provide a simple way to characterize long-term changes in a given oscillator's performance? I say simple because implementation of the analysis method is straightforward. Does anyone have such IQ data that I could play with?
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Software defined radios (SDRs) will record IQ streams to disk for later playback.  The SDRplay RSP2 is quite capable and the price is pretty nominal.  It will accept a GPSDO reference input and goes to 2 GHz, so it's actually a very good choice for digitizing the output of an oscillator.  I should have mentioned that previously, but overlooked it.

The IQ signal is just another name for the analytic signal, so it should all be fairly familiar territory, or at least once was when you were working on your dissertation.

I'll try to work in collecting a bit of data for you.  I'm not sure I'll have time to set up the GPSDO, but I can reference it to my 8648C, which has the high stability option, and then record the 33622A signal, which is at least 10x poorer since it is not equipped with the high stability option.  I was able to view phase shifts using Lissajous figures when comparing it to the 8648C output.
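
If it helps once data does arrive: a recorded IQ stream is usually just interleaved samples, and the instantaneous phase falls straight out of the angle of I + jQ. A sketch (the file name is hypothetical, and the dtype/layout assumption must be adjusted to whatever the recording software actually writes):

Code: [Select]
import numpy as np

def load_iq(path, dtype=np.int16):
    """Assumes interleaved I,Q samples of the given dtype."""
    raw = np.fromfile(path, dtype=dtype).astype(np.float64)
    return raw[0::2] + 1j * raw[1::2]          # complex baseband samples

iq = load_iq("oscillator_capture.iq")          # hypothetical file name
phase = np.unwrap(np.angle(iq))                # instantaneous phase, radians
n = np.arange(phase.size)
ramp = np.polyval(np.polyfit(n, phase, 1), n)  # best-fit carrier ramp
phase_residual = phase - ramp                  # phase wander left over for analysis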
 

Offline dnessett

  • Regular Contributor
  • *
  • Posts: 242
  • Country: us
As a timenut novice, I found the BiddleyBoop article on Wikipedia helpful, particularly section 2 on interpretation of value.
https://en.wikipedia.org/wiki/Allan_variance

I apologize for not responding to this sooner. In order to avoid hijacking this thread for a detailed discussion of Allan Variance, I have responded in a more appropriate place - here. JohnnyMalaria started this thread in order to discuss a topic that he originally posted in another thread, specifically An advanced question - sampling an oscillator's signal for analysis. That was courteous, and it would be discourteous of me to attempt to take over his thread in order to discuss this topic. I invite you to respond to my comments in the post referenced above.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
A couple of comments related to previous posts:

Typical spectrum analyzers do not measure phase and so a vector network analyzer is used for phase measurements.  But that is a rather different sense of measuring phase.  The HP E4406A transmitter tester will display IQ constellations and some versions  have IQ baseband inputs.

Consider d(t) = a*t + e(t), where a is a constant and e(t) is a random, zero mean Gaussian process.  By definition, the expected value of d(t) is a*t for all t.  While the error *is* accumulated, the expectation of the sum of e(t) over any period of time is zero.  If that is not the case, then e(t) fails to meet the definition.  While any integration of e(t) over some period will in general be non-zero, the errors will generally cancel.  Any measurement of d(t) will be a*t to within the variance of e(t).

In short, if phase errors in a clock accumulate, then the error process is not zero mean. I included the constraint on being zero mean over the measurement period specifically to address the cyclostationary case.

Edit: Added statement with regard to variance of e(t)
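
A quick numerical illustration of that model (my own sketch): if e(t) really is additive zero-mean noise on d(t), rather than an accumulated random walk, the spread of the readings does not grow with how late the clock is read:

Code: [Select]
import numpy as np

rng = np.random.default_rng(2)
a, sigma = 1.0, 1e-3                           # nominal rate and std dev of e(t)
t = np.array([1e2, 1e4, 1e6])                  # read the clock at very different times
d = a * t + rng.normal(0.0, sigma, size=(100000, t.size))   # many realizations of d(t)

print((d - a * t).std(axis=0))                 # ~sigma at every t: no growth with read time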
« Last Edit: June 24, 2018, 12:49:06 pm by rhb »
 

Offline JohnnyMalariaTopic starter

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
    • Enlighten Scientific LLC
Typical spectrum analyzers do not measure phase and so a vector network analyzer is used for phase measurements.  But that is a rather different sense of measuring phase.  The HP E4406A transmitter tester will display IQ constellations and some versions  have IQ baseband inputs.


I'm quite surprised at that. The SA I used 30 years ago (!) was an HP 3582A dual-channel digital audio SA (0.02Hz-25.6kHz) from ~1979. It could measure both amplitude and phase. I tried it, but the data had to be transferred by GPIB and only once a sweep had finished - I wanted real-time. Also, the phase resolution was too low for my needs (10 degrees). I had to write a convincing justification for my impoverished UK university to buy an EG&G 5210 high-end dual-phase lock-in amp, which is widely sought after today and considered the benchmark analog lock-in.






Amazing I can achieve the same two functions for less than $1000 now.


« Last Edit: June 24, 2018, 04:56:09 pm by JohnnyMalaria »
 

