| Electronics > Projects, Designs, and Technical Stuff |
| Signal processing - getting exact frequency from short ADC sample |
| Berni:
--- Quote from: daqq on December 16, 2019, 07:56:14 am ---Berni: Thanks for the mixer idea, I've been thinking about it, but with such a small time frame, the more I downconvert (mix it with a high frequency getting the frequency difference), the less 'full sines' I get? Basically, if my window is 2ms (200 samples) and I downconvert from 10kHz to, say, 100Hz, then during the 2ms I don't even get one full sine. Thanks again everyone, I'll play around with it more during the week. --- End quote --- The point of using downconversion is to turn the real signal into a complex I/Q quadrature signal; the reduction in frequency is just a side effect that can later be used to help reduce the required processing power. It is perfectly possible to go the other way and upconvert against 1GHz to turn 100.1kHz into 1000100.1kHz if you want more frequency. The advantage of having the signal in I/Q form is that you can determine the phase by looking at only one sample, just by converting that complex I/Q number from rectangular form to polar form (this is just calculating the sides of a right-angle triangle, so it needs only elementary-school-level math). Since phase and frequency are related through time, you only need 2 samples from the ADC to be able to determine the frequency to a precision better than 0.0001Hz. In practice, however, more points are needed to get to sub-1Hz precision because the ADC is not perfect (number of bits, noise, nonlinearity etc...). There is no need to see a single zero crossing in the signal: since you know the phase at every point, you can determine the frequency pretty accurately by seeing just 1/4 of one sine wave period. This is because a sine wave will move forward by 90 degrees in phase after 1/4 period, and there is only one unique frequency that will move its phase by 90 degrees in the given time frame of the ADC capture. 
So if your ADC is decent and the input signal is stable, you can likely use this "downmixing and polar conversion" to determine the frequency down to 0.1Hz precision from looking at only 5ms of your 1MSPS ADC data. Another advantage is that if you skip averaging after downconversion you keep 1MSPS through the signal chain. So with only a few dozen multiply-accumulate operations per sample (after some optimizations) you can stream data through the process in a pipelined way and get 1 000 000 frequency readings per second as the output, each accurate to 0.1Hz (or even more accurate if you use a larger phase integration window). Best of all, doing this requires no advanced math at all. All you have to do is multiply each sample with a sine function, then do a bit of trigonometry to find the phase angle, and you are done. But the process is a bit easier to understand if you have a good idea of how complex numbers work. |
| Kleinstein:
The Hilbert transform / down-conversion is a viable option if computational speed is critical. I would still use averaging before calculating the phase: in C code the ATAN2 function needed here is one of the really slow ones, and with noisy data one can also see effects of the noise leaving the linear range. So it makes sense to do the averaging / down-sampling beforehand, in the complex domain. The least squares fit gives the limit of how good it can get (if the noise is white). The fit to the 12-bit simulated data showed that sufficient resolution would be possible. I just realized the uncertainty number was in absolute numbers, so the error bar is more like 1-1.5 ppm. To get 0.1 Hz resolution (10 ppm) the noise could therefore be higher by about a factor of 8, so the data would need to be good to about 9 bits to reach the aimed-for 0.1 Hz resolution. From my experience (though with considerably longer data sets and additional amplitude decay as another parameter) the Hilbert transform is slightly more sensitive to noise and may need another 2-6 dB better signal. How much depends on the exact implementation (e.g. the choice of windows for the transformation). Speed-wise the Hilbert transformation is faster by maybe a factor of 10, though with a rather short data set the ATAN2 calls may reduce that advantage, and the noise disadvantage may be a little higher as relatively short windows have to be used. |
| hamster_nz:
I couldn't resist - here's using the Hilbert Transform to detect the phase change over 100 samples and extrapolating that to 100,000 samples. The input is a file containing 200 samples. Note that any signal that is a harmonic of the sample rate divided by 100 could be used... so with a sample rate of 100kS/s the signal you are analyzing could be 50kHz, 33.3kHz, 25kHz, 20kHz... whatever. All that matters is that after 100 samples there is nominally zero phase offset. (Oh, and you will need to change the magic fudge factor in the kernel if you are looking for a different frequency.) Assumes that the input data has already been bandpass filtered to just around the frequencies of interest. --- Code: ---/************************************************************
 * poc.c : A really bad Proof of concept
 *
 * Can you extract tiny frequency changes from small datasets
 * using the Hilbert Transform? Yes, it seems you can.
 *
 * Expects a single command arg: the name of a file holding
 * 200 samples. Processes it assuming the sample rate is
 * 100kS/s, and measures the crawl in phase over 100 samples.
 * This is then used to extrapolate to the change of phase
 * after 100,000 samples.
 *
 * This isn't supposed to be the most correct code, just a hack
 * to see if it works.
 ************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double data_r[200];
double data_i[200];
double kernel[99];

static void build_kernel(void)
{
    int i;
    /* Only the odd taps are set; the even taps stay at their
       zero-initialized value, as a Hilbert kernel should. */
    for (i = -49; i < 50; i++) {
        if ((i & 1) == 1) {
            // I pulled the scaling factor out of my data
            kernel[i + 49] = 1.0 / i / 3345 * 2047;
        }
    }
}

static void hilbert(double *data, double *result)
{
    double r = 0;
    int i;
    // Apply the kernel
    for (i = -49; i < 50; i++) {
        r += data[i] * kernel[i + 49];
    }
    *result = r;
}

int main(int argc, char *argv[])
{
    int i;
    FILE *f;

    build_kernel();

    if (2 != argc) {
        fprintf(stderr, "Only supply data file name\n");
        exit(0);
    }
    f = fopen(argv[1], "rb");
    if (NULL == f) {
        fprintf(stderr, "Unable to open file\n");
        exit(0);
    }
    for (i = 0; i < 200; i++) {
        if (1 != fscanf(f, "%lf", data_r + i)) {
            fprintf(stderr, "Error reading data\n");
            exit(0);
        }
        data_i[i] = 0.0;
    }
    fprintf(stderr, "Data read\n");

    // Just calculate the transform at two places
    hilbert(data_r + 49, data_i + 49);
    hilbert(data_r + 149, data_i + 149);

    double phase49, phase149, change;
    /* Note the atan2(real, imag) argument order rather than the usual
       atan2(imag, real); it still works, but it flips the sign of the
       measured phase change, which shows in the output below. */
    phase49  = atan2(data_r[49],  data_i[49]);
    phase149 = atan2(data_r[149], data_i[149]);
    change = (phase149 - phase49) / 100;
    /* 'change' needs to be wrapped into +/- PI, but I haven't done it */
    printf("Angle at sample 49 is  %10.6f\n", phase49);
    printf("Angle at sample 149 is %10.6f\n", phase149);
    printf("Change in cycles after 100,000 samples %10.6f\n",
           change * 100000 / (2 * M_PI));
    return 0;
}
--- End code --- Here's the output when the data is 0.1 Hz faster: --- Code: ---Data read
Angle at sample 49 is  -0.657761
Angle at sample 149 is -0.658373
Change in cycles after 100,000 samples  -0.097432
--- End code --- And 0.1 Hz slower: --- Code: ---Data read
Angle at sample 49 is  -0.657381
Angle at sample 149 is -0.656791
Change in cycles after 100,000 samples   0.093872
--- End code --- Here's the hack I was using to generate test data: --- Code: ---#include <stdio.h>
#include <math.h>

#define SAMPLE_RATE (100000.0)
#define FREQUENCY     (9990.0)
#define SCALE          (2047)

int main(int argc, char *argv[])
{
    int i;
    for (i = 0; i < 200; i++) {
        double d = sin(i / (SAMPLE_RATE / FREQUENCY) * 2 * M_PI) * SCALE;
        printf("%i\n", (int)(d));
    }
    return 0;
}
--- End code --- |
| ogden:
--- Quote from: Berni on December 16, 2019, 06:42:36 am ---But still i think OP is looking at the problem from the wrong direction. If you use a quadrature mixer to convert the 100.1KHz signal down to a pair of 100Hz signals things get a lot easier (it's essentially a software defined radio at this point). Not only is this computationally cheap to do, but unlike his method of taking 1 second long 1MSPS recordings of the signal and processing them later, this downconversion method can also operate continuously on a signal, giving you a result on every sample rather than just one sample per second. --- End quote --- I am afraid that you are driving the OP in the wrong direction as well. Estimating the precise frequency of the downmixed 100Hz signal does not seem to be any simpler than just counting zero crossings of the "carrier". |
| Marco:
He isn't suggesting just downmixing it, he's suggesting downmixing it with sines in quadrature. I think after low-pass filtering, the rate of change of the phase of the quadrature results, treated as a complex vector, will give you the frequency shift (a side effect of the PM/FM equivalence). |