Have you considered passing the samples through a Hilbert transform filter to convert the real 8-bit samples to complex, then converting those values to polar form (e.g. atan2 and magnitude)? The (smoothed) angular rate of change will indicate frequency; the smoothed magnitude will indicate amplitude.
a(t)-->f_LTI()-->b(t)
a(t)=k1*sin(w*t+p1)
b(t)=k2*sin(w*t+p2)
wanted: {k2/k1, (p1-p2)}
f(t): real ---> H(f(t)) ---> h(s): complex
w: real ---> s: complex
LP: low pass filter
get_sample(a(), b());
loop(i=0..n-1)
{
LP(polar(H(a(t),w))) = polar(LP(cplx(a[s]))) = { mod(LP(cplx(a[s]))), ang(LP(cplx(a[s]))) } = { k1[i], p1[i] }
LP(polar(H(b(t),w))) = polar(LP(cplx(b[s]))) = { mod(LP(cplx(b[s]))), ang(LP(cplx(b[s]))) } = { k2[i], p2[i] }
}
k1' = average_smoother(i=0..n-1){ k1[i] }
k2' = average_smoother(i=0..n-1){ k2[i] }
p1' = average_smoother(i=0..n-1){ p1[i] }
p2' = average_smoother(i=0..n-1){ p2[i] }
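A minimal runnable sketch of this pipeline in Python (numpy/scipy assumed; the LP() step here is a simple moving average, and the test values k1, k2, p1, p2 are made up):

import numpy as np
from scipy.signal import hilbert

def amp_phase(x, smooth=32):
    # H(): analytic signal, then LP() in the complex domain, then polar()
    z = np.convolve(hilbert(x), np.ones(smooth) / smooth, mode='same')
    return np.abs(z), np.unwrap(np.angle(z))

fs = 100_000                             # assumed sample rate [Hz]
t = np.arange(4096) / fs
a = 1.0 * np.sin(2*np.pi*1000*t + 0.3)   # k1 = 1.0, p1 = 0.3
b = 0.5 * np.sin(2*np.pi*1000*t - 0.4)   # k2 = 0.5, p2 = -0.4
k1, p1 = amp_phase(a)
k2, p2 = amp_phase(b)
m = slice(256, -256)                     # discard filter edge effects
print(np.mean(k2[m] / k1[m]))            # ~0.5 -> k2/k1
print(np.mean(p1[m] - p2[m]))            # ~0.7 -> p1 - p2

The moving average attenuates and delays both channels identically, so the ratio k2/k1 and the difference p1-p2 come out clean even though the individual k and p traces are biased by the smoother.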
Interesting approach :D
One problem will be if the circuit under test has 'memory'.
Indirectly, it's a good way to "test" the circuit's memory.
If it has no memory, results are sweep-independent.
If it has memory, results are sweep-dependent.
You invented a "memory meter" :o :o :o
(Ask for a patent! ;D )
I took another look at hamster_nz's Hilbert filter. I think there is a small bug in the convolution loop that omits the last coefficient, but that makes little difference in the results. The error is a bit higher than I thought. I computed the amplitude and phase velocity errors outside of the boundary area and got:
Max phase vel. error [degree]: 0.288886, std.dev: 0.142669
Max amp error: 0.015821, std.dev: 0.002622
I tested the same type of filter (anti-symmetric, step of two) with an optimized filter mask:
static double hilbert_kern_64[64] = {
0.635616, 0.209204, 0.122352, 0.0840508, 0.0619826, 0.0473452, 0.0367619,
0.0286638, 0.022225, 0.0169707, 0.0126088, 0.00894956, 0.00586386, 0.00325984,
0.00106942, -0.00076, -0.00227075, -0.00349816, -0.00447281, -0.00522194, -0.00577038,
-0.00614116, -0.00635588, -0.00643492, -0.00639756, -0.00626203, -0.00604542, -0.00576376,
-0.00543187, -0.00506332, -0.00467047, -0.00426433, -0.00385457, -0.00344956, -0.00305631,
-0.00268057, -0.00232689, -0.00199862, -0.00169807, -0.00142656, -0.00118454, -0.000971719,
-0.000787179, -0.000629464, -0.000496718, -0.000386775, -0.00029733, -0.000225911, -0.000170083,
-0.000127445, -9.57244e-05, -7.27986e-05, -5.67725e-05, -4.59548e-05, -3.88904e-05, -3.43821e-05,
-3.14574e-05, -2.93609e-05, -2.75466e-05, -2.56537e-05, -2.34504e-05, -2.08759e-05, -1.79182e-05,
-1.46725e-05};
This provides more than two orders of magnitude in error reduction:
Max phase vel. error [degree]: 0.000847, std.dev: 0.000380
Max amp error: 0.000046, std.dev: 0.000007
The maximum amplitude error in both cases occurs at the low end for a normalized wave number of ~0.1, where the amplitude is highest (1). The same relative error also occurs at high wave numbers (~0.9), as the transfer function is symmetric w.r.t. half the Nyquist frequency.
The plot below shows the transfer function corresponding to the kernel above in magenta. Note the difference in scale compared to the previous plot from the truncated hyperbola. The plot also contains a second optimized Hilbert filter that uses an acausal recursive pre-filter. With this technique, more compact filters with less error and/or a wider frequency range can be constructed. There are limits to this, and the recursive pre-filter becomes unstable if one attempts to push the upper cut-off too close to the Nyquist frequency. The lower cut-off is probably of more concern for practical applications, though; that end can be extended using a band decomposition and subsampling (Laplacian pyramid). Also worth mentioning: the Hilbert filter itself can be used to eliminate DC bias by applying it twice to create the quadrature pair.
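Two small Python sketches to make the above concrete. First, how such a half-kernel could be applied; the layout (tap k sits at odd lag 2k+1, negated on the negative side, even lags zero) is my assumption from the "anti-symmetric, step of two" description, though the first value 0.635616 ≈ 2/pi is consistent with it:

import numpy as np

def apply_hilbert(x, half_kern):
    # half_kern[k] is assumed to be the tap at lag +(2k+1); the tap at
    # -(2k+1) is its negation (anti-symmetric); even lags are zero.
    y = np.zeros(len(x))
    for k, c in enumerate(half_kern):
        lag = 2 * k + 1              # odd lags only ("step of two")
        y[lag:]  += c * x[:-lag]     # contribution of the +lag tap
        y[:-lag] -= c * x[lag:]      # contribution of the -lag tap
    return y                         # edge samples are only partially summed

Second, the DC-elimination trick: any anti-symmetric kernel has exactly zero gain at DC (each +c/-c tap pair sums to zero), so applying the filter twice returns the negated input with the bias stripped:

half_kern = np.array([0.635616, 0.209204])     # ... the 64 coefficients listed above
x = 0.7 + np.sin(2*np.pi*np.arange(1024)/32)   # test sinusoid with a DC bias
q = apply_hilbert(x, half_kern)                # quadrature component; DC rejected
i = -apply_hilbert(q, half_kern)               # in-phase component; DC bias gone
# (i, q) is now the quadrature pair of the AC part of x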
I took another look at hamster_nz's Hilbert filter. I think there is a small bug in the convolution loop that omits the last coefficient, but that makes little difference in the results.
Yes, it was a bug/oversight - it should be
for(int j = -127; j <= 128; j+=2) {
I'm quite pleased that even my lame attempt actually looks like a viable technique for 8-bit data, as I've occasionally pondered recovering amplitude and frequency from time-series data, but never acted on it until now...
One other thing I have no idea about is how sweeping the frequency will upset things.
There are also DSOs and MSOs able to export the amplitude (peak-peak) and phase measurements.
So, this could be a working environment:
         ___________              ________         _______
        |           |            |        |       |       |
freq -->|           |----------->| system |------>| ch0   |--> { amplitude, phase }
        |    DDS    |            |________|       |       |
        | sin(freq) |                             |  DSO  |
        |           |                             |       |
        |           |---------------------------->| ch1   |--> { amplitude, phase }
        |___________|                             |_______|
- SPI: change DDS freq
- loop: RS232/USB/eth, read measurements
- goto loop, until results are stable
- eval: { gain = (amplitude.ch0 / amplitude.ch1), diff_phase = (phase.ch0 - phase.ch1) }
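A sketch of that loop in Python; set_dds_frequency() and read_dso() are hypothetical placeholders for whatever SPI and RS232/USB/eth interfaces the actual instruments expose:

import numpy as np

def measure_point(freq, tol=0.01, max_reads=20):
    set_dds_frequency(freq)                # SPI: program the DDS (placeholder)
    prev_gain = None
    for _ in range(max_reads):             # loop until stable results
        ch0 = read_dso(channel=0)          # {'amplitude': ..., 'phase': ...} (placeholder)
        ch1 = read_dso(channel=1)
        gain = ch0['amplitude'] / ch1['amplitude']
        diff_phase = ch0['phase'] - ch1['phase']
        if prev_gain is not None and abs(gain - prev_gain) < tol:
            return gain, diff_phase        # stable -> accept
        prev_gain = gain
    raise RuntimeError("measurement did not settle")

sweep = np.logspace(2, 6, 50)              # e.g. 100 Hz .. 1 MHz
bode = [(f, *measure_point(f)) for f in sweep]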
A sidestep:
Interestingly, if one were going to use sine-squared shaped pulses at a fixed frequency \$f\$ for a duration \$T\$,
$$A(t) = \sin\left(\frac{\pi t}{T}\right)^2 \sin(2 \pi f t)$$
then each pulse would fit exactly into a single FFT window, and there would be far fewer details to consider. In particular, its Fourier transform is known in algebraic form:
# f = signal frequency
# T = Pulse duration
# Signal = sin(%pi*t/T)^2 * sin(2*%pi*f*t)
# Fourier transform FT, with F as the frequency parameter,
# as a function of the signal duration T and the original frequency f:
FT(F,T,f).real = ( (f-F) * (T*f - F*T - 1) * (T*f - F*T + 1) * cos(2*%pi*T*(f+F) )
+ (f+F) * (T*f + F*T - 1) * (T*f + F*T + 1) * cos(2*%pi*T*(f-F) )
+ 2*f * (1 - T^2*(f^2 + 3*F^2))
) / ( 8 * %pi * (f-F) * (f+F) * (T*f - F*T - 1) * (T*f - F*T + 1) * (T*f + F*T - 1) * (T*f + F*T + 1) );
FT(F,T,f).imag = ( (f+F) * (T*f + F*T - 1) * (T*f + F*T + 1) * sin(2*%pi*T*(f-F))
- (f-F) * (T*f - F*T - 1) * (T*f - F*T + 1) * sin(2*%pi*T*(f+F))
) / ( 8 * %pi * (f-F) * (f+F) * (T*f - F*T - 1) * (T*f - F*T + 1) * (T*f + F*T - 1) * (T*f + F*T + 1) );
which means that if one samples the signal generator directly, the signal generator itself can be characterized (at that fixed frequency) by comparing the computed FT and the sampled FFT.
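For reference, a sketch of that comparison in Python (f, T and the sample rate fs are assumed example values; scaling the DFT by the sample interval approximates the continuous transform at the bin frequencies, and the sign of the imaginary part depends on the FT convention used above):

import numpy as np

f, T, fs = 1000.0, 0.010, 1_000_000        # pulse frequency, duration, sample rate
t = np.arange(0.0, T, 1.0/fs)
pulse = np.sin(np.pi*t/T)**2 * np.sin(2*np.pi*f*t)

FT_est = np.fft.rfft(pulse) / fs           # dt-scaled DFT ~ continuous FT
F = np.fft.rfftfreq(len(pulse), 1.0/fs)    # frequency parameter F per bin
# FT_est.real and FT_est.imag can now be compared against FT(F,T,f) above.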
Of course, because it is a single fixed-frequency pulse, it is no replacement for e.g. chirps (frequency sweeps) at all. What it might be useful for is when a system exhibits complex behaviour, like shifting the input frequency, modulating it, or perhaps generating odd harmonics from it; this gives a simple tool for examining suspicious frequencies where a sweep or other analysis indicates something funny may be going on.
I don't know if this is actually useful, though; I just think it is interesting!
I had to put this topic aside lately, but kept reading, learning, and thinking about it. This weekend I stumbled upon these two videos, which opened a new perspective on FFT, convolution, and many other things that clicked together. Linking to them (again) for the docs.
The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?
Reducible
https://www.youtube.com/watch?v=h7apO7q16V0
But what is a convolution?
3Blue1Brown
https://www.youtube.com/watch?v=KuXjwB4LzSA
In terms of FFT speed, I tried an FFT on 24 million samples and it's almost instant (about 1.5 seconds), considering that the download alone for the ADC samples takes about 20-30 seconds.
$ ipython
Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.5.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import numpy as np
In [2]: fake_samples = np.random.random(24_000_000)
In [3]: %%timeit
...: fft_samples = np.fft.fft(fake_samples)
1.32 s ± 4.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
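Since the samples are real-valued, np.fft.rfft should shave that down further, as it computes only the non-negative half of the spectrum:

rfft_samples = np.fft.rfft(fake_samples)   # one-sided spectrum, roughly half the work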