Author Topic: Idea for improving LTDZ Spectrum Analyzer & Tracking Generator 35MHz-4.4GHz  (Read 24643 times)


Offline Kalvin (Topic starter)

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
So, you are suggesting that the dynamic range of the 12-bit ADC cannot be improved, and that using a 12-bit ADC as a LOG-converter (i.e. power meter) is doomed from the start because the AD8307 has a theoretical dynamic range of 92dB as stated in its datasheet? And what is the real dynamic range achieved using the AD8307 in the LTDZ and similar boards? It is not the 92dB stated in the AD8307 datasheet.

Edit: Think about SDR and how it works: with a receiver using a 12-bit ADC it is possible to listen to weak signals that are well below the ADC's 72dB dynamic range. Even those cheap SDR sticks with only 8-bit ADCs can be used to receive signals well below the theoretical 48dB dynamic range. According to your statement above, this should not be possible. Although the STM32F103 has a pretty poor ADC, the principles are still the same. At this point I do not have any clue what the practical noise-floor limit will be with different digitally implemented RBWs on this hardware.

Let's play with some numbers and visualize:

The low end of the AD8307's input range is specified as -75dBm. So let's set our aim at -75dBm, too. That's only about 40µVRMS. Let's try to capture that with the ADC. 40µVRMS is well below 1LSB. If it can be captured at all with the ADC, then only in the presence of sufficient dithering noise. Likely this is even given per se, without explicitly adding extra noise.

I don't know the exact specs and operating conditions of the ADC, but let's make the following assumptions:
12 bits, full-scale input voltage = 2Vpp, ADC noise = 707µVRMS Gaussian noise (-30dBFS), wanted signal is sine wave with 40µVRMS.

Attached is a 10000- and a 100000-point FFT of the simulated signal (with added noise and 12-bit quantization). The Y-axis is normalized to the dBm input level (i.e. 0 is not full-scale).
With 10000 samples the noise floor can be lowered to a level where the wanted signal peeks out a little bit. With 100000 samples the signal becomes clearly visible.

So I guess it is not doomed from the beginning. But capturing that many samples takes some time. For input levels below, say, -50dBm, an LNA in front of the ADC would IMO be helpful, so that the ADC's dynamic range can be utilized more efficiently. It is also unclear to me whether the existing noise is suitable for dithering such low signal levels (its distribution is unknown, while the simulation was done with perfect Gaussian random numbers).
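For reference, the simulation described above can be reproduced with a short NumPy sketch (assumed parameters as stated: 12-bit quantization, 2Vpp full scale, 707µVRMS Gaussian dither, 40µVRMS wanted sine; the 123.456kHz test frequency and the function name are made up):

```python
import numpy as np

def simulated_adc_spectrum(n_samples, fs=1e6, f_sig=123_456.0, seed=1):
    """FFT (in dB) of a 40 uVrms sine + 707 uVrms Gaussian dither,
    quantized by an ideal 12-bit ADC with 2 Vpp full scale."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    sig = 40e-6 * np.sqrt(2) * np.sin(2 * np.pi * f_sig * t)  # 40 uVrms tone
    noise = 707e-6 * rng.standard_normal(n_samples)           # -30 dBFS dither
    lsb = 2.0 / 4096                                          # ~488 uV per LSB
    adc = np.round((sig + noise) / lsb) * lsb                 # 12-bit quantizer
    spec = np.abs(np.fft.rfft(adc * np.hanning(n_samples))) / n_samples
    return 20 * np.log10(spec + 1e-20)

db_10k = simulated_adc_spectrum(10_000)    # tone barely peeks out of the noise
db_100k = simulated_adc_spectrum(100_000)  # tone clearly visible
```

The FFT processing gain lowers the per-bin noise floor by roughly 10·log10(N) as N grows, which is why the sub-LSB tone becomes visible.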

Yes, increasing the dynamic range and spectral resolution is a trade-off with speed. If the ADC is using a sample rate of 1Ms/s, the maximum sweep rate is 1000 frequency steps per second when using 1000 samples, 100 frequency steps per second when using 10000 samples, and 10 frequency steps/s when using 100000 samples. In practice the step rate will be lower because the ADF4351 PLLs require some settling time after each step.
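As a sanity check of the arithmetic above (a trivial helper, name made up; PLL settling is deliberately ignored):

```python
def max_steps_per_second(sample_rate_hz: float, samples_per_step: int) -> float:
    """Upper bound on the sweep step rate; PLL settling time is ignored."""
    return sample_rate_hz / samples_per_step

# At 1 Ms/s:
assert max_steps_per_second(1e6, 1_000) == 1000.0     # fast sweep, wide RBW
assert max_steps_per_second(1e6, 100_000) == 10.0     # slow sweep, narrow RBW
```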

By using the 12-bit ADC instead of the AD8307, the user can choose which is more important at any given moment: fast sweep rate, or better dynamic range / spectral resolution, i.e. narrower RBW. With the AD8307 that is not possible without changing the component values of the 120kHz RBW/low-pass filter.

Quote
EDIT: I worry about potential unpredictable non-random noise/errors, though, like power supply ripple on the reference voltage (it seems not to be separately filtered?) and the bias for the input, unwanted coupling of stray signals, etc. Such things can still defeat feasibility.

It remains to be seen what kind of performance can be achieved with the actual hardware, and what the real limiting factors in the hardware are.
« Last Edit: May 18, 2021, 10:57:39 am by Kalvin »
 

Offline Kalvin (Topic starter)
Got my LTDZ hardware, and I have been playing with it on my Linux PC. After some intensive Google search sessions, I was able to find suitable PC software. The PC is now talking to the LTDZ, and it is possible to do some measurements. In order to protect the LTDZ from overloading, a 10dB SMA attenuator is connected to the LO input. No hardware modifications have been done yet.

Quickly measuring a 50ohm through-cable with different combinations of 10dB and 20dB SMA attenuators reveals that the LOG-detector is not particularly linear. The SMA attenuators were checked with a NanoVNA.

Measuring an FM band-stop filter from RTL-SDR.com (attachment #1) looks quite promising (blue trace), but adding an additional 10dB attenuator reveals the problem with the LOG-detector non-linearity (red trace). Probably the LOG scale can be calibrated for improved accuracy; need to investigate this a bit further.

Coupling/crosstalk between the tracking generator and the RX in the frequency span 35MHz - 4GHz is shown in attachment #2. The TG output and the RX input are terminated with 50ohm loads. Above 1.2GHz the on-board crosstalk from TG to RX increases, reducing the useful dynamic range.

As I now have LTDZ hardware available, it is possible to start experimenting with the firmware.

PC software used for testing:
WinNWT5_v5_0_2_1730 running on Wine (used for the band-stop filter measurement).
winnwt4_v4_11_09 running on Wine.
nwt4_v1_10_13 native Linux application NWT4000lin from dl4jal (Configured as NWT4000-2).

« Last Edit: May 22, 2021, 01:56:26 pm by Kalvin »
 

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3342
  • Country: ua
PC software used for testing:

These programs all show a nonlinear dB scale for the LTDZ, which is actually not true dB, due to their simplified calibration. At the minimum and maximum values they can show just garbage.

You can get precise dB with my tool which I posted here: https://www.eevblog.com/forum/rf-microwave/idea-for-improving-ltdz-spectrum-analyzer-tracking-generator-35mhz-4-4ghz/msg3566887/#msg3566887

Do the calibration with 10 dB and 40 dB attenuators to get a real dB scale.
My tool allows performing the calibration with any attenuator values (just enter the wanted values under the Calibrate button before calibrating). It requires two attenuators for calibration in order to achieve better precision.
Note that values < 10 dB will be distorted on the LTDZ due to non-linearities of the AD8307 at signal levels close to the maximum; for example it will read 5 dB instead of 0 dB.

You can also see the raw ADC output on the chart (by selecting Magnitude or RAW); it shows the actual LTDZ performance with no distortion from any math correction.

Here is the same Chinese FM-reject filter through a 10 dB attenuator with proper calibration on my LTDZ. Note that the -45 dB rejection here is not a mistake; that's the real LTDZ dynamic range of 55 dB minus the 10 dB attenuator (55-10=45 dB). WinNWT shows about -60 dB for a 40 dB attenuator on the LTDZ, so don't believe it.
« Last Edit: May 22, 2021, 03:10:04 pm by radiolistener »
 

Online radiolistener
Just broke my LTDZ: the micro-USB connector ripped off, taking part of the PCB traces with it :(

Be careful: the connector is only weakly soldered and is held on mainly by the PCB traces...
 

Offline Kalvin (Topic starter)
Just broke my LTDZ: the micro-USB connector ripped off, taking part of the PCB traces with it :(

Be careful: the connector is only weakly soldered and is held on mainly by the PCB traces...

Ouch! Can you fix the PCB? Looks like it is a good practice to add some solder to the USB-connector.
 

Offline Kalvin (Topic starter)
Some progress: I have been able to build the original source from joseluu: https://github.com/joseluu/D6_firmware/commit/babda8b84393ddd9176de9e8841653d1d0a52468
The code works pretty well. Somehow the noise floor has jumped from -80dB to -70dB, though. A nice thing about this code is that the scanning doesn't produce any artifacts any more.

The next thing to do is to modify the hardware a little so that the system is able to sample the output of the 120kHz RBW/low-pass filter using the ADC at a sample rate of 1Ms/s, and compute the LOG/RMS of the signal. Probably I also need to fix the RBW low-pass filter response so that the filter performs better at the lower frequencies. Otherwise there won't be any benefit from creating a dynamically adjustable RBW.

One observation: Tracking generator's output frequency is 120kHz higher than the LO by default. Probably this is common to all these devices like LTDZ and D6.
 

Online radiolistener
One observation: Tracking generator's output frequency is 120kHz higher than the LO by default. Probably this is common to all these devices like LTDZ and D6.

This is OK; frequencies near DC and outside the LPF will be removed by the coupling capacitor.
The LO frequency needs to be shifted relative to the tracking generator in order to put the scanning frequency into the LPF bandwidth. This difference is also needed to put unwanted spurs outside the input bandwidth.
« Last Edit: May 24, 2021, 09:43:04 pm by radiolistener »
 

Online radiolistener
Ouch! Can you fix the PCB? Looks like it is a good practice to add some solder to the USB-connector.

I'm not sure, because the PCB traces were torn off. Maybe jumpers made from small wires can help to restore it; needs a try...
 

Offline Kalvin (Topic starter)
Ouch! Can you fix the PCB? Looks like it is a good practice to add some solder to the USB-connector.

I'm not sure, because the PCB traces were torn off. Maybe jumpers made from small wires can help to restore it; needs a try...

If the PCB traces are badly broken, it might be possible to desolder the USB-to-serial converter IC from the PCB and use an external USB-to-serial converter module instead. Just wire a three-pin header to GND, RX and TX, hot-glue the header to the PCB, and connect the external USB-to-serial module to the header pins.
 

Offline zelea2

  • Regular Contributor
  • *
  • Posts: 61
  • Country: gb
The filter has already been redesigned by F4HTQ.
You can read about it here: http://alloza.eu/david/WordPress3/?p=542 (in French).
 

Offline Kalvin (Topic starter)
The filter has already been redesigned by F4HTQ.
You can read about it here: http://alloza.eu/david/WordPress3/?p=542 (in French).

Yes, that filter has better performance at 120 kHz than the original RBW filter. F4HTQ's improved filter even increases the signal amplitude seen by the AD8307 LOG-detector, giving some extra dB of dynamic range.

My intention is to implement a digital low-pass filter with a selectable cut-off frequency after this RBW filter, so the RBW filter should have a flat pass-band response from low frequencies up to 120 kHz, and good stop-band attenuation. Here is the new filter as implemented in my LTDZ (see attachments #1 and #2).

I have also had some time to play and experiment with the firmware source code and the actual hardware. After some modifications to the firmware, it is now possible to use the STM32F103's 12-bit ADC to sample the output of the RBW filter into an on-chip sample buffer, and download the sample buffer contents to the PC for later analysis. The maximum sample buffer size is currently 8192 x 16-bit samples. I am using a simple Python script to download the collected ADC samples from the LTDZ, and GNU Octave for signal analysis.
 

Offline Kalvin (Topic starter)
Here are initial measurement results showing the linearity of the 12-bit ADC at different signal attenuation levels (see attachment #1). The reference level (green trace) measurement was performed with the tracking generator set to a 15 kHz offset, a 6dB attenuator connected to the TG output, a 10dB attenuator connected to the RX input, and a coax cable between the attenuators, thus creating a loop from TG to RX with 16dB attenuation.

Three 20dB attenuators were added to the loop one at a time, and the measurement was repeated for each attenuator added. Finally a fourth attenuator, a 10dB one, was added, but the signal level was already too low for it to make any difference. From these initial measurements it can be seen that the dynamic range is already 60dB or better. No calibration was performed, nor was it necessary.

All these measurements were performed with a sample buffer length of 4096 samples and decimation by a factor of 16, without multiple-buffer averaging. By averaging over multiple buffers and carefully selecting the offset frequency of the tracking generator, the noise level should decrease and thus the dynamic range can be improved.

These initial measurements showed that this cheap and simple network analyzer could have at least 60dB of dynamic range.

Attachment #2 shows the noise floor of the RX path seen by the ADC with the TG disabled (i.e. in spectrum analyzer mode). The raw dynamic range is around 45dB. By reducing the bandwidth using digital filtering and averaging, the dynamic range can be improved at the expense of a slower sweep rate. It was calculated that with decimation by 16 the RBW becomes approx. 20kHz and the noise floor drops to -50dB. With decimation by 256 the RBW becomes approx. 1kHz and the noise floor drops to -56dB. It might be possible to modify the hardware a little in order to improve the board's noise floor.

Edit: Fixed the rx input attenuation.
« Last Edit: May 31, 2021, 02:36:54 pm by Kalvin »
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1163
  • Country: de
Nice :-+

The noise floor seems to be roughly in the expected region.
What was the programmed output power level of the TG's ADF4351?

What catches my eye is the large width of the two peaks of the green trace in attachment #1. As the TG transmits just unmodulated CW, I'd ideally expect to see just two narrow spikes sticking out of the noise floor. Is there such a large amount of phase noise? Or did you happen to mess up something with the decimation and FFT? What window function did you use for the FFT? How does the green trace look if you take the captured buffer (w/o any pre-processing) and just do a 4k-point FFT, using, say, a Hamming window (or even better a Kaiser window with beta=13), and zoom in the plot to the same -30kHz...30kHz frequency range?

EDIT: Was your new IF filter already in place, or were the measurements still done with the original one?
« Last Edit: May 31, 2021, 09:18:12 pm by gf »
 

Offline Kalvin (Topic starter)
The noise floor seems to be roughly in the expected region.

Yes, the noise spectrum does contain some spikes, but other than that the noise level is pretty much what was expected. For dithering purposes the noise spectrum looks pretty useful. I will try to add some extra decoupling caps to the power supply in order to reduce the spikes a bit.

Quote
What was the programmed output power level of the TG's ADF4351?

The TG ADF4351 output level was set to its maximum. There is an option for using either "low noise" or "low spur" mode: the TG was using low spur and the RX was using low noise.

Quote
What catches my eye is the large width of the two peaks of the green trace in attachment #1. As the TG transmits just unmodulated CW, I'd ideally expect to see just two narrow spikes sticking out from the noise floor. Is there such a large amount of phase noise?

I can repeat my measurements with different ADF4351 modes (low spur/low noise) and see whether that makes any difference.

Quote
EDIT: Was your new IF filter already in place, or were the measurements still done with the original one?

Yes, I had already modified my filter. The mixer's output is coupled to the filter's input using a 1uF ceramic capacitor. The filter output is biased to GND by a 12k resistor and to +5V by a 24k resistor. The +5V is the same rail used by the AD8307 LOG-detector. The biased filter output is then wired directly to ADC input pin A2. As the ADC input now has the necessary bias voltage, the bias is removed from the buffered samples simply by subtracting the buffer's mean from each sample value. For simplicity the 12-bit ADC samples are stored as 16-bit values (i.e. left-aligned). The 1uF coupling capacitor's dielectric is X7R or worse, so the capacitor may add some distortion to the signal which may contribute to the spectrum. I will change the cap to a tantalum one and see whether the noise level improves.
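The bias-removal step described above amounts to this (a NumPy sketch, not the actual firmware code; the example buffer values are made up):

```python
import numpy as np

def remove_bias(raw_u16):
    """12-bit ADC samples stored left-aligned in 16-bit words:
    undo the left shift, then subtract the buffer mean to strip
    the resistor-divider bias at the ADC input."""
    codes = raw_u16.astype(np.float64) / 16.0   # undo the 4-bit left alignment
    return codes - codes.mean()

buf = np.array([0x8000, 0x8010, 0x7FF0, 0x8000], dtype=np.uint16)
centered = remove_bias(buf)   # -> [0.0, 1.0, -1.0, 0.0]
```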

Quote
Or did you happen to mess up something with the decimation and FFT? What window function did you use for the FFT? How does the green trace look if you take the captured buffer (w/o any pre-processing) and just do a 4k-point FFT, using, say, a Hamming window (or even better a Kaiser window with beta=13), and zoom in the plot to the same -30kHz...30kHz frequency range?

Here is my current code for computing the FFT. x is the buffer of 4096 samples (DC bias removed), and there is an option for decimation. If the decimation option is given, the code performs decimation in steps of 2 and adjusts the frequency data accordingly. It is possible that this iterative decimation produces some low-level noise, but I have not investigated that yet. The FFT windowing function used for the initial measurements was hann().

Edit: Attached is the 4096-sample buffer data of the reference measurement signal used for the analysis, which can be read with Matlab/Octave load().

Code: [Select]
pkg load signal; # decimate() and hann() come from the Octave signal package

# System's MCU oscillator frequency
global XTAL = 72000000;

# System's default sampling rate (72 MHz / 6 / 14 ~ 857.14 kHz)
global Fs_Hz = (XTAL/6/14);

# Two-sided FFT spectrum of buffer x, optionally decimated 2^D times first.
function [f_Hz, bins] = spectrum(x, D)
  global Fs_Hz; # System's sampling rate

  if (nargin == 1)
    D = 0;
  endif

  # Decimate in steps of 2; each step halves the effective sample rate
  d = D;
  while (d > 0)
    x = decimate(x, 2);
    d = d - 1;
  endwhile

  N = length(x);
  w = hann(N);                          # window to reduce spectral leakage
  df = Fs_Hz/N;
  min_f = -Fs_Hz/2;
  max_f = Fs_Hz/2 - df;
  f_Hz = [min_f : df : max_f] / (2**D); # rescale frequency axis after decimation
  spec = fft(x .* w);
  bins = abs(fftshift(spec));           # shift DC to the center of the axis
  bins = bins/N;                        # normalize by the FFT length
endfunction

« Last Edit: June 01, 2021, 09:02:00 am by Kalvin »
 

Offline Kalvin (Topic starter)
Here is the FFT spectrum of the reference signal using three different windowing functions: Hann, Hamming and Kaiser 13.
 

Online gf
Here is the FFT spectrum of the reference signal using three different windowing functions: Hann, Hamming and Kaiser 13.

So it's not the signal :). The wide base of the peaks is FFT leakage. Hann is insufficient here. Hamming already gives nice peaks, but the attenuation of its side lobes still does not bring them below the noise floor. Finally, Kaiser 13 has >90dB side-lobe attenuation, so there are no visible side lobes above the noise floor any more -- at the price of a wider main lobe.
Another way to avoid leakage for the 15kHz signal (and its harmonics) is an FFT size which contains an exact integral number of 15kHz periods. For your sampling rate, this applies to FFT sizes of N*400 samples, where N=1,2,3,... And indeed, if you do an FFT of buffer(1:4000) -- without a window -- then the N*15kHz peaks do not leak any more. [The latter does not avoid leakage for other frequencies in the spectrum, though, so it is not a general solution to the leakage issue.]
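The 400-sample periodicity is easy to verify numerically (a sketch with a synthetic 15kHz tone at the LTDZ sample rate, not measured data):

```python
import numpy as np

fs = 72e6 / 6 / 14   # ~857.14 kHz; fs/15kHz = 400/7, so 400 samples = 7 periods
tone = np.sin(2 * np.pi * 15e3 / fs * np.arange(4096))

def spectrum_db(x):
    """Rectangular-window FFT magnitude in dB (normalized by length)."""
    return 20 * np.log10(np.abs(np.fft.rfft(x)) / len(x) + 1e-20)

leaky = spectrum_db(tone)         # 4096 samples: 15 kHz falls between bins
clean = spectrum_db(tone[:4000])  # 4000 samples: exactly 70 periods, one bin
```

With 4000 samples the tone lands exactly on bin 70 and the rest of the spectrum drops to numerical noise; with 4096 samples the energy smears over hundreds of bins.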

Besides the leakage issue, the decimation filter also garbles the initial samples in the decimated output. At the beginning, the filter has no state from previous samples, so it needs some settling time.

I'm puzzled where the peaks in the spectrum at higher frequencies >250kHz are coming from (see attachment)  :-//
The frequencies are obviously fs/2 - 15kHz - N*30kHz, and the amplitude of the rightmost one is only ~35dB below the 15kHz carrier.
I guess they are not present at the analog ADC input (maybe you could also check the ADC input with a scope and FFT, using a higher sampling rate?)
If it were an interleaved ADC, then I would suspect interleaving spurs, but it isn't interleaved, is it?

EDIT: Seems that there is indeed a time-shift between even and odd samples, see 2nd attachment.

EDIT: Reading AN3116, I noticed that the STM32 ADCs can operate in dual fast and dual slow interleaved mode. Are you using one of these operating modes?
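The even/odd time-shift hypothesis is easy to reproduce in simulation (a sketch: a clean 15kHz tone with the odd-indexed samples taken slightly late; the 2% skew value is an arbitrary choice):

```python
import numpy as np

fs = 72e6 / 6 / 14               # ~857.14 kHz LTDZ sample rate
f0 = 15e3
n = np.arange(4000, dtype=float)
n[1::2] += 0.02                  # odd samples taken 2% of a sample period late
tone = np.sin(2 * np.pi * f0 * n / fs)

db = 20 * np.log10(np.abs(np.fft.rfft(tone)) / len(tone) + 1e-20)

# An even/odd sampling skew mirrors the tone about fs/2:
bin_hz = fs / len(tone)
image_bin = int(round((fs / 2 - f0) / bin_hz))   # expected spur location
```

The skew modulates the tone with an alternating +1/-1 sequence, which mixes it to fs/2 - f0 (and, for a distorted tone, to fs/2 - f_harmonic for each harmonic), matching the observed fs/2 - 15kHz - N*30kHz pattern.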
« Last Edit: June 01, 2021, 04:16:48 pm by gf »
 

Offline Kalvin (Topic starter)
Excellent analysis, gf!  :-+

I found out that the TG may not have been configured properly due to a timing issue in the original code, hence the 15 kHz frequency. In the source code the TG offset frequency was set to 10 kHz, and I started debugging why the TG frequency was a bit off. Now it is fixed, and the TG frequency offset is correct. After a quick test the spectrum looks much better now.

I recreated the original test now with a 15 kHz TG frequency offset, as I wanted to keep things as identical as possible so that our scripts work with this test signal. The captured samples can be found in attachment #1. The FFT spectrum without any windowing for samples y(1:4000) is shown in attachment #2. The version decimated by 8 is shown in attachment #3.

As far as I understand, the ADC is not running in interleaved mode. The ADC is set to continuous mode, and the MCU polls the ADC's end-of-conversion flag before reading the ADC and storing the samples into a RAM buffer. Since the ADC is running in continuous mode, there should be very little jitter. I will modify the sampling process to use DMA, which should eliminate the jitter altogether.
 

Offline Kalvin (Topic starter)
Before proceeding any further I need to check my hardware so that power supply ripple is minimized, the ADC input is clean, and ADC sampling uses DMA for minimum jitter, etc. Otherwise it will be very hard to see what is going on, and any further gains will be only marginal. Anyway, the concept is working.

Edit: ADC is now using DMA.
« Last Edit: June 02, 2021, 01:11:00 pm by Kalvin »
 

Offline Kalvin (Topic starter)
For a quick test of ADC dynamic range and linearity with a digital 15kHz band-pass filter added, here are the results repeated with the same sample set used in my earlier post https://www.eevblog.com/forum/rf-microwave/idea-for-improving-ltdz-spectrum-analyzer-tracking-generator-35mhz-4-4ghz/msg3579913/#msg3579913

The reference level (yellow trace) measurement was performed with the tracking generator set to a 15 kHz offset, a 6dB attenuator connected to the TG output, no attenuator at the RX input, and a coax cable between TG and RX, thus creating a loop from TG to RX with 6dB attenuation. Each 4096-sample buffer was filtered with a 4th-order bandpass filter (center frequency 15kHz, BW 400Hz) before analysis.

First the signal level without any attenuation was measured (yellow trace). A 10dB attenuator was added to the loop (blue trace), followed by three 20dB attenuators one at a time, and the measurement was repeated for each attenuator added. Finally a fifth attenuator, a 10dB one, was added, but the ADC's linearity/dynamic range wasn't quite good enough, the board's noise floor was too high to get the correct signal attenuation, and/or the 400Hz filter bandwidth was too wide. Here are the measured and filtered signal levels [dB] using the different attenuators:

No attenuator: -9.1591
10dB attenuation: -18.062
30dB attenuation: -37.563
50dB attenuation:  -57.580
70dB attenuation:  -78.101
80dB attenuation:  -81.673
no signal: -87.762

After filtering the measured samples and performing an FFT using a Kaiser 13 window, this data shows that the usable dynamic range is at least 70dB. The loopback signal with 80dB attenuation is still visible above the noise floor, although its value is no longer correct. Probably capturing more samples and averaging over multiple buffers would improve the dynamic range a bit, but that remains to be seen once I have tried to reduce the board's noise floor in the first place.

With the original design using the AD8307 LOG-detector, the board's dynamic range was around 50dB when used as a network analyzer (tracking generator enabled). Now, using the ADC and DSP filtering, the dynamic range of the board is at least 70dB in network analyzer mode with a frequency resolution of 1kHz (the limit of the tracking generator's frequency step size). An improvement of 20dB is quite impressive, as the only hardware modifications so far are 1) changing the original 120kHz RBW filter component values for better low-pass performance and 2) using the ADC to sample the output signal of the 120kHz RBW filter.

In spectrum analyzer mode (i.e. tracking generator disabled and using the 120kHz RBW bandwidth), the dynamic range is still less than 50dB due to the poor noise floor of the board. Adding some filtering to the power supplies may improve the situation. Further, as the signal level is now computed from the sampled data, it is possible to reduce the RBW according to the step size using digital filtering, which will effectively improve the board's noise floor as seen by the signal level estimator. However, if the board's inherent noise floor cannot be improved, the dynamic range in spectrum analyzer mode will remain around 50dB. Even if the dynamic range cannot be improved much, the signal levels reported by the device will be more linear than with the original design using the AD8307.

Edit: Using a 4096-sample buffer, the device could theoretically perform 209 sweep steps per second. In practice the sweep step rate will be lower, as the software has to wait until the PLLs have locked before starting to sample the RBW signal. Currently the signal processing is performed after capturing the whole buffer of 4096 samples, which reduces the sweep rate. However, it is possible to use two 4096-sample buffers so that while one buffer is collecting new data using DMA, the other buffer, containing samples from the previous sweep step, is processed at the same time, practically eliminating the sample buffer processing time. In spectrum analyzer mode (i.e. tracking generator off), the board's noise floor will be quite bad anyway due to the wide RBW (even after applying digital RBW filtering), so it might be practical to use an even shorter sample buffer, which would increase the effective sweep step rate.
« Last Edit: June 03, 2021, 12:37:36 pm by Kalvin »
 

Online gf
For a quick test of ADC dynamic range and linearity with a digital 15kHz band-pass filter added, [...]

IMO you are thinking too complicated. You need neither decimation nor an explicit bandpass filter. The DFT acts as a filter bank, and the window function already is the bandpass filter applied to each frequency bin. For instance, Kaiser 13 has a 3dB bandwidth of about 2 DFT bins, a stopband attenuation of about 100dB, and a main-lobe width of about 8.5 bins. When you apply it to 4000 samples captured at a sampling rate of 12/14 MSa/s, you get a -3dB RBW of ~430Hz and a main-lobe width of about 1.8kHz. A DFT calculates all frequency bins in the range -fs/2...fs/2, but for the IF detector we are eventually only interested in a single bin, namely the bin at 15kHz. So the idea is to calculate the DFT for this bin only and skip the calculation of all the other bins. This leads to a Goertzel detector. It does exactly that: calculating the DFT for a single frequency. The window function, acting as the detection bandpass filter, still applies to Goertzel as well, of course.

EDIT: The computational complexity is eventually only 3 multiplications and 2 additions per sample - and you still get a narrow RBW for the detector and a huge out-of-band attenuation.

EDIT: A potential hurdle for doing this on the STM32 in real time might be the memory consumption of the pre-calculated window function table and sin/cos table. It depends of course on the maximum desired window size. If the supported window size(s) are fixed, a set of pre-calculated tables could possibly reside in ROM (I don't know whether ROM access is slower than RAM access on the STM32).
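For illustration, here is a minimal NumPy sketch of the windowed single-bin (Goertzel) detector described above; a fixed-point C version for the STM32 would use the same recurrence (the function name is made up):

```python
import numpy as np

def goertzel_mag(x, f_hz, fs_hz, window=None):
    """Single-bin DFT magnitude via the Goertzel recurrence.
    Returns |X(f)|/N, i.e. the same scaling as an FFT bin.
    The optional window acts as the detection bandpass filter."""
    if window is not None:
        x = x * window
    n = len(x)
    w = 2.0 * np.pi * f_hz / fs_hz
    coeff = 2.0 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:                       # one multiply, two adds per sample
        s0 = float(sample) + coeff * s1 - s2
        s2, s1 = s1, s0
    power = s1 * s1 + s2 * s2 - coeff * s1 * s2
    return np.sqrt(max(power, 0.0)) / n

# 15 kHz IF tone at the LTDZ sample rate; 4000 samples = 70 whole periods
fs = 72e6 / 6 / 14
t = np.arange(4000) / fs
x = 0.5 * np.sin(2 * np.pi * 15e3 * t)
mag = goertzel_mag(x, 15e3, fs, np.kaiser(4000, 13.0))
```

With a rectangular window and the tone exactly on a bin, the result equals the FFT bin magnitude A/2; with the Kaiser window it matches the corresponding windowed FFT bin.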
« Last Edit: June 03, 2021, 01:11:10 pm by gf »
 

Offline Kalvin (Topic starter)
For a quick test of ADC dynamic range and linearity with a digital 15kHz band-pass filter added, [...]

IMO you are thinking too complicated. You need neither decimation nor an explicit bandpass filter. The DFT acts as a filter bank, and the window function already is the bandpass filter applied to each frequency bin. For instance, Kaiser 13 has a 3dB bandwidth of about 2 DFT bins, a stopband attenuation of about 100dB, and a main-lobe width of about 8.5 bins. When you apply it to 4000 samples captured at a sampling rate of 12/14 MSa/s, you get a -3dB RBW of ~430Hz and a main-lobe width of about 1.8kHz. A DFT calculates all frequency bins in the range -fs/2...fs/2, but for the IF detector we are eventually only interested in a single bin, namely the bin at 15kHz. So the idea is to calculate the DFT for this bin only and skip the calculation of all the other bins. This leads to a Goertzel detector. It does exactly that: calculating the DFT for a single frequency. The window function, acting as the detection bandpass filter, still applies to Goertzel as well, of course.

At this point I am just performing rough signal analysis on the captured signals in order to get an overview of how the signals/spectrum look in general, whether there are any specific noise peaks, and what kind of dynamic range can be expected, i.e. a kind of feasibility study and proof of concept. Eventually I will implement the actual signal detector on the STM32F103. The ARM Cortex-M3 doesn't have an FPU, so any complicated DSP will take a certain number of cycles; thus the simpler the better. The effects of the filter's coefficient quantization and the available numeric range need to be considered carefully as well when aiming for very narrow filters. If the ratio of the filter's center frequency or 3dB point to the sampling frequency (800+ kHz) is very small, it might be practical or even necessary to perform some decimation prior to filtering in order to get the filter coefficients into a practical numeric range.

Edit: For the network analyzer use case, we are free to choose the RX LO frequency so that the received CW signal lands in a location which contains little noise. For example, the noise level increases near 0Hz, so it is not practical to choose the RX LO frequency such that the received CW signal is close to 0Hz. Also, due to the filter coefficients it is more practical to choose the RX LO frequency so that the received CW signal is a bit higher than the experimental 15 kHz. For the spectrum analyzer use case, I may just implement a simple Nth-order IIR low-pass filter with a set of pre-calculated coefficients, so that the filter's bandwidth is selected according to the sweep step size.
« Last Edit: June 03, 2021, 02:32:21 pm by Kalvin »
 

Online gf
The reference level (yellow trace) measurement was performed with the tracking generator set to a 15 kHz offset, a 6dB attenuator connected to the TG output, no attenuator at the RX input, and a coax cable between TG and RX, thus creating a loop from TG to RX with 6dB attenuation. Each 4096-sample buffer was filtered with a 4th-order bandpass filter (center frequency 15kHz, BW 400Hz) before analysis.

I don't understand how you can reliably apply such a bandpass filter to only ~4.78ms of sampled data (4096 samples). For instance, the impulse response of an IIR filter like
Code: [Select]
cheby1(4,0.1,[(15000-200)/(fs/2) (15000+200)/(fs/2)]) has a width of> 25ms until it fades out to a value close to zero, while the captured samples have a lenght of only ~4.78ms. To eliminate the effect of filter settling you would need to discard 25+ ms from the front of the filtered buffer, and consider only the remaining samples for further processing. But after discarding leading 25ms, there are no samples left...
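The settling time can be checked numerically. Below is a Python/scipy equivalent of the Octave call above, in SOS form (a bandpass this narrow is numerically fragile as a single `(b, a)` polynomial); the ~857 kHz sample rate is an assumption derived from 4096 samples ≈ 4.78 ms:

```python
import numpy as np
from scipy.signal import cheby1, sosfilt

fs = 857_000.0   # assumed: 4096 samples ~ 4.78 ms
sos = cheby1(4, 0.1, [(15_000 - 200) / (fs / 2), (15_000 + 200) / (fs / 2)],
             btype='bandpass', output='sos')

# Impulse response of the filter
n = 80_000
h = sosfilt(sos, np.r_[1.0, np.zeros(n - 1)])
env = np.abs(h) / np.max(np.abs(h))

# Last sample where the response is still above -60 dB of its peak
settle = int(np.max(np.nonzero(env > 1e-3)[0]))
print(settle, 1000 * settle / fs)   # settling length in samples and ms
```

The settling length comes out far longer than the 4096-sample capture buffer, which is the point being made here.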
« Last Edit: June 04, 2021, 12:44:42 pm by gf »
 

Offline KalvinTopic starter

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Quote
The reference level (yellow trace) measurement was performed with the tracking generator set to a 15 kHz offset, a 6dB attenuator connected to the TG output, no attenuator at the rx input, and a coax cable between TG and rx, thus creating a loop from TG to rx with 6dB attenuation. Each 4096-sample buffer was filtered with a 4th order bandpass filter (center frequency 15kHz, BW 400Hz) before analysis.

I don't understand how you can reliably apply such a bandpass filter to only ~4.78ms of sampled data (4096 samples). For instance, the impulse response of an IIR filter like
Code: [Select]
cheby1(4,0.1,[(15000-200)/(fs/2) (15000+200)/(fs/2)])
has a width of > 25ms until it fades out to a value close to zero, while the captured samples have a length of only ~4.78ms. To eliminate the effect of filter settling you would need to discard 25+ ms from the front of the filtered buffer and consider only the remaining samples for further processing. But after discarding the leading 25ms, there are no samples left...

I guess the same question applies to the Goertzel detector as well? As the Goertzel has a very narrow bandwidth, its impulse response will be very long, too (it is actually a resonator, as the pole is on the unit circle), thus its output will rise slowly until it reaches steady state. So far I haven't seen any information saying that a Goertzel detector cannot be used to detect signals shorter than the Goertzel's impulse response or rise time. For shorter signals the Goertzel's output power will be less than the maximum available at steady state, though. Please correct me if I am wrong here.

The idea is to use a bandpass IIR filter to process the sampled signal buffer (4096 samples), and compute the RMS (energy) of the filter output over all processed samples: a kind of RMS detector with a filter applied to its input signal. At least this seems to work in practice, but I do not know whether it works in theory.

Edit: Changed wording.

Edit 2: I do understand that when using a signal shorter than the IIR filter's impulse response, the filter will not reach steady state. Now, if we change the filter parameters (center frequency or bandwidth), the output energy between two filters will differ due to different rise times, and it is necessary to compute a correction/calibration factor for each filter to be used. Since we are working in the digital domain, computing these calibration factors is quite trivial, though.
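The filtered-buffer RMS detector plus per-filter calibration can be sketched as follows (Python/scipy for illustration; the ~857 kHz sample rate is an assumption, and the calibration factor is valid only for a tone at the calibration frequency and the same buffer length):

```python
import numpy as np
from scipy.signal import cheby1, sosfilt

fs = 857_000.0   # assumed sample rate
N = 4096

def bp_rms(x, f0, bw):
    """Band-pass the buffer and return the RMS of the filter output."""
    sos = cheby1(4, 0.1, [(f0 - bw / 2) / (fs / 2), (f0 + bw / 2) / (fs / 2)],
                 btype='bandpass', output='sos')
    return np.sqrt(np.mean(sosfilt(sos, x) ** 2))

# Per-filter calibration factor: the RMS reported for a reference tone
# of known (unit) amplitude, captured with the same buffer length
t = np.arange(N) / fs
cal = bp_rms(np.sin(2 * np.pi * 15_000 * t), 15_000, 400)

# A measurement is then divided by the calibration factor; because the
# filter is linear, the (incomplete) settling affects the reference and
# the measurement identically, so it cancels for equal-length buffers
x = 0.5 * np.sin(2 * np.pi * 15_000 * t)
amp_est = bp_rms(x, 15_000, 400) / cal   # amplitude relative to reference
```

This matches the Edit 2 reasoning: a separate `cal` would be pre-computed for each (center frequency, bandwidth) combination.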
« Last Edit: June 04, 2021, 03:49:42 pm by Kalvin »
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1163
  • Country: de
Quote
I guess the same question applies to the Goertzel detector as well? As the Goertzel has a very narrow bandwidth, its impulse response will be very long, too (it is actually a resonator, as the pole is on the unit circle), thus its output will rise slowly until it reaches steady state. So far I haven't seen any information saying that a Goertzel detector cannot be used to detect signals shorter than the Goertzel's impulse response or rise time. For shorter signals the Goertzel's output power will be less than the maximum available at steady state, though. Please correct me if I am wrong here.

The idea is to use a bandpass IIR filter to process the sampled signal buffer (4096 samples), and compute the RMS (energy) of the filter output over all processed samples: a kind of RMS detector with a filter applied to its input signal. At least this seems to work in practice, but I do not know whether it works in theory.

Edit: Changed wording.

Edit 2: I do understand that when using a signal shorter than the IIR filter's impulse response, the filter will not reach steady state. Now, if we change the filter parameters (center frequency or bandwidth), the output energy between two filters will differ due to different rise times, and it is necessary to compute a correction/calibration factor for each filter to be used. Since we are working in the digital domain, computing these calibration factors is quite trivial, though.

A regular FIR filter is based on linear convolution, which is rather meant to be applied to a continuous, infinite stream of samples. If you instead apply linear convolution to a finite number of samples, then steady state is reached only after the length of the filter's impulse response, and the leading filtered samples are "garbage". For IIR, basically the same applies, but the impulse response length is actually infinite, so an arbitrary end of the impulse response needs to be defined, at a point where it returns "close enough" to zero.

The Goertzel algorithm is, under the hood, a DFT calculated for only a single frequency (or a snapshot of an STFT, calculated for a single chunk of samples at a particular point in time, and only for a single frequency).
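That equivalence is easy to demonstrate: the Goertzel recurrence applied to a buffer yields exactly the power of the corresponding FFT bin. A minimal sketch (bin 72 of a 4096-point buffer corresponds to roughly 15 kHz at the ~857 kHz rate assumed elsewhere in the thread):

```python
import numpy as np

def goertzel_power(x, k):
    """Goertzel recurrence: squared magnitude of DFT bin k of buffer x."""
    N = len(x)
    c = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for xn in x:
        s0 = xn + c * s1 - s2   # resonator update
        s2, s1 = s1, s0
    # standard closing formula: |X[k]|^2 = s1^2 + s2^2 - 2*cos(w)*s1*s2
    return s1 * s1 + s2 * s2 - c * s1 * s2

N, k = 4096, 72                      # bin 72 ~ 15 kHz at ~857 kHz
x = np.sin(2 * np.pi * k * np.arange(N) / N)
p_goertzel = goertzel_power(x, k)
p_fft = np.abs(np.fft.fft(x)[k]) ** 2
```

On the STM32 the same recurrence would run in fixed point, with only one multiply-accumulate pair per sample.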

The DFT treats the samples as if they were circular. The window function smooths the wrap-around discontinuity between end and start, reducing spectral leakage.

But a DFT can also be interpreted as a filter bank. According to the filter bank interpretation, the chosen window function is the impulse response of a prototype low-pass filter, which is under the hood converted to a bandpass and applied to each DFT frequency bin (see previous link). The DFT window always has the same length as the number of samples; it cannot be longer. While there exist various commonly used window functions (Hann, Hamming, Kaiser, ...), basically any FIR lowpass with N taps (where N is the DFT size -- the number of samples) could be used as a window function in order to give the filter bank the desired frequency response (of course, if you need special properties like "constant overlap-add", this may limit the choice of suitable window functions, but that is not an issue here).

A window function with 4000 "FIR taps" (for a 4000-point DFT) is already quite a large number, enabling a pretty narrow bandwidth. But the minimum feasible bandwidth is eventually limited by the number of samples. And my feeling is that there is a trade-off between stop-band rejection (-> power of out-of-band frequencies leaking to the filter output) and ENBW (equivalent noise bandwidth -> i.e. noise power picked up inside the passband).

Integrating the power of the band-pass filtered signal is certainly a valid procedure (granted that the band-pass filtering is valid in the first place). The advantage of the Goertzel is the implied bandpass filter at low computational cost, and it collects both amplitude and phase, so that phase differences between subsequent readings can be measured. Phase measurements are more sensitive to noise than amplitude measurements, though. For a phase confidence of 1° (standard deviation), you need an effective SNR of better than 30dB.
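The 30dB figure can be sanity-checked with a small Monte Carlo run. A sketch, with "effective SNR" interpreted as the SNR in the detection bin itself (that interpretation, plus the bin number and buffer length, are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, trials = 4096, 72, 400
snr_db = 30.0                       # effective (in-bin) SNR
A = np.sqrt(2.0)                    # unit-power cosine
t = np.arange(N)
sig = A * np.cos(2 * np.pi * k * t / N)

# Choose the noise variance so that the SNR in DFT bin k is snr_db:
# |S|^2 / E|N_bin|^2 = (N*A/2)^2 / (N*sigma^2)
sigma = np.sqrt(N * A**2 / (4 * 10**(snr_db / 10)))

phases = []
for _ in range(trials):
    x = sig + rng.normal(scale=sigma, size=N)
    phases.append(np.angle(np.fft.fft(x)[k]))
std_deg = np.degrees(np.std(phases))
print(std_deg)   # ~1.3 degrees; reaching 1 degree needs a bit more SNR
```

This agrees with the small-error approximation sigma_phi ≈ 1/sqrt(2·SNR) radians, which gives about 1.28° at 30 dB and requires roughly 32 dB for 1°.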

Quote
The effects of the filter's coefficient quantization and the available numeric range need to be considered carefully as well if very narrow filters are wanted. If the ratio of the filter's center frequency or 3dB point to the sampling frequency (800+ kHz) is very small, it might be practical/necessary to perform some decimation prior to filtering in order to get the filter coefficients into a practical numeric range.

Sure, numeric ranges need to be planned carefully. I tried to check the effect of quantization. Quantizing the coefficients of a 4k-point Kaiser window to 16 bits seems to reduce the window's stop-band attenuation to ~120dB. The question is whether this is enough, or whether more than that is required. 32-bit Q31 arithmetic should not be a problem for the Cortex-M3. The accumulator can also be 64 bits if necessary. For real-time processing there are fewer than 84 cycles per sample available, which rather rules out overly complex filtering - as you said yourself. Even a decimation filter with good stopband attenuation might already be too expensive. Goertzel applied to the undecimated data is computationally not so expensive, so it seems well feasible, OTOH.
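A quick numerical illustration of that quantization effect (Python/numpy; the thread does not say which Kaiser beta was used, so beta = 17 is a hypothetical choice whose ideal sidelobes sit well below the 16-bit floor):

```python
import numpy as np

N = 4096
w = np.kaiser(N, 17.0)              # assumed beta; ideal sidelobes ~ -130 dB
wq = np.round(w * 32767) / 32767    # coefficients quantized to 16 bits

def max_sidelobe_db(win, skip_bins=8):
    """Peak sidelobe level relative to the main lobe (zero-padded FFT,
    skipping the main lobe region)."""
    pad = 16
    W = np.abs(np.fft.rfft(win, pad * N))
    W /= W[0]
    return 20 * np.log10(np.max(W[skip_bins * pad:]))

# ENBW of the window in DFT bins: N * sum(w^2) / sum(w)^2
enbw_bins = N * np.sum(w ** 2) / np.sum(w) ** 2

print(max_sidelobe_db(w), max_sidelobe_db(wq), enbw_bins)
```

The quantized window's stop-band floor lands in the neighborhood of -120 dB, consistent with the figure quoted above, while the ENBW (~2.3 bins for this beta) shows the stop-band vs. noise-bandwidth trade-off mentioned earlier.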

Btw, could you post the raw data from the previous test?

Edit: You may be interested in this paper, too.
« Last Edit: June 04, 2021, 10:08:38 pm by gf »
 

Offline KalvinTopic starter

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Btw, could you post the raw data from the previous test?
Please find the signals in the attachment. I recreated the 15kHz test signals using the ADC with DMA and the correct ADF4351 configuration. The file names contain the attenuation used. The file named noisefloor was measured with the TG off, giving a baseline for the board's noise floor. The file named noloop was measured with the TG on but no loop connected (for measuring on-board signal crosstalk from TG to rx). The signal with 0dB attenuation may just be starting to overdrive the rx, but the signal with 10dB attenuation is clean.
 
The following users thanked this post: gf

