### Author Topic: A comment on DSO FFTs


#### rhb

• Super Contributor
• Posts: 2428
• Country:
##### A comment on DSO FFTs
« on: October 08, 2017, 01:08:52 pm »
I spent my career in the oil industry focused on reflection seismology.  Some comments I've read about DSO FFTs have made it apparent that a lot of people, including top tier instrument vendor support staff, do not understand the tradeoffs.

You have a choice.  You can have the frequency resolution of the full trace length with the dynamic range of the ADC, or you can have larger dynamic range at the price of reduced frequency resolution by subdividing N samples into P segments, computing the FFT of each segment, and then summing.  The latter also reduces the variance of the estimate, which is the primary motivation in seismology, as we normally have 24 bits or more.

Every time you double the number of FFTs summed you gain 6 dB of dynamic range OR you make a corresponding reduction in the variance of the estimate.  Pick one.
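The segment-averaging side of this tradeoff can be sketched numerically. A minimal numpy example (my own construction: noise-only input, no overlap or windowing) comparing one long FFT against the average of P short segment FFTs:

```python
# Sketch of the tradeoff: one long FFT vs. averaging P short-segment FFTs.
# Assumes white Gaussian noise. Segment averaging (Welch's method without
# overlap or windowing) trades frequency resolution for lower variance.
import numpy as np

rng = np.random.default_rng(0)
N, P = 1 << 16, 64            # total samples, number of segments
x = rng.normal(size=N)        # noise-only record
seg = N // P

# Single long FFT: fine frequency resolution, high variance per bin
long_psd = np.abs(np.fft.rfft(x))**2 / N

# P short FFTs averaged: 1/P of the resolution, ~1/P of the variance
segs = x.reshape(P, seg)
short_psd = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2 / seg, axis=0)

# Relative variance of the estimate (spread around its mean)
print(np.var(long_psd) / np.mean(long_psd)**2)    # ~1 for a single FFT
print(np.var(short_psd) / np.mean(short_psd)**2)  # ~1/P
```

The long FFT has P times as many frequency bins, but each bin of the averaged estimate fluctuates far less, which is the variance-reduction Reg describes.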

For the ambitious, I refer you to "Random Data" by Bendat & Piersol.  It is one of the best investments you can make.

Have Fun!
Reg

#### jjoonathan

• Frequent Contributor
• Posts: 302
• Country:
##### Re: A comment on DSO FFTs
« Reply #1 on: October 08, 2017, 02:39:57 pm »
Mathematically I agree, but I think the simplifications that often get made are more than reasonable.

Quote
You can have the frequency resolution of the full trace length with the dynamic range of the ADC, or you can have larger dynamic range at the price of reduced resolution

* Typical trace lengths are ~thousands of times the screen resolution. Dynamic range / variance reduction is almost always useful; a zillion frequency buckets that you can't even see without zooming seldom are.

* The zillion-buckets calculation is O(N*ln(N)) while the "virtual spectrum analyzer" is O(N) in trace length. One can handle arbitrarily long traces (modulo finite float precision); the other cannot.

* Cheap scopes use fft(trace) because it's easy to implement, not because they think it's better than SA-like behavior.

I'm pretty comfortable calling SA-like behavior "good" and fft(trace) "bad" in the context of scope comparisons, even if someone somewhere has an application for fft(trace) and even though I recognize that mathematically speaking a tradeoff is happening.

A typical spectrum analyzer locked to its narrowest RBW at every span would be considered broken. So should the corresponding FFT.

Quote
Every time you double the number of FFTs summed you gain 6 dB of dynamic range OR you make a corresponding reduction in the variance of the estimate.  Pick one.

Clarification: if I'm understanding your setup correctly, the choice is in the signal, not in the FFT. Frequency buckets with a signal see variance reduction; frequency buckets without see the noise floor drop / dynamic range increase (which is really the same thing as variance reduction around 0, of course).

• Super Contributor
• Posts: 5579
• Country:
##### Re: A comment on DSO FFTs
« Reply #2 on: October 08, 2017, 04:38:26 pm »
In many scopes the FFT function is limited to some silly number of points, like 2048 or so. And the reason people don't like FFT on the scope is that you never know what you will get. So it is just easier to forget that it exists.

There is rarely a case where I would need to see a spectrum, but even if I do, it would be of a specific part of the waveform anyway. So it is just easier to capture the data and import it into a PC, where there are limitless tools for this kind of stuff.
Alex

#### Kleinstein

• Super Contributor
• Posts: 5760
• Country:
##### Re: A comment on DSO FFTs
« Reply #3 on: October 08, 2017, 07:22:31 pm »
Dividing the data into small chunks and doing the FFTs separately is not much different from doing a full FFT and then smoothing the curve with way too many frequency points. In both cases you get the lower noise. However, averaging has to be done on the power level or on absolute values, not on the raw complex FFT values. So the gain in dynamic range is limited, and the noise will not totally go away, but will stay there as a rather high background.
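The distinction between averaging raw complex FFT values and averaging on the power level can be illustrated with a small sketch (my own construction in numpy, using untriggered noise-only segments):

```python
# Sketch of the point above: averaging raw complex FFTs of segments with
# random phase cancels toward zero, while averaging |FFT|^2 (power)
# converges to a stable estimate; the noise stays as a background floor.
import numpy as np

rng = np.random.default_rng(1)
P, n = 256, 1024
segs = rng.normal(size=(P, n))          # untriggered noise segments
specs = np.fft.rfft(segs, axis=1)

complex_avg = np.abs(np.mean(specs, axis=0))**2   # cancels toward zero
power_avg   = np.mean(np.abs(specs)**2, axis=0)   # settles near n*sigma^2

print(np.mean(complex_avg), np.mean(power_avg))
```

Only consistently triggered (phase-coherent) chunks would let a complex average pull a repetitive signal up out of the noise, which is what the next paragraph is about.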

A true increase in dynamic range would need individually triggered chunks, as in high resolution modes. Then one could just as well do the averaging before the FFT and would only need a single FFT.

Chopping up the curve too much will also add more artifacts from the start / end of each segment - so one should not overdo it.

The computational overhead looks large going from O(N) to O(N*log(N)), but it is not: if you already start with a reasonably large FFT window (e.g. N = 4096), the extra log(N) factor from a trace maybe 1024 times longer does not even double the per-sample work. It may have a larger effect on systems with cache: a 4096-point FFT has a good chance of running in cache, while a millions-of-points FFT is more like a worst case for the cache.
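The log-factor arithmetic above can be checked on the back of an envelope; a sketch, taking N*log2(N) as the operation-count model for a radix-2 FFT:

```python
# Back-of-envelope check: FFT work scales roughly as N*log2(N), so the
# per-sample cost grows only with log2(N). Going from a 4096-point FFT to
# one 1024x longer raises per-sample cost from 12 to 22 - under a doubling.
import math

n_small = 4096
n_big = 4096 * 1024                      # ~4.2M points

per_sample_small = math.log2(n_small)    # 12
per_sample_big = math.log2(n_big)        # 22

print(per_sample_big / per_sample_small) # ~1.83, less than 2x
```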

#### nctnico

• Super Contributor
• Posts: 16932
• Country:
##### Re: A comment on DSO FFTs
« Reply #4 on: October 08, 2017, 08:12:10 pm »
In many scopes the FFT function is limited to some silly number of points, like 2048 or so. And the reason people don't like FFT on the scope is that you never know what you will get. So it is just easier to forget that it exists.

There is rarely a case where I would need to see a spectrum, but even if I do, it would be of a specific part of the waveform anyway. So it is just easier to capture the data and import it into a PC, where there are limitless tools for this kind of stuff.
I can't agree with this because it depends greatly on what kind of circuits you work on, and there are DSOs with 1 Mpts FFT which update fast as well. I regularly work on circuits requiring digital signal processing. A (long) FFT is a very handy tool to check how much filtering a signal will need and whether filtering will be enough to separate the unwanted parts from the wanted parts. I usually have a DAC to verify that the various digital signal processing stages work OK. Again, the FFT allows checking for aliasing and frequency responses.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.

#### Mechatrommer

• Super Contributor
• Posts: 8835
• Country:
• reassessing directives...
##### Re: A comment on DSO FFTs
« Reply #5 on: October 08, 2017, 09:30:16 pm »
There is rarely a case where I would need to see a spectrum, but even if I do, it would be of a specific part of the waveform anyway. So it is just easier to capture the data and import it into a PC, where there are limitless tools for this kind of stuff.
this...

You have a choice.  You can have the frequency resolution of the full trace length with the dynamic range of the ADC, or you can have larger dynamic range at the price of reduced frequency resolution by subdividing N samples into P segments, computing the FFT of each segment, and then summing.  The latter also reduces the variance of the estimate, which is the primary motivation in seismology, as we normally have 24 bits or more.
Every time you double the number of FFTs summed you gain 6 dB of dynamic range OR you make a corresponding reduction in the variance of the estimate.  Pick one.
and how would chopping a large dataset into several small-piece FFTs and then averaging help? it will introduce too much leakage i guess... i've tried calculating a smaller-piece FFT (10Kpts, 2a.png) compared to a near-full-set FFT (10Mpts, 5a.png) from the same 24Mpts dataset (0.png). if you have many FFTs like in 2a.png and then average them, all we get is still an FFT floored somewhere at -60dB, whereas doing a single calculation on the large dataset will give you a noise floor at around -90dB as in 5a.png. we have both resolution and dynamic range, so i'm not getting what you are saying..
« Last Edit: October 08, 2017, 09:32:46 pm by Mechatrommer »
if something can select, how cant it be intelligent? if something is intelligent, how cant it exist?

#### jjoonathan

• Frequent Contributor
• Posts: 302
• Country:
##### Re: A comment on DSO FFTs
« Reply #6 on: October 09, 2017, 12:14:42 am »
Quote
and how would chopping a large dataset into several small-piece FFTs and then averaging help? it will introduce too much leakage i guess... i've tried calculating a smaller-piece FFT (10Kpts, 2a.png) compared to a near-full-set FFT (10Mpts, 5a.png) from the same 24Mpts dataset (0.png). if you have many FFTs like in 2a.png and then average them, all we get is still an FFT floored somewhere at -60dB, whereas doing a single calculation on the large dataset will give you a noise floor at around -90dB as in 5a.png. we have both resolution and dynamic range, so i'm not getting what you are saying..

The short FFT has an "RBW filter" 1000x wider than the long FFT, so we would expect each bin of the noise floor (or of any broadband signal) to contain 1000x more energy and show up 30 dB higher, which it does. The displayed dBm/Hz of each FFT is the same, but the convention is to put dBm rather than dBm/Hz on the vertical axis by default. This is how it would work on a spectrum analyzer too.  The task of pulling narrowband signals out of the noise floor is the reason why "tiny RBW, peak/avg detect" probably ought to be supported, even though I still maintain that it shouldn't be the default or only mode of operation.
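This bin-width scaling is easy to reproduce at home. A sketch in numpy (the divide-by-length normalization is my assumption about scope-style display, where a coherent tone reads the same height at any FFT length):

```python
# Sketch of the RBW argument: with per-length normalization, each noise
# bin of a 1000x shorter FFT holds ~1000x more noise energy, i.e. the
# displayed floor sits ~30 dB higher. In dBm/Hz both would be identical.
import numpy as np

rng = np.random.default_rng(2)
n_short, ratio = 1024, 1000
n_long = n_short * ratio
x = rng.normal(size=n_long)

# Normalize by length so a coherent tone would keep constant displayed height
floor_long  = np.mean(np.abs(np.fft.rfft(x) / n_long)**2)
floor_short = np.mean(np.abs(np.fft.rfft(x[:n_short]) / n_short)**2)

print(10 * np.log10(floor_short / floor_long))   # ~30 dB
```

This matches Mechatrommer's observed -60 dB vs. -90 dB floors for a 1000x length ratio.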

Also, if people are interested in running experiments at home, we can't omit the interaction with window functions. If you have a signal of length N and you divide it into k pieces of length n = N/k and add/average them (depending on your normalization convention), you have actually just calculated bins 0, k, 2k, ..., N of the big FFT and discarded the rest of the information. You need to multiply the signal by a window function before chopping it into k pieces, which makes the "chop and average" result equivalent to taking the large FFT, smoothing with fft(window), and then downsampling. Mechatrommer's setup almost certainly did this behind the scenes; otherwise I would expect a deceptively lower noise floor with gaps in frequency coverage.
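The chop-and-add identity just described can be verified numerically; a sketch in numpy (the reshape/sum layout is my own illustration):

```python
# Numerical check: summing k length-n chunks of a signal and taking one
# short FFT yields exactly bins 0, k, 2k, ... of the full length-N FFT.
import numpy as np

rng = np.random.default_rng(3)
k, n = 8, 256
N = k * n
x = rng.normal(size=N)

chopped = np.fft.fft(x.reshape(k, n).sum(axis=0))  # FFT of the chunk sum
decimated = np.fft.fft(x)[::k]                     # every k-th big-FFT bin

print(np.allclose(chopped, decimated))             # True
```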

As Kleinstein mentioned, you can get 80% of the way there with avg, high-res mode, and choice of memory depth + zoom; you just have to be really comfortable with your frequency/time domain dualities.
« Last Edit: October 09, 2017, 12:39:13 am by jjoonathan »

#### David Hess

• Super Contributor
• Posts: 9495
• Country:
• DavidH
##### Re: A comment on DSO FFTs
« Reply #7 on: October 09, 2017, 05:11:02 am »
As Kleinstein mentioned, you can get 80% of the way there with avg, high-res mode, and choice of memory depth + zoom, you just have to be really comfortable with your frequency/time domain dualities

Unfortunately I am familiar enough with the subject to have concluded that most DSO FFT implementations are toys.  If someone has implemented good DSO FFT functionality, then I have not found it. (1)  I could live with the limited dynamic range 8 bits provide, but not with a DSO FFT which requires me to calculate the equivalent noise bandwidth myself, and the averaging situation is completely unacceptable.

https://www.edn.com/electronics-blogs/the-practicing-instrumentation-engineer/4427466/1/DSOs-and-noise-

Result?  I either transfer the waveform to a computer and do the FFT there where I can know exactly what is going on or use my *analog* oscilloscope to make spot noise measurements.  The analog way is easier, faster, and less error prone.

(1) And that is before I consider FFTs which do not return magnitude and phase.  Some modern very expensive DSOs can do this.  But so can some ancient and now inexpensive used DSOs.

#### rhb

• Super Contributor
• Posts: 2428
• Country:
##### Re: A comment on DSO FFTs
« Reply #8 on: October 09, 2017, 05:46:44 am »
Some clarifications in no particular order:

1) Averaging a series of short windows reduces the variance.  Summing a series of short windows increases the dynamic range.  Divide by N or 1. Pick one.

2) A window in time is a moving average in frequency.  In traditional DSP speak a multiplication by a rectangular window in time is a convolution with sinc(x) in frequency.  A Gaussian window in one domain is a Gaussian smoother in the other.

3) Not being able to select the record length, FFT length, number of segments, average or sum, and time domain window type is the motivation for my comment. This is not difficult to implement, nor is it particularly compute intensive.  The absence of these options is a comment on the mathematical prowess of the programmers.  No more.

4) 20*log10(20,000) is 86 dB.  That's a substantial increase in dynamic range if you choose not to normalize the sum of the FFTs. 1/sqrt(20,000) is a factor of 141 reduction in the standard deviation of the estimate.  Lots of DSOs have sufficient memory to record more than 20,000 segments.

5) A good FFT is a nice feature on a scope, but not a replacement for an SA. But sometimes all you have is a scope.  Lots of people (including me) don't have access to an SA.  I tried to fix that, but the SA was much too buggy to keep.  I'm now considering spending over twice as much.

6) At present, for most scopes the only answer is to move the data to a PC and run MATLAB or Octave.

7) There is no need for guessing. All the math was nailed down in the 40's by Wiener et al.  It's actually astonishingly easy to do it all on the back of a cocktail napkin if you know the transform pairs in time and frequency.  Ronald Bracewell has an excellent collection of graphs of common functions in both domains in his classic text on the Fourier transform.  Almost any signal you can come up with can be modeled as the sum of other signals with simple transforms.

8) Wraparound is a VERY serious issue.  Discontinuities at the start and end of a window MUST be handled correctly or you will get GIGO.

9) A long FFT has higher variance relative to the average of a bunch of short FFTs.  It also has narrower frequency bins and greater dynamic range.  Pick one.

10) No information is discarded by chopping a long series into short pieces.  It's just being used in different ways.

11) The change in dynamic range assumes Gaussian distributed additive random noise.  I should have stated that assumption despite it being so common I refer to it as "sprinkling Gauss water on the problem".

12) If a long trace consists of a series of segments with gaps between them it gets rather messy if the segments don't have a consistent trigger for each.

13) Couldn't agree more that most DSO FFTs are toys.
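Point 2 above (a window in time is a moving average in frequency) can be checked numerically. A sketch in numpy using an explicit circular convolution of the spectra (my own construction, not from the thread):

```python
# Check of point 2: multiplication by a window in time equals circular
# convolution of the spectra in frequency: FFT(x*w) = (FFT(x) (*) FFT(w)) / N.
import numpy as np

rng = np.random.default_rng(4)
N = 64
x = rng.normal(size=N)
w = np.hanning(N)                      # smooth taper, narrow in frequency

X, W = np.fft.fft(x), np.fft.fft(w)
lhs = np.fft.fft(x * w)

# Explicit circular convolution of the two spectra
rhs = np.array([np.sum(X * W[(f - np.arange(N)) % N]) for f in range(N)]) / N

print(np.allclose(lhs, rhs))           # True
```

A window whose transform is narrow and well behaved (Gaussian, Hann) therefore acts as a gentle smoother in frequency, while the rectangular window's sinc sidelobes smear energy far from each bin.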

OT:  I HATE  "smileys" being inserted where they don't belong!

#### Mechatrommer

• Super Contributor
• Posts: 8835
• Country:
• reassessing directives...
##### Re: A comment on DSO FFTs
« Reply #9 on: October 09, 2017, 06:45:29 am »
OT:  I HATE  "smileys" being inserted where they don't belong!
8} or 8] pick one
if something can select, how cant it be intelligent? if something is intelligent, how cant it exist?

#### Kleinstein

• Super Contributor
• Posts: 5760
• Country:
##### Re: A comment on DSO FFTs
« Reply #10 on: October 09, 2017, 07:24:29 am »
Chopping the data into pieces will lose some information (give less weight to some of it) if a window function is used. I am not so sure how averaging adjacent chunks could reduce artifacts from not using a window function.

My guess is there will be some artifacts, as averaging the complex numbers would add odd interference in some cases, while just averaging the power cannot lower the noise floor. So as far as I see it, neither case would work perfectly.

An FFT can do quite a lot of what a spectrum analyzer does. Modern SAs usually also use a kind of FFT to analyze chunks of the frequency spectrum at a time, instead of the classical way of one frequency at a time.
The usual 8 bit ADC of a DSO, however, can produce harmonics / spurs - SAs can use more than 8 bits at lower speed if needed.

A good FFT (including some extras) might replace the SA in some cases, though mainly at lower frequencies.

#### rhb

• Super Contributor
• Posts: 2428
• Country:
##### Re: A comment on DSO FFTs
« Reply #11 on: October 09, 2017, 09:53:13 am »
Some clarifications in no particular order:

1) Averaging a series of short windows reduces the variance.  Summing a series of short windows increases the dynamic range.  Divide by N or 1. Pick one.

On consideration, I'm not sure my assertion on dynamic range is correct.  It's certainly true with respect to Gaussian distributed random noise, but the assertion may be too general.  It may not apply in the case of comparing two signals.  That would take a deeper consideration of the characteristics of quantization error than I wish to undertake.  In any case, the assertion regarding division by N or 1 is BS.  Sorry about that.

In summary,  DSOs need to allow more user control of the FFT.  Otherwise they are of very limited use and likely to be misleading.

Without actually knowing the details of the construction of an SA, I'm not sure how much one can say about what one sees.  Nonlinearities make the analysis rather complicated.  Sadly, none of the lower priced instruments tell the user anything at all about their design and construction.
