| Bode Plot Computational Time for various DSOs |
| gf:
--- Quote from: Someone on November 08, 2022, 06:46:11 am ---GSa/s have nothing to do with it, any frequency selective suppression is relative to the steepness of the filter + settling (and therefore total period of capture). It is an absolute constraint, more filtering and noise rejection requires more periods captured (one way or another).
--- End quote ---

Signal: 1kHz sine wave with peak amplitude = 1, polluted with Gaussian noise with standard deviation = 0.0039763 (SNR = 45dBc)
Measurement interval/window: 10ms
1) signal sampled at 100kSa/s -> 1000 samples, detector ENBW = 100Hz
2) signal sampled at 10MSa/s -> 100000 samples, detector ENBW = 100Hz

If you look at the attached images, the noise floor of (1) is already lower than 45dBc, and the noise floor of (2) is clearly even lower. Note that the filter shape and bandwidth are the same for (1) and (2), and the total measurement interval is the same, too. How do you come to the conclusion that sample rate would not matter for noise floor reduction if the filter and the measurement interval are the same? [ What really matters is the number of samples, and at a higher sample rate a larger number of samples fits into the same measurement interval. ]

EDIT: "Settling" is a good point. After presenting a new stimulus frequency to the DUT, it is of course necessary to wait until the DUT has settled to a stationary state before the measurement interval can begin. The required settling time is hard to predict in advance if the DUT is unknown. The implied filter of a DFT-based detector does not need additional settling time; only a window of N samples needs to be captured and processed once the DUT has settled. If an explicit filter (FIR or IIR) in front of the detector were used, it would require some additional settling time. |
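[Editorial note: the experiment described above is easy to reproduce numerically. The sketch below assumes the parameters stated in the post (1 kHz unit-amplitude tone, 10 ms window, per-sample noise sigma = 0.0039763, rectangular-window FFT detector); the RNG seed and the use of the bin median as a floor estimate are arbitrary choices, not from the post.]

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_floor_dbc(fs, T=10e-3, f0=1e3, sigma=0.0039763):
    """One capture: 1 kHz unit-amplitude sine plus white Gaussian noise,
    analysed with a rectangular-window FFT (the implied DFT detector)."""
    n = int(fs * T)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, sigma, n)
    spec = np.abs(np.fft.rfft(x)) * 2 / n     # amplitude spectrum
    tone_bin = int(round(f0 * T))             # coherent: tone falls on bin 10
    tone = spec[tone_bin]
    # median of the remaining bins estimates the random noise floor
    floor = np.median(np.delete(spec, tone_bin))
    return 20 * np.log10(floor / tone)

lo = noise_floor_dbc(100e3)   # 1000 samples in the 10 ms window
hi = noise_floor_dbc(10e6)    # 100000 samples in the same window
print(f"100 kSa/s floor: {lo:6.1f} dBc")
print(f" 10 MSa/s floor: {hi:6.1f} dBc")
```

With constant per-sample sigma, the 100x higher sample rate lowers the per-bin floor by about 10*log10(100) = 20 dB, matching the attached images; whether that per-sample sigma stays constant on real hardware is exactly what the following replies dispute.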
| Someone:
--- Quote from: gf on November 08, 2022, 03:43:35 pm ---
--- Quote from: Someone on November 08, 2022, 06:46:11 am ---GSa/s have nothing to do with it, any frequency selective suppression is relative to the steepness of the filter + settling (and therefore total period of capture). It is an absolute constraint, more filtering and noise rejection requires more periods captured (one way or another).
--- End quote ---

Signal: 1kHz sine wave with peak amplitude = 1, polluted with Gaussian noise with standard deviation = 0.0039763 (SNR = 45dBc)
Measurement interval/window: 10ms
1) signal sampled at 100kSa/s -> 1000 samples, detector ENBW = 100Hz
2) signal sampled at 10MSa/s -> 100000 samples, detector ENBW = 100Hz

If you look at the attached images, the noise floor of (1) is already lower than 45dBc, and the noise floor of (2) is clearly even lower. Note that the filter shape and bandwidth are the same for (1) and (2), and the total measurement interval is the same, too. How do you come to the conclusion that sample rate would not matter for noise floor reduction if the filter and the measurement interval are the same?
--- End quote ---

Now repeat the same with some aggressor signal that is contained in a small bandwidth (extreme case: an adjacent sine). Adding more sample points does not improve the attenuation of signals the same distance apart; that attenuation is set by the FFT + window function.

You are applying synthetic signals with varying bandwidth. Although the Gaussian noise has constant energy, that energy is spread over a wider bandwidth as you increase the number of samples. A purely mathematical/simulation effect that is not matched by the real systems being discussed here. So it is not the FFT length that is making that difference, but the different noise you added to the signal. If you could make a steeper filter by interpolating more points in between, you could make an infinitely steep FIR of finite length (obviously impossible). |
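[Editorial note: the adjacent-sine point above can be checked directly. The sketch below keeps the thread's 10 ms window and 1 kHz detector frequency; the 1.15 kHz aggressor frequency is an arbitrary illustrative choice. It drives a rectangular-window single-bin DFT detector with a nearby unit sine at two sample rates.]

```python
import numpy as np

def leakage_db(fs, T=10e-3, f_det=1e3, f_agg=1.15e3):
    """Response of a rectangular-window DFT detector tuned to f_det
    when driven only by a unit sine at a nearby aggressor frequency."""
    n = int(fs * T)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_agg * t)
    det = np.abs(np.sum(x * np.exp(-2j * np.pi * f_det * t))) * 2 / n
    return 20 * np.log10(det)

a = leakage_db(100e3)   # 1000 samples in the window
b = leakage_db(10e6)    # 100000 samples in the same window
print(f"{a:.2f} dB  vs  {b:.2f} dB")
```

Both sample rates give essentially the same (roughly -13 dB) leakage: the attenuation of a narrowband aggressor at a fixed frequency offset is fixed by the window length and shape, not by the number of samples, which is the point being made.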
| gf:
--- Quote from: Someone on November 08, 2022, 09:47:28 pm ---You are applying synthetic signals with varying bandwidth. Although the Gaussian noise has constant energy, that energy is spread over a wider bandwidth as you increase the number of samples. A purely mathematical/simulation effect that is not matched by the real systems being discussed here. So it is not the FFT length that is making that difference, but the different noise you added to the signal.
--- End quote ---

Sure, what happens in the end is: when the analog wideband noise is sampled, all noise frequencies beyond Nyquist are folded into the 0...Nyquist range. And Nyquist is of course higher when the sample rate is higher, so a higher sample rate distributes the sampled noise over a larger bandwidth, resulting in a lower power density per Hz. Consequently, a bandpass filter with a given ENBW extracts less noise power from the high-sample-rate signal than from the low-sample-rate signal. Still, the higher sample rate helps to push down the random noise floor without increasing the measurement time.

I don't agree that random noise is of no practical importance. Of course it won't be perfectly white, but it is certainly a nasty issue in practice, too, which needs to be addressed if we don't want a measured high-dynamic-range transfer function of a DUT to get buried in the noise floor. OTOH, a small amount of dithering noise is even useful if we want to squeeze more than 50dB of dynamic range out of an 8-bit ADC (it improves SFDR, INL, ...).

--- Quote ---Now repeat the same with some aggressor signal that is contained in a small bandwidth (extreme case: an adjacent sine). Adding more sample points does not improve the attenuation of signals the same distance apart; that attenuation is set by the FFT + window function. If you could make a steeper filter by interpolating more points in between, you could make an infinitely steep FIR of finite length (obviously impossible).
--- End quote ---

When you initially wrote "with zero noise suppression", I interpreted it as random (wideband) noise, and not as an arbitrary "aggressor signal". I fully agree that a narrower filter bandwidth requires a longer measurement interval. And if a filter with better selectivity than the (rather lousy) sinc response of a rectangular window is desired, then a longer measurement interval is required as well. [ However, if only harmonics of the stimulus need to be eliminated, and if a frequency plan can be arranged such that the window size is an exact integer multiple of the stimulus period, then a rectangular window is still fine (its transfer function has zeros at all harmonics), and it has the lowest ENBW among all window functions for a given window size (measurement interval). ] |
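[Editorial note: the bracketed claim above — a rectangular window spanning an exact integer number of stimulus periods rejects all harmonics — can be verified numerically. The sketch assumes the thread's 1 kHz stimulus, 100 kSa/s, 10 ms window; the harmonic amplitudes 0.5 and 0.3 are arbitrary test values.]

```python
import numpy as np

fs = 100e3
f0 = 1e3                  # stimulus frequency
T = 10e-3                 # window = exactly 10 full periods of f0
n = int(fs * T)
t = np.arange(n) / fs

# hypothetical DUT output: fundamental plus strong 2nd and 3rd harmonics
x = (np.sin(2*np.pi*f0*t)
     + 0.5*np.sin(2*np.pi*2*f0*t)
     + 0.3*np.sin(2*np.pi*3*f0*t))

# single-bin DFT detector at f0, rectangular window
det = np.abs(np.sum(x * np.exp(-2j*np.pi*f0*t))) * 2 / n
print(det)
```

The detector reads 1.0, the fundamental's amplitude: the harmonics land exactly on the zeros of the window's sinc response, so with a coherent frequency plan no fancier window is needed for harmonic rejection.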
| mawyatt:
--- Quote from: gf on November 09, 2022, 12:07:22 am ---OTOH, a small amount of dithering noise is even useful if we want to sqeeze out more than 50dB dynamic range from an 8-bit ADC (improves SFDR, INL,...). --- End quote --- Yes, often used to get a little more out of the lower bits in many ADCs!! --- Quote ---[ However, if only harmonics of the stimulus need to be eliminated, and if a frequency plan can be arranged, such that the window size is an exact integral multiple of the stimulus period, then a rectangular window is still fine (transfer function has zeros at all harmonics), and it has the lowest ENBW among all window functions for a given window size (measurement interval). ] --- End quote --- Often utilized to reduce/eliminate Mains signal corruption in low speed SD ADCs. Best, |
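[Editorial note: a minimal sketch of the dither effect agreed on above, i.e. using noise to "get a little more out of the lower bits". All parameters (8-bit ADC over ±1 V, a DC input at 0.3 LSB, 0.5 LSB rms Gaussian dither, RNG seed) are illustrative assumptions, not from the thread.]

```python
import numpy as np

rng = np.random.default_rng(3)

lsb = 2.0 / 256                 # hypothetical 8-bit ADC over a +/-1 V range
v = 0.3 * lsb                   # DC input stuck between two codes
n = 100_000                     # number of conversions to average

# without dither the quantiser returns the same code every time,
# so averaging can never recover the sub-LSB information
plain = np.round(np.full(n, v) / lsb) * lsb

# ~0.5 LSB rms Gaussian dither randomises the code transitions;
# the average of many conversions converges to the true input
dithered = np.round((v + rng.normal(0.0, 0.5 * lsb, n)) / lsb) * lsb

print(plain.mean() / lsb)       # 0.0 -- the 0.3 LSB input is lost
print(dithered.mean() / lsb)    # close to 0.3 after averaging
```

The same mechanism smears the quantiser's deterministic error into broadband noise, which is why it also improves SFDR and effective INL in the AC case.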
| Someone:
--- Quote from: gf on November 09, 2022, 12:07:22 am ---
--- Quote from: Someone on November 08, 2022, 09:47:28 pm ---You are applying synthetic signals with varying bandwidth. Although the Gaussian noise has constant energy, that energy is spread over a wider bandwidth as you increase the number of samples. A purely mathematical/simulation effect that is not matched by the real systems being discussed here. So it is not the FFT length that is making that difference, but the different noise you added to the signal.
--- End quote ---

Sure, what happens in the end is: when the analog wideband noise is sampled, all noise frequencies beyond Nyquist are folded into the 0...Nyquist range. And Nyquist is of course higher when the sample rate is higher, so a higher sample rate distributes the sampled noise over a larger bandwidth, resulting in a lower power density per Hz. Consequently, a bandpass filter with a given ENBW extracts less noise power from the high-sample-rate signal than from the low-sample-rate signal. Still, the higher sample rate helps to push down the random noise floor without increasing the measurement time.
--- End quote ---

Change the sample rate setting of a scope under these conditions, not in some simulation of perfect signals. The standard deviation of the noise is not a constant: practical real-world scopes include anti-aliasing filters ahead of the sampling, so at lower sample rates (bandwidths) the measured broadband noise reduces, because the bandwidth has been reduced.

Simplified simulation /= real world.

The statement is still very true, and I can keep rewording it: the only way to increase the attenuation of a noise component is to capture more periods. Adding more samples (that have been captured with an anti-alias filter) does not improve the attenuation of any given signal. You have only shown that a signal with noise energy spread across more bandwidth has less noise energy within a specific bandwidth. The FFT shows that, but the changed parameters of the FFT did not cause that. |
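[Editorial note: the anti-aliasing argument above can be illustrated with a toy front end. The sketch assumes white noise sampled at 10 MSa/s, then a 100:1 decimation to 100 kSa/s with a simple boxcar average standing in for the scope's anti-alias filter; the sigma value is reused from the earlier post, and the filter choice is a placeholder, not any real scope's implementation.]

```python
import numpy as np

rng = np.random.default_rng(1)

fs_hi, decim, T = 10e6, 100, 10e-3
sigma = 0.0039763
n_hi = int(fs_hi * T)
noise_hi = rng.normal(0.0, sigma, n_hi)      # broadband noise at 10 MSa/s

# anti-alias filter + 100:1 decimation (boxcar average as a stand-in)
noise_lo = noise_hi.reshape(-1, decim).mean(axis=1)

def floor_rms(x):
    """RMS amplitude of a rectangular-window FFT bin (noise floor)."""
    spec = np.abs(np.fft.rfft(x)) * 2 / len(x)
    return np.sqrt(np.mean(spec[1:] ** 2))

r_std = noise_lo.std() / noise_hi.std()      # captured noise shrinks
r_floor = floor_rms(noise_lo) / floor_rms(noise_hi)  # per-bin floor: equal
print(f"std ratio:   {r_std:.3f}")           # ~0.1 after filtering
print(f"floor ratio: {r_floor:.3f}")         # ~1.0: same noise floor
```

Once an anti-alias filter is in the chain, the lower sample rate captures proportionally less noise, and the DFT-bin noise floor over the same 10 ms window comes out the same at both rates: the extra samples alone buy no attenuation, which is the point this post is making against the constant-sigma simulation.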