Trying to display bandwidth via Math on Siglent SDS2k+/2kHD/800X HD
Performa01:
--- Quote from: gf on May 17, 2022, 10:57:58 am ---
--- Quote from: Performa01 on May 17, 2022, 08:52:53 am ---The “Dx” parameter in the SDS2000X Plus math refers to sample points and we actually need to take the sample rate into account whenever we want to convert it to an absolute time.
--- End quote ---
At 20ns/div and 10ns/div, FFT(Cx) shows a sample rate of 50GSa/s, and at 5ns/div it shows a sample rate of 200GSa/s.
--- End quote ---
Well, that’s the problem. An SDS2504X HD for instance doesn’t show this weird behavior. FFT sample rate remains 2 GSa/s all the way down to 500 ps/div – and so does the SDS2000X Plus, as long as there is no d/dt expression as argument for the FFT.
--- Quote from: gf on May 17, 2022, 10:57:58 am ---This is obviously the interpolated sample rate, and it seems that interpolation joins in automatically at <= 20ns/div.
--- End quote ---
I’ve checked the d/dt operation for the SDS2000X Plus – width and delay of the numerically generated pulse remain constant for any given Dx setting, regardless of the time base, hinting at a constant sample rate.
You are certainly right that interpolation kicks in at time bases where the number of samples is too low to fill the screen buffer. For the screen display it is selectable: linear (x), Sinc (sin(x)/x), or off (dots mode); for automatic measurements it is always there in the background. This enables time measurements with much better resolution than could normally be expected from 2 GSa/s. I don’t have the details, yet I think the math usually doesn’t use the interpolated data.
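Just to illustrate the difference between the two display interpolation modes, here is a small numpy sketch (purely illustrative, with made-up numbers – not Siglent’s actual algorithm): linear (x) interpolation simply connects the dots, while sin(x)/x reconstruction weights all acquired samples with a sinc kernel.

import numpy as np

fs = 2e9                            # true acquisition sample rate, 2 GSa/s
up = 10                             # upsampling factor for the display grid
t = np.arange(32) / fs              # 32 acquired samples
x = np.sin(2 * np.pi * 300e6 * t)   # example signal well below Nyquist

# Linear ("x") interpolation onto the finer display grid
t_fine = np.arange(32 * up) / (fs * up)
x_linear = np.interp(t_fine, t, x)

# sin(x)/x interpolation: each display point is a sinc-weighted sum of all
# acquired samples (ideal band-limited reconstruction, truncated to the record)
x_sinc = np.array([np.sum(x * np.sinc((tf - t) * fs)) for tf in t_fine])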
It just so happens that we’ve already identified a bug regarding the interpolation. It operates quite differently than on other Siglent scopes and is obviously wrong, if only because the selected interpolation method (x or sin(x)/x) still affects dots mode. All in all, I strongly suspect that the current problem with the FFT(d(X)/dt) math is just another symptom of that same bug. In any case, my report about the incorrect FFT sample rate has since been confirmed by Siglent, so we can expect a fix in the next firmware update.
--- Quote from: gf on May 17, 2022, 10:57:58 am ---What is dx=4 now supposed to mean at (say) 10ns/div? Still 2ns (4 / 2 GSa/s), or 80ps (4 / 50 GSa/s)?
--- End quote ---
According to my investigations, it strongly looks like it’s always operating at the original (true) sample rate. Consequently, a Dx parameter of 4 means dt = 4 / 2 GSa/s = 2 ns in half-channel mode.
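To make my assumption explicit, this is how I picture the Dx parameter working (a sketch of my interpretation, not of Siglent’s firmware): the difference is taken between samples Dx points apart, so the effective time step is Dx divided by the true sample rate.

import numpy as np

def ddt(samples, dx, fs):
    # Differentiate using samples spaced dx points apart; dt = dx / fs
    dt = dx / fs
    return (samples[dx:] - samples[:-dx]) / dt

fs = 2e9                                           # 2 GSa/s in half-channel mode
dx = 4                                             # Dx = 4  ->  dt = 4 / 2 GSa/s = 2 ns
t = np.arange(1000) / fs
edge = 0.5 * (1 + np.tanh((t - 250e-9) / 1e-9))    # synthetic ~1 ns transition
pulse = ddt(edge, dx, fs)                          # differentiated transition -> pulse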
--- Quote from: gf on May 17, 2022, 10:57:58 am ---EDIT: Looking at the image in your previous message it must be (significantly) less than 2ns, otherwise the spectrum would look different.
--- End quote ---
This is certainly true, but it has a different cause. The math for this test is FFT(Intrp(d(C4)/dt)); applying an upsampling coefficient of 20, we get an FFT sample rate of 40 GSa/s, and the result isn’t very convincing yet.
See attached screenshot. The upper window shows the original pulse edge at 10 ns/div and math channel F1 that processes just d(C4)/dt, hence showing the pulse from the differentiated transition.
SDS2504X HD_Math_FR_10ns_Normal
Only the averaging in the acquisition menu makes things work, see the second screenshot:
SDS2504X HD_Math_FR_10ns_Avg16
The FFT-bandwidth has increased by a factor of ten and you can also compare the properties of the pulse that results from math function F1. The additional data gained from averaging are true samples and not interpolated redundant data as with Intrp(), therefore the d/dt operation with Dx = 4 now actually works with a 200 ps step. Since this is similar to RIS (Random Interleaved Sampling), some resampling/interpolation is still required to translate the additional data into an evenly spaced sample stream. This could also be the reason why the increase in sample rate does not conform with the number of averages (16 in this example).
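For completeness, the whole idea behind FFT(d(Cx)/dt) can be sketched in a few lines of numpy (idealized, with invented numbers): the derivative of the step response approximates the impulse response, so the magnitude of its FFT is the frequency response, from which the -3 dB bandwidth can be read off.

import numpy as np

fs_virtual = 20e9                        # effective sample rate after averaging (illustrative)
n = 4096
t = np.arange(n) / fs_virtual

bw = 570e6                               # idealized single-pole response, ~570 MHz bandwidth
tau = 1 / (2 * np.pi * bw)
step = 1 - np.exp(-np.clip(t - 20e-9, 0, None) / tau)   # synthetic step response

impulse = np.gradient(step, 1 / fs_virtual)              # d/dt of the step ~ impulse response
spectrum = np.abs(np.fft.rfft(impulse))
freqs = np.fft.rfftfreq(n, 1 / fs_virtual)

ref = spectrum[1]                                         # low-frequency reference level
f_3db = freqs[np.argmax(spectrum < ref / np.sqrt(2))]     # first bin more than 3 dB down
print(f"-3 dB bandwidth ~ {f_3db / 1e6:.0f} MHz")         # ~570 MHz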
Performa01:
--- Quote from: rf-loop on May 17, 2022, 02:50:01 pm ---
--- Quote from: Performa01 on May 17, 2022, 09:08:51 am ---Here’s a demonstration of what the SDS2504X HD looks like with a moderate ~500 ps risetime pulse coming from the SDG7102A. 32x averaging before up-sampling.
--- End quote ---
Perhaps marker 1’s horizontal position would be better at ≥ 1/2 Δf
(also, your marker 2’s vertical position looks odd – it appears to be below the trace level)
--- End quote ---
Thanks for the hint – that’s what happens when one becomes a creature of habit…
Yes, the accuracy is severely affected, and I had already wondered why the frequency response appeared better than it actually is.
Regarding the position of marker 2, there is nothing I can do about it. I might look into this later and see whether the accuracy is affected by it; if so, I’ll file a bug report.
I have attached a cursor measurement and now the result is as expected, i.e. the measured bandwidth is only 520 MHz, whereas it’s actually 570 MHz.
gf:
--- Quote from: Performa01 on May 18, 2022, 07:02:48 am ---The additional data gained from averaging are true samples and not interpolated redundant data as with Intrp(), therefore the d/dt operation with Dx = 4 now actually works with a 200 ps step. Since this is similar to RIS (Random Interleaved Sampling), some resampling/interpolation is still required to translate the additional data into an evenly spaced sample stream. This could also be the reason why the increase in sample rate does not conform with the number of averages (16 in this example).
--- End quote ---
How is this actually implemented? Does it change the ADC clock phase for each acquisition being "averaged"?
RIS - as documented by LeCroy - rather requires a TDC and an analog trigger.
Or is it just a fractional sample time shift, in order to align the (interpolated) trigger points of the traces, after up-sampling?
2N3055:
--- Quote from: gf on May 18, 2022, 11:44:49 am ---
--- Quote from: Performa01 on May 18, 2022, 07:02:48 am ---The additional data gained from averaging are true samples and not interpolated redundant data as with Intrp(), therefore the d/dt operation with Dx = 4 now actually works with a 200 ps step. Since this is similar to RIS (Random Interleaved Sampling), some resampling/interpolation is still required to translate the additional data into an evenly spaced sample stream. This could also be the reason why the increase in sample rate does not conform with the number of averages (16 in this example).
--- End quote ---
How is this actually implemented? Does it change the ADC clock phase for each acquisition being "averaged"?
RIS - as documented by LeCroy - rather requires a TDC and an analog trigger.
Or is it just a fractional sample time shift, in order to align the (interpolated) trigger points of the traces, after up-sampling?
--- End quote ---
It is interpolated trigger time alignment. But there is no upsampling. The scope has an internal "mathematical" timebase, much finer than the sample rate, which is needed to interpolate and align the triggers. Samples are mapped into that space.
So it reconstructs many points of the curve from repeated acquisitions: because the trigger timing is not fully monotonic and the sampling clock and the external signal are not synchronised, this performs the same function that sequential/random sampling does.
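A rough numpy sketch of that principle (my own illustration of the idea, not Siglent’s actual code): each acquisition gets a fractional trigger-time estimate, its samples are placed onto a common, much finer grid according to that offset, and repeated acquisitions gradually fill the fine grid in.

import numpy as np

fs = 2e9                        # real ADC sample rate
fine = 10                       # internal grid is 10x finer (illustrative number)
n_fine = 1000 * fine
accum = np.zeros(n_fine)        # sum of samples per fine-grid slot
count = np.zeros(n_fine)        # number of hits per fine-grid slot

rng = np.random.default_rng(0)
for _ in range(64):                                 # repeated acquisitions of a repetitive signal
    frac = rng.random()                             # trigger falls at a random phase within one sample
    t = (np.arange(1000) + frac) / fs               # sample instants relative to the trigger
    x = np.sin(2 * np.pi * 400e6 * t)               # the input signal
    slots = np.round(t * fs * fine).astype(int)     # map each sample onto the fine grid
    ok = slots < n_fine
    np.add.at(accum, slots[ok], x[ok])
    np.add.at(count, slots[ok], 1)

# average per slot; slots that no trace ever hit remain NaN and still need interpolation
curve = np.divide(accum, count, out=np.full(n_fine, np.nan), where=count > 0)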
Apart from the excellent contributions by Performa01, I would also add that after you average and differentiate, the resulting dataset will have a size that is not the same as the number of FFT points. The number of FFT points will be the largest power of 2 that fits into the dataset size.
That makes for an interesting contribution to this behaviour.
Namely, the FFT should be performed only on the Dirac-like pulse part of the differentiated data, in such a way that the pulse is time-centered in the data set.
Ideally, a time gate should be used to pick the part of the differentiated data that we want to FFT.
Sometimes you have a lot of data on screen, but the FFT picks just the first 512 points.
If you then use the horizontal position knob for the time-domain signal and move the signal edge to the left (or right) of the center, the FFT will change. Of course it will, because it is looking at only part of the data we see on the screen. The FFT algorithm picks data from the beginning of the buffer (from the left) and takes as much as it decides it needs; the rest of the data to the right of that point is not taken into the calculation.
Taken to the extreme: if you keep moving the edge to the right of the centre of the screen, at some setting the FFT plot will show nothing, because it is looking only at the flat part of the curve to the left of the edge.
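A toy numpy example of what I mean (the power-of-2 pick from the start of the buffer is my reading of the behaviour, not a documented detail):

import numpy as np

data = np.zeros(1300)                     # dataset after averaging and differentiation
n_fft = 2 ** int(np.log2(len(data)))      # largest power of 2 that fits -> 1024 points

data[1200] = 1.0                          # Dirac-like pulse sitting near the end of the record

spectrum = np.abs(np.fft.rfft(data[:n_fft]))   # the FFT only sees the first n_fft points
print(spectrum.max())                          # ~0: the pulse lies outside the FFT window

Move the pulse inside those first n_fft points (or time-gate it) and the spectrum comes back.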
Complex math can get complex...
gf:
--- Quote from: 2N3055 on May 18, 2022, 10:11:48 pm ---It is interpolated trigger time alignment.
--- End quote ---
OK, just that (i.e. no special hardware support).
--- Quote ---So it reconstructs many points of the curve from repeated acquisitions: because the trigger timing is not fully monotonic and the sampling clock and the external signal are not synchronised, this performs the same function that sequential/random sampling does.
--- End quote ---
The major difference is how the trigger time is determined.
Trigger-point interpolation still potentially suffers from aliasing when the trigger is implemented fully digitally.
OTOH, RIS overcomes the Nyquist limit with an analog trigger and TDC circuit, in addition to the ADC.
--- Quote ---But there is no upsampling.
--- End quote ---
Performa01 did get a 10x higher "virtual sample rate" in the screenshots above, though, when acquisition averaging was enabled.
I actually don't see how points with arbitrarily different times could be averaged, so a re-sampling to a common regular grid (at a higher resolution, say 1/10 sample) makes sense to me.
Then each of the 10 bins per sample can be averaged. Some bins may still remain empty if the trigger point of no trace happens to fall into their time slot.
In order to be suitable as input to operations that need a regular, non-sparse time grid (e.g. FFT), the empty bins additionally need to be estimated/interpolated and filled in.
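To make that last step concrete, a tiny numpy sketch (just my guess at one way it could be done): once the per-bin averages exist on the finer grid, the empty bins can be estimated by linear interpolation from their filled neighbours, so the record becomes valid input for an FFT.

import numpy as np

# per-bin averages on the 10x finer grid; NaN marks bins no trace ever hit
bins = np.array([0.0, np.nan, 0.2, np.nan, np.nan, 0.5, 0.6, np.nan, 0.8, 0.9])

idx = np.arange(len(bins))
empty = np.isnan(bins)
filled = bins.copy()
# fill the empty bins from the neighbouring populated ones
filled[empty] = np.interp(idx[empty], idx[~empty], bins[~empty])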
--- Quote ---Sometimes you have a lot of data on screen but FFT picks just first 512 points.
--- End quote ---
If the FFT window starts at the left edge of the screen, and if its size is not visualized (graphically) on screen, then it is indeed difficult to center the signal of interest in the FFT window without manually calculating its size first (in seconds, or divisions).
Picking the FFT points from the center of the screen would IMO be a more natural choice – then it would be sufficient to center the data of interest on the screen.
Even better would be a user-adjustable FFT window position, with proper graphical visualization.