Siglent SDS2000 new V2 Firmware
Performa01:
@rf-loop
Well, with combined efforts we have finally solved this puzzle.
Given all our test results and having a look at the ADC08D1020 data sheet (up to now, I didn’t know which ADC was used in the scope), it all comes together very nicely.
So I’ll try to summarize how this scope architecture works, simplifying things by ignoring the DEMUX option with its associated DXd output ports, as the actual configuration doesn’t affect the general working principle. I will also ignore the additional memory dedicated to the digital channels, which with the V2 FW can apparently be attached to the analog channels to provide a total of 35Mpts per channel.
So I just say the ADC08D1020 has two ADCs, where each of them has an associated digital output port, which in turn is connected to a block of 14MB acquisition memory.
Both ADCs can work in parallel for either of the two analog input channels, and the interleaved data will be output by both ports at the same time, thus making the output data bus twice as wide. So we get twice the data, but also twice the memory, in interleaved configuration. For screen representation, we have to combine the contents of the two memory blocks, e.g. all even sample numbers will be found in one block and the odd ones in the other.
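Just to illustrate the principle (the even/odd split below is my assumption; the real hardware layout may of course differ), recombining the two memory blocks for display could look like this:

```python
# Sketch: recombining two interleaved acquisition memory blocks for display.
# Assumption: in interleaved mode, ADC1 stores the even-numbered samples and
# ADC2 the odd-numbered ones.

def combine_interleaved(block1, block2):
    """Merge two equally sized memory blocks into one waveform record."""
    assert len(block1) == len(block2)
    combined = [0] * (2 * len(block1))
    combined[0::2] = block1  # even sample numbers from ADC1
    combined[1::2] = block2  # odd sample numbers from ADC2
    return combined

# Example: two tiny 4-sample blocks standing in for the real 14Mpts each
adc1 = [10, 30, 50, 70]   # samples 0, 2, 4, 6
adc2 = [20, 40, 60, 80]   # samples 1, 3, 5, 7
print(combine_interleaved(adc1, adc2))  # [10, 20, 30, 40, 50, 60, 70, 80]
```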
The trigger system, in contrast, doesn’t care about that and is attached to the output port of the trigger channel. If this happens to be Ch. 1, the trigger system is connected to the output port of ADC1. If Ch. 2 is turned off, then its output port provides data for Ch. 1 as well, but that is not seen by the trigger system, as its data bus width remains constant. This is the same on the R&S RTO, which also uses ‘only’ 10GSa/s for the trigger system, whereas a single channel can have 20GSa/s in interleaved mode. So it is the exact same situation, except that the RTO’s sample rates are 10 times higher… ;)
And yes, Siglent couldn’t be bothered to implement an upsampling circuit in the trigger system, so the left part of fig. 7 in the aforementioned R&S application note applies. That sounds a bit disappointing at first, but isn’t all that bad after all. The RTO claims glitch detection <50ps, which would be equivalent to 500ps on the Siglent. But Siglent only specifies 1ns, which is perfectly fine for a scope in this class.
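The 1ns figure ties in nicely with the trigger path sample rate: a trigger comparator that only sees raw samples (no upsampling) can only guarantee catching a glitch that spans at least one sample interval. A trivial sketch of that arithmetic (my reasoning, not an official formula):

```python
# Sketch: a trigger working on raw ADC samples, without upsampling, can only
# guarantee detection of a glitch at least one sample interval wide, because
# only such a pulse is certain to contain a sample point.

def min_guaranteed_glitch_ns(trigger_sample_rate_hz):
    """One sample interval of the trigger path, expressed in nanoseconds."""
    return 1e9 / trigger_sample_rate_hz

print(min_guaranteed_glitch_ns(1e9))  # 1.0 -> consistent with Siglent's 1ns spec
```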
Which leads us back to the point where we started this discussion…
Since we could demonstrate that sin(x)/x reconstruction is not part of the signal processing in the main acquisition path, nor of the trigger path, it is indeed more like a display option and should belong in the ‘Display’ menu.
For Sequence Mode, sin(x)/x should not make any difference to the waveform capture rate, and it should be possible to apply it arbitrarily after the capture, when we are viewing the history.
EDIT: ADC08D1020 appears to be a very nice chip by the way! :)
rf-loop:
--- Quote from: Performa01 on January 10, 2016, 01:56:51 pm ---@rf-loop
Given all our test results and having a look at the ADC08D1020 data sheet (up to now, I didn’t know what ADC is working in the scope), it all comes together very nicely.
--- End quote ---
Oh, I forgot to add to my previous message: "The ADC is something like this"
(I'm not sure exactly what it is; it was only to illustrate the principle and how it outputs data. I'm quite sure that at least it is "something like this")
Performa01:
--- Quote from: rf-loop on January 10, 2016, 02:13:29 pm ---Oh, I forgot to add to my previous message: "The ADC is something like this"
(I'm not sure exactly what it is; it was only to illustrate the principle and how it outputs data. I'm quite sure that at least it is "something like this")
--- End quote ---
Oh - doesn't really matter, does it?
It is quite obvious that it has to be something like this, and this 'something' is a real nice piece of silicon! ;)
rf-loop:
--- Quote from: Performa01 on January 10, 2016, 02:47:28 pm ---
--- Quote from: rf-loop on January 10, 2016, 02:13:29 pm ---Oh, I forgot to add to my previous message: "The ADC is something like this"
(I'm not sure exactly what it is; it was only to illustrate the principle and how it outputs data. I'm quite sure that at least it is "something like this")
--- End quote ---
Oh - doesn't really matter, does it?
It is quite obvious that it has to be something like this, and this 'something' is a real nice piece of silicon! ;)
--- End quote ---
Considering only this discussion, it doesn’t matter (what matters here is the working principle).
But I cannot confirm exactly whether it is the TI ADC08D1020.
Performa01:
Peak Detect / Roll Mode revisited
Given the insights gained from our previous experiments, I wanted to look at this topic once again, particularly in order to see if the 1ns glitch detection works as specified.
First, let’s test whether we can reliably trigger on trigger conditions that are valid for just one nanosecond. I’ve tried two different methods and used the trigger frequency display as a reference: as long as its displayed value does not drop, we aren’t losing any trigger events.
An easy way to check this was to use a 300MHz sinewave again and adjust the trigger level near the top of the waveform, where the positive halfwave is just 1ns wide (Trigger_1ns)
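As a quick sanity check of that setup (assuming an ideal sinewave), the width of the positive halfwave above the trigger level follows directly from the sine geometry; for a 300MHz sine, a trigger level at roughly 59% of the peak amplitude leaves a top that is about 1ns wide:

```python
import math

def width_above_level_ns(freq_hz, level_ratio):
    """Time per period a unit-amplitude sine spends above level_ratio."""
    # The sine is above level L between phases arcsin(L) and pi - arcsin(L).
    phase_span = math.pi - 2 * math.asin(level_ratio)
    return phase_span / (2 * math.pi * freq_hz) * 1e9

# 300 MHz sine, trigger level at ~59 % of the peak amplitude
print(round(width_above_level_ns(300e6, 0.588), 2))  # ~1.0 ns
```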
With these settings, triggering is reliable and no events are missed (yes, the trigger frequency display is a bit off, but it has not dropped!). And just as all the previous tests already indicated, it doesn’t matter whether we use sin(x)/x or just x, or vectors or dots. The result is also the same regardless of the number of channels in use; it even works with a sample rate of just 1GSa/s.
I also tried this with a narrow pulse at a repetition period of 1µs. Again, we don’t lose any trigger events as long as the pulse width at the trigger level is at least 1ns (PeakDetect_1ns_1MHz_1ns)
Now I change the repetition period to 12ms, leaving everything else just as before (PeakDetect_1ns_1ns)
At a timebase setting of 2ms/div we still maintain a sample rate of 2GSa/s and a total of three pulses are displayed on the screen. They are very faint, hence barely visible, but they are there (PeakDetect_1ns_2ms)
At 5ms/div, the sample rate drops to 1GSa/s, but that’s no problem, as we can consistently see all five pulses (PeakDetect_1ns_5ms)
At 10ms/div and a sample rate of 500MSa/s, we still don’t lose any pulses, but the amplitudes aren’t stable anymore. The trigger system quite obviously still works on the full 1GSa/s raw data from the ADC, as there is absolutely no hint of losing any pulses. But it also becomes obvious that the sample data is decimated to the current sample rate (500MSa/s in this case) before peak detect gets access to it. This explains the discrepancy between triggering and displayed data. If you look closely, according to the display the pulse at the trigger position doesn’t quite reach the trigger level, yet the trigger still fires (PeakDetect_1ns_10ms)
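The difference between plain decimation and a proper peak-detect decimation can be sketched like this (hypothetical sample data, just to show the principle of why a narrow pulse vanishes in one case and survives in the other):

```python
# Sketch: plain decimation vs. min/max (peak detect) decimation.
# Hypothetical raw ADC record: a flat baseline with one 1-sample-wide pulse,
# standing in for a 1 ns glitch at full sample rate.

def decimate(raw, factor):
    """Plain decimation: keep every factor-th sample; narrow pulses may vanish."""
    return raw[::factor]

def peak_decimate(raw, factor):
    """Peak-detect decimation: keep the min and max of every bucket of samples."""
    out = []
    for i in range(0, len(raw), factor):
        bucket = raw[i:i + factor]
        out.append((min(bucket), max(bucket)))
    return out

raw = [0] * 16
raw[5] = 100  # the narrow pulse

print(decimate(raw, 4))       # [0, 0, 0, 0] -> pulse lost on screen
print(peak_decimate(raw, 4))  # [(0, 0), (0, 100), (0, 0), (0, 0)] -> pulse kept
```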
At 20ms/div and a sample rate of 250MSa/s, we still don’t lose any pulses, but the amplitudes are pretty random (PeakDetect_1ns_20ms)
At 100ms/div, we automatically get roll mode, and here things change quite a bit. The current sample rate drops dramatically to just 2MSa/s, but at the same time peak detect mode quite obviously starts working on the raw ADC data at 1GSa/s. So not only do we not lose any pulses – except for the occasional dropout every now and then, as can be seen at the left of the screenshot – but the amplitude levels are all near the maximum with only little variation (PeakDetect_1ns_100ms_roll)
Finally, I want to push this to the limit and use 50s/div roll mode, which is the maximum we can get on the SDS2000. The sample rate now drops to the incredibly low value of just 4kSa/s, but we still see a dense carpet of pulses that are only 1ns wide at the trigger level! (PeakDetect_1ns_50s_roll)
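Just to put those numbers in perspective (simple arithmetic, assuming peak detect really runs on the full 1GSa/s raw data): at a stored rate of 4kSa/s, every stored point summarizes a quarter of a million raw ADC samples, so even a pulse only one raw sample wide survives.

```python
# Decimation ratio at the extreme 50 s/div roll mode setting
raw_rate = 1e9     # ADC sample rate, Sa/s
stored_rate = 4e3  # memory sample rate at 50 s/div, Sa/s

bucket = int(raw_rate / stored_rate)
print(bucket)  # 250000 raw samples summarized per stored peak-detect point
```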
The question remains: does it really capture all the pulses without missing any? The short answer is yes, it does. When stopping the acquisition and lowering the timebase to 10ms, we can see all the pulses, even at a fairly constant amplitude. Yes, I’ve shifted the waveform on the time axis a bit in order to find any missing pulses, but they were all there (PeakDetect_1ns_50s_H10ms)
I’ve also tried this in Y-t mode at 100ms/div and – surprise! – it behaves the same as roll mode, i.e. despite the sample rate now being 50MSa/s, we get all pulses at their full amplitude. So it quite obviously is possible to do the peak detect before data decimation, so that we get the true peak values based on the raw 1GSa/s ADC data, no matter what the current sample rate (which determines the time spacing of the samples in the buffer) is (PeakDetect_1ns_100ms)
Conclusion:
Peak detect works perfectly in roll mode, and generally for timebases >20ms/div.
Below that, there still seems to be room for improvement. As it is now, the peak values are acquired from data that is simply decimated according to the current sample rate, instead of replacing the decimation with a peak-detect mechanism.
As it is now, if we had a perfect 1ns wide pulse we would be able to trigger on it but still might not see anything on the screen.
Big question: since it is possible to do it in an optimal way for timebases >20ms, why on earth does it have to be different for faster timebases?