Siglent SDS2000 new V2 Firmware
Performa01:
Digital Trigger System, Peak Detect & Glitch Capture
After a little pondering and some more tests, I'd like to provide some clarifications on the digital trigger system, glitch capture and peak detect acquisition mode, also with regard to the observations described in my recent postings.
First, I shall give an overview of how the SDS2000 works; see picture SDS_BD.
The input signal is conditioned by means of an attenuator and VGA (variable gain amplifier) to accommodate the dynamic range of the ADC.
The ADC is clocked at a constant 1GHz and hence provides a constant sample rate of 1GSa/s. To keep things simple, the block diagram doesn't include interleave mode, where the ADC data of two channels are combined to yield twice the sample rate; this has no impact on the basic operation and particularly not on the trigger system, which always works with 1GSa/s raw data from just one ADC – the one assigned to the trigger channel.
The ADC always has to run at full speed, otherwise neither the fully digital trigger system nor the 1ns glitch capture would work. So I refer to the ADC output as the raw sample data here. These are fed into the trigger circuit and the first signal processing unit at the same time.
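To make the idea of a fully digital trigger a little more concrete, here is a minimal Python sketch of an edge trigger that simply compares successive raw samples against the trigger level – just my own illustration of the principle, of course, not Siglent's actual implementation:

--- Code: ---
import numpy as np

def find_rising_edges(raw, level, hysteresis=2):
    """Return the indices where the raw ADC data crosses 'level' upwards.

    A digital trigger never sees the analog signal; it just compares
    successive raw samples against the trigger level. The hysteresis
    (in ADC codes) prevents re-triggering on noise around the level.
    """
    armed = raw < (level - hysteresis)   # signal was safely below the level...
    crossed = raw >= level               # ...and is now at or above it
    fire = crossed[1:] & armed[:-1]      # an edge fires between two adjacent samples
    return np.nonzero(fire)[0] + 1

# At 1GSa/s one sample is 1ns, so even a single-sample (1ns) glitch
# is guaranteed to produce a valid trigger event:
raw = np.zeros(1000, dtype=np.int16)
raw[500] = 100
print(find_rising_edges(raw, level=50))  # -> [500]
--- End code ---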
Proc. 1 refers to the first signal processing step, which serves (at least) two tasks:
1. Compaction of the raw sample data according to the timebase setting, so they can fit into the available sample memory.
2. Glitch capture, which is similar to peak detect, with the only difference that it works on the raw sample data instead of the already decimated data in sample memory.
Of course, compaction is only necessary at timebase settings where the raw sample data for the full screen width don't fit into the sample memory. For example, at the lowest memory setting of 7k, no compaction is needed for timebases up to 500ns/div, because this results in a total recording time of 14 div. times 500ns = 7µs, which in turn yields 7k samples at 1GSa/s.
For slower timebase settings, the amount of data has to be reduced in order to fit into the memory. This could be done by simple decimation, i.e. writing only every Nth sample into memory. This way, a slower sample speed is simulated; I'll refer to it as the effective sample rate, as opposed to the raw sample rate, which is always 1GSa/s.
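As an illustration (my own arithmetic, not a vendor formula), the effective sample rate follows directly from the memory depth and the total recording time, and plain decimation is nothing more than keeping every Nth sample:

--- Code: ---
RAW_RATE = 1e9   # raw sample rate: always 1GSa/s
DIVS = 14        # horizontal divisions on the SDS2000 screen

def effective_rate(timebase, memory):
    """Effective sample rate for a given timebase (s/div) and memory depth."""
    window = DIVS * timebase                 # total recording time in seconds
    return min(RAW_RATE, memory / window)    # capped at the raw rate

def decimate(raw, n):
    """Plain decimation: keep every Nth sample - narrow glitches get lost!"""
    return raw[::n]

print(effective_rate(500e-9, 7_000))  # 1e+09   -> still 1GSa/s, no compaction
print(effective_rate(20e-3, 7_000))   # 25000.0 -> 25kSa/s
print(effective_rate(50, 7_000))      # 10.0    -> just 10Sa/s in roll mode
--- End code ---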
Since this scope provides glitch capture down to 1ns, simple decimation would not cut it at all; hence a process similar to the one described below for the 2nd processing step in peak detect mode has to be applied in order to preserve narrow glitches and make sure they make it to the sample memory.
Proc. 2 refers to the second signal processing step, which has to perform one more data compaction in order to fit the display. As the display has 800 x 480 pixels, we can assume there are 700 horizontal pixels for the waveform, so in normal acquisition mode we can only sensibly display 700 samples per waveform. Even at the lowest memory setting of just 7k, this is still 10 times more than what the display can handle, so in normal acquisition mode the 2nd signal processing step would just decimate the data by passing only every 10th sample to the display.
In peak detect mode, things are a little different. It should be obvious that peak detect mode wouldn't make any difference as long as all data in memory fit into the display memory. So at timebase settings of 50ns/div or faster, there are just 700 samples (or fewer) acquired for each trigger event (waveform), and no data compaction is required, no matter what the memory depth is set to.
If the sample memory contains more than 700 samples for a single waveform and data compaction becomes necessary, peak detect mode traditionally uses two data points, that is a minimum and maximum value, per horizontal pixel on the screen. So at a timebase of 100ns/div there are 1400 samples per waveform, and they all make it to the display in peak detect mode, as there is now a min/max data pair for each horizontal pixel position.
For even slower timebases, peak detect keeps providing the display with 700 data pairs, so it needs to do some data reduction too. But this time it isn't just dropping superfluous data by a simple scheme without looking at them; instead, every cluster of samples that is to be displayed at one horizontal pixel position is searched for its min/max values, and the resulting data pair is sent to the display.
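Here is a minimal sketch of such a min/max compaction; the same block-wise scheme would serve both the glitch capture in the first processing step (on raw data) and peak detect in the second (on memory data) – assuming a simple implementation, as the actual hardware is of course not documented:

--- Code: ---
import numpy as np

def minmax_compact(samples, out_pairs=700):
    """Compact a record into (min, max) pairs, one per horizontal pixel.

    Unlike plain decimation, every sample in a column's cluster is
    inspected, so even a single-sample glitch survives into the output.
    """
    n = len(samples) // out_pairs                       # samples per pixel column
    clusters = np.asarray(samples[:n * out_pairs]).reshape(out_pairs, n)
    return clusters.min(axis=1), clusters.max(axis=1)

# 7000 samples with a single-sample glitch -> 700 columns of 10 samples each
data = np.zeros(7000)
data[1234] = 1.0
mins, maxs = minmax_compact(data)
print(maxs[123])  # 1.0 -> the glitch lands in column 123 and is preserved
--- End code ---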
By now it should be clear that peak detect and glitch capture together make an extremely powerful combination, which allows us to spot narrow pulses down to 1ns width even at very slow effective sample rates, such as in roll mode, where the rate can be as low as just 10Sa/s at 50s/div with 7k memory.
So now let’s correlate the findings in my previous tests with the working principle described above:
Conclusion
1. The trigger system works flawlessly on trigger events down to 1ns, no matter what the timebase and acquisition settings are, except for the minor flaw that it won’t trigger correctly on two narrow pulses spaced <30ns apart, as I’ve demonstrated in an earlier post.
2. Peak detect mode works just as it should, and I haven't come across a single instance where it failed to do so.
3. Glitch capture works beautifully on slower timebases, but doesn’t seem to work properly on faster ones. This needs some further investigation (coming soon).
EDIT: some typos...
Performa01:
Glitch Capture
In all previous tests, triggering and peak detect worked absolutely fine, whereas glitch capture quite surprisingly only kicked in at timebases of 50ms/div and slower, leaving two timebase settings (10 and 20ms/div) where the pulses had a pretty random amplitude, indicating that they were captured only at effective sample rates of 500 and 250MSa/s, respectively.
This behaviour gave me some headache, since it should be clear what would happen if we had a perfect 1ns wide pulse.
The scope risetime is specified at <1.2ns (it is actually more like 1ns), so the ADC in an SDS2304 would probably see something like this when a perfect 1ns wide pulse is applied to the scope input (Ideal_Pulse_1ns)
That means at a sample rate of 500MSa/s the minimum pulse height on the screen would be just about 15% of the original amplitude. With 250MSa/s, it could easily happen that the pulse wouldn't be visible on the screen at all, whereas the scope would still reliably trigger on it. That's certainly not good – we should always be able to see the trigger event, i.e. the signal the scope has triggered on.
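To put some numbers on this, here is a rough model of mine: I approximate the scope frontend as a Gaussian filter with the specified 1.2ns risetime (an assumption for illustration, not the measured response), shape the ideal 1ns pulse with it, and then look for the worst-case sampled peak over all sampling phases:

--- Code: ---
import numpy as np

STEP = 1e-12                                    # simulation grid: 1ps
t = np.arange(-5e-9, 5e-9, STEP)

pulse = ((t >= 0) & (t < 1e-9)).astype(float)   # ideal 1ns wide unit pulse

# Gaussian frontend model: 10-90% risetime tr = 2.563 * sigma
tr = 1.2e-9
sigma = tr / 2.563
kernel = np.exp(-0.5 * (t / sigma) ** 2)
shaped = np.convolve(pulse, kernel / kernel.sum(), mode="same")

def worst_case_peak(rate):
    """Smallest sampled peak of the shaped pulse over all sampling phases."""
    period = int(round(1 / rate / STEP))        # grid points per ADC sample
    return min(shaped[phase::period].max() for phase in range(period))

print(worst_case_peak(500e6))   # ~0.14  -> roughly the 15% mentioned above
print(worst_case_peak(250e6))   # ~0.001 -> the pulse can vanish completely
--- End code ---

In this simple model, the worst case at 500MSa/s comes out at roughly 14% of the original amplitude, while at 250MSa/s the pulse all but disappears – in line with the prediction above.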
The test I am referring to was carried out with the maximum memory length of 70Mpts. What if we select less memory, which would cause the sample rate to drop earlier? For example, with the minimum of 7k, the sample rate would be only 25kSa/s at 20ms/div, and glitch capture would not work for any timebase from 20ms/div down to 1µs/div! This prospect gave me some serious headache, so I had to investigate.
I set up another test with narrow pulses, but this time I wanted to demonstrate glitch capture working independently of the trigger. So I let the scope trigger on a sync signal applied to Ch. 3, whereas the narrow pulses are fed into Ch. 4 with some 30ns lag with respect to the trigger edge. Peak amplitude is about 4.5V and pulse width is <4ns (Glitch_Test_Setup)
I’ve tested all possible memory sizes and will show a few examples only.
First 7k memory depth and 500ns/div, which still maintains 1GSa/s (Glitch_7k_1GSa)
Everything's fine here, as the sample rate has not yet dropped. A little amplitude jitter is inevitable, given that 1GSa/s isn't fast enough to consistently capture the pulse peak. I've turned automatic measurements on in order to document my observations, and according to these, the pk-pk amplitude varied by just 240mV, equivalent to some 5%.
At 1µs/div and with 7k memory, the sample rate drops to 500MSa/s, and since the pulse is significantly wider than 1ns, the variation still doesn't appear too bad – it is now 680mV or 15%. In any case, glitch capture has not kicked in (Glitch_7k_500MSa)
At 2µs/div and with 7k memory, sample rate drops to 250MSa/s and the amplitude variation gets really significant now – it is 2.48V or 54%. No glitch capture to be seen … (Glitch_7k_250MSa)
At 5µs/div and with 7k memory, sample rate is now down to 100MSa/s, but glitch capture has finally become active. The amplitude variation gets significantly better again with 1.12V or 24% (Glitch_7k_100MSa)
And it gets even better at slower timebase settings. As an extreme example, at 20ms/div and with 7k memory, sample rate is only 25kSa/s, but glitch capture is quite effective here. The amplitude variation is a negligible 120mV or 2.6% (Glitch_7k_25kSa)
As already stated, the behaviour is basically the same, no matter what the currently selected memory depth is. But there is one exception with the maximum setting of 35Mpts, because it allows a timebase setting where the sample rate drops to just 125MSa/s (Glitch_35M_125MSa)
As can be seen from the screenshot, the amplitude variation gets particularly bad here and we do indeed see some missing pulses (we can see that we can see nothing ;)). So far this is no big surprise, as it seems obvious by now that glitch capture only kicks in at sample rates of 100MSa/s and slower, and sampling a pulse that is less than 4ns wide at only 125MSa/s cannot yield any useful results, of course.
Apart from that, the extra memory once again seems to cause additional trouble (as I've already demonstrated when reviewing the FFT): the automatic measurements no longer show the true picture at all. According to these, the minimum amplitude would still be 3.76V, which is clearly not true, as the signal trace indicates.
Conclusion
Glitch capture only kicks in at sample rates of 100MSa/s or slower, leaving a gap for the timebases where the sample rate is more than 100MSa/s but still less than 1GSa/s. In this range it can actually happen that we don't see the trigger event on the screen. This is clearly a flaw, but it is easily circumvented by avoiding this 'grey' range of sample rates whenever really narrow glitches can be expected to occur. Maybe Siglent could narrow or even close this gap?
Maximum memory enables a sample rate that is just slightly faster than 100MSa/s (where glitch capture would kick in), with particularly bad results. It also causes the automatic measurements to apparently miss all peak amplitudes lower than 3.76V in my test setup – that is clearly a bug.
I can withdraw another complaint in exchange, namely the very striking double lines (I called it ‘ghosting’ back then), specifically in roll mode. This is simply the visual effect of glitch capture and naturally gets worse with increasing ratio of raw to effective sample rate. Since the effective sample rate drops dramatically in roll mode, this explains why the double lines become so obvious in this mode.
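A synthetic example (my own, not scope data) shows why: once every screen column carries a min/max pair, even plain noise produces two separated lines, and the separation grows with the number of raw samples folded into each column:

--- Code: ---
import numpy as np

rng = np.random.default_rng(0)

def envelope_width(noise_rms, samples_per_column):
    """Width of the min/max pair for one screen column of a flat, noisy trace."""
    cluster = rng.normal(0.0, noise_rms, samples_per_column)
    return cluster.max() - cluster.min()

# The larger the ratio of raw to effective sample rate, the more samples
# are folded into one column - and the further the two lines drift apart:
for n in (10, 1_000, 100_000):
    print(f"{n:>7} samples/column -> {envelope_width(1.0, n):.1f} x noise RMS")
--- End code ---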
Well, one might still ask why the effective sample rate has to be that slow and why sample memory is limited to 1.4Mpts in roll mode. But that’s another story ;)
EDIT: corrected prediction of the response to an ideal 1ns pulse.
Mark_O:
--- Quote from: Performa01 on January 16, 2016, 08:33:01 pm ---In this area it actually could happen that we might not see the trigger event on the screen. This is clearly a flaw, but is easily circumvented by avoiding this ‘grey’ range of sample rates when really narrow glitches can be expected to occur.
--- End quote ---
One problem there is that sometimes I'm not expecting there to be any really narrow glitches at all. However, that does not stop them from occurring. ;) And it's one reason I may be using a scope... to check to see if things are occurring that I don't expect. As in, shouldn't be there, but are. I'd rather not have to avoid a 'grey range', which hasn't even been defined by the manufacturer.
[Exceptional continuing analysis, BTW. :-+]
Performa01:
@Mark_O
In general, I agree with you – and that’s the reason why this topic gave me some serious headaches and kept me investigating.
Nevertheless I should try to explain a little better what I had in mind when talking about ‘expectations’ with regard to narrow glitches.
Glitch capture does not work at sample rates >100MSa/s, so in general a potential problem exists for pulses <10ns wide.
The lowest sample rate >100MSa/s we can actually get is 125MSa/s at 35Mpts, which corresponds to a pulse width of 8ns.
For all other memory settings, from 7k to 28M, the lowest sample rate >100MSa/s is 200MSa/s, corresponding to a pulse width of 5ns.
That means that for all possible combinations of timebase and memory settings but one, the pulse width has to be less than 5ns in order to cause a problem; there is just one single setting where this limit is 8ns. The short sketch below shows the arithmetic.
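The numbers are simply the sample period – a pulse narrower than one sample period can fall entirely between two samples; the list of rates is taken from my observations above:

--- Code: ---
def smallest_safe_width_ns(sample_rate):
    """One sample period: any pulse narrower than this can fall entirely
    between two samples and be missed by plain decimation."""
    return 1e9 / sample_rate

# Sample rates above the 100MSa/s glitch-capture limit, as observed above:
for rate in (125e6, 200e6, 250e6, 500e6):
    print(f"{rate/1e6:4.0f}MSa/s -> pulses <{smallest_safe_width_ns(rate):.0f}ns at risk")
--- End code ---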
So I just wanted to point out that there is a physical limit to how narrow a pulse can be produced by any circuit, be it by intention or by accident – and that’s the bandwidth / transition times of active components in use.
For example, for logic devices with slow transition times, like TTL and LS-TTL as well as CMOS up to HC, it would be very unlikely for a narrow pulse <8ns to occur. Of course, it is not totally impossible and could still happen at a very low amplitude, but then we would miss it anyway, as long as the trigger level is set to the nominal threshold voltage of ½ Vdd for CMOS. And as soon as we need to set a specific trigger level in order to capture the glitch (which would be totally different depending on the initial logic state), we are obviously already aware that something is going on – and in that case we can just as well set up the scope so that it uses its full sample rate, which should be easy, given the amount of memory available.
So this is basically a problem for high speed logic debugging only, as I cannot think of any mechanism causing sporadic narrow glitches in analog circuits, including HF designs and discrete transistors – and I've certainly never observed one in some 40 years. This is what I mean when I say that we can indeed expect – let's say 'predict' instead – whether or not glitches can occur and, if so, what the minimum pulse width would be for a given circuit.
Btw. I have corrected the paragraph describing the effects of an ideal 1ns wide pulse in my previous post, as it is obvious that the rise time of the scope alone will prevent that pulse from disappearing completely at 500MSa/s. For the same reason, this whole topic wouldn’t be an issue on a 100MHz scope.
Apart from all that, it shouldn’t be impossible to fix this issue in the SDS2000(x) V2 firmware and I certainly hope that Siglent engineers will come up with a working solution.
Performa01:
Visibility of narrow pulses
For several tests I’ve used narrow pulses and sometimes pointed out that they were barely visible in the screenshots. This might lead to the false impression that it would be generally difficult to spot glitches on the scope screen, which isn’t true. So I thought I’d elaborate on that a bit more.
Generally, it should be obvious that narrow pulses appear on the screen as lines that are only one or two pixels wide, hence visibility also depends on the absolute screen resolution. The screen of the SDS2000 has an absolute resolution of about 114dpi, which corresponds to a pixel width of ~222µm. This is not awfully big…
My computer screen, where I'm viewing the screenshots, has only ~81dpi and a pixel width of ~312µm, so visibility should be better – but that's not actually the case. In fact, visibility is significantly better on the scope than in the screenshot viewed on my computer monitor. The reason for this lies in the monitor profile, including settings for brightness, contrast, gamma and color space, which are optimized for natural looking photos, as I do a lot of picture processing. The scope obviously uses a quite different profile, optimized for graph viewing.
So it comes as no surprise that visibility of glitches is not bad on the scope, whereas it might be considerably worse on the computer screen, depending on the monitor profile in use.
That said, I still did a couple of tests to determine how the display of glitches can be improved.
First, I show a screenshot with the default settings on the scope. Depending on your particular monitor settings, the visibility of the narrow pulses on channel 4, as well as of the edges of the squarewave on channel 2, might be somewhere in the range of fair to poor (Glitch_Intensity_Grade_50%)
Of course, the narrow pulses and steep edges are so dim because of the intensity grading, and in situations like this one might wish to be able to turn it off. Intensity grading simulates the behaviour of an analog scope, which gives us a lot more information about dynamic signals, but also has the disadvantage of poor visibility of fast edges, even if not quite as bad as it used to be in the CRT era.
We cannot turn off the intensity grading.
Persistence doesn’t help either.
We can only turn up the intensity of the trace display (Glitch_Intensity_Grade_100%)
Visibility has clearly improved and might be all you'd ever want on the scope screen, but it might still not be enough for the screenshot image. Even when the monitor picture is acceptable, one might still want better visibility when including screenshots in printed documents – without the need to post-process the images, that is.
This is a situation where color grading as an alternative to intensity grading comes in really handy (Glitch_Color_Grade)
Yes, it might not look as pretty as before, but it maintains all the information of the 3rd dimension (intensity) without visibility problems. Even though I stated some time ago that I don't care for color grading, I've been disabused and am by now grateful that Siglent has implemented this display mode. :-+