Siglent SDS2000 new V2 Firmware
Performa01:
--- Quote from: rf-loop on January 05, 2016, 03:15:43 pm ---This whole case needs a bit more inspection, because there is something wrong and/or not optimal in the FW.
For example, why on earth is Sinc mixed into sequence mode for timebases from 1ns/div to 20ns/div? Segmented memory acquisition does not need to do anything but push raw ADC data to memory as fast as possible, and nothing else.
--- End quote ---
Well, I might have an explanation for that. While vectors vs. dots really is just a matter of screen representation, sin(x)/x is an actual signal processing step required to recover the original signal. This in turn is required by the digital trigger system in order to determine the exact trigger point on the time axis.
If this option is set to just ‘x’, the scope will just use linear interpolation and thereby introduce some noticeable jitter. I’ve demonstrated some of the odd effects on a fast sinewave in the absence of sin(x)/x reconstruction in an earlier post, where the width of the trace area at any given (trigger) level can be taken as the amount of jitter (Trace_Peak_x_Fast_Vectors)
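To illustrate the jitter mechanism numerically: the sketch below samples a 300MHz sine at 1GSa/s and estimates the rising zero-crossing time once by linear interpolation between raw samples and once on a sin(x)/x-reconstructed waveform, then compares the RMS error against the known true crossing. This is only an illustration of the argument, not the scope's actual trigger code; all parameters are my own choices.

```python
import math
import random

FS = 1e9      # 1 GSa/s sample rate
F = 300e6     # 300 MHz test sine, as in the screenshots
N = 64        # raw samples per trial

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(s, t):
    """sin(x)/x (Whittaker-Shannon) reconstruction of sample list s at time t [s]."""
    return sum(s[i] * sinc(FS * t - i) for i in range(len(s)))

def crossing_linear(s, lo, hi):
    """First rising zero crossing between sample indices lo..hi, linear interp."""
    for i in range(lo, hi):
        if s[i] <= 0.0 < s[i + 1]:
            return (i - s[i] / (s[i + 1] - s[i])) / FS
    return None

def crossing_sinc(s, lo, hi, oversample=16):
    """Same, but searched on the sinc-reconstructed waveform."""
    step = 1.0 / (FS * oversample)
    t = lo / FS
    prev = reconstruct(s, t)
    while t < hi / FS:
        cur = reconstruct(s, t + step)
        if prev <= 0.0 < cur:
            return t + step * (-prev) / (cur - prev)
        prev, t = cur, t + step
    return None

random.seed(1)
err_lin, err_sinc = [], []
for _ in range(20):
    phase = random.uniform(0.0, 2.0 * math.pi)
    s = [math.sin(2.0 * math.pi * F * i / FS + phase) for i in range(N)]
    for finder, errs in ((crossing_linear, err_lin), (crossing_sinc, err_sinc)):
        t_est = finder(s, 28, 40)             # stay clear of the window edges
        # nearest true rising zero crossing of sin(2*pi*F*t + phase)
        k = round((2.0 * math.pi * F * t_est + phase) / (2.0 * math.pi))
        t_true = (2.0 * math.pi * k - phase) / (2.0 * math.pi * F)
        errs.append(t_est - t_true)

rms = lambda e: math.sqrt(sum(x * x for x in e) / len(e))
print(f"linear interpolation trigger jitter: {rms(err_lin) * 1e12:.0f} ps RMS")
print(f"sinc interpolation trigger jitter:   {rms(err_sinc) * 1e12:.0f} ps RMS")
```

At only ~3.3 samples per cycle, the samples bracketing a zero crossing sit far out on the curved part of the sine, so linear interpolation misplaces the crossing by tens of picoseconds, while the reconstructed waveform pins it down to a few picoseconds.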
--- Quote ---There is one open question.
An example: sequence mode, with, say, 80000 segments in one sequence. After these are acquired, acquisition of course stops and the scope does many things, including a display update to show waveforms, before it starts a new sequence. The question is: how many segments does it display overlaid on the screen before it starts the new sequence? It is at least some hundreds in this case, but how many exactly?
--- End quote ---
I might have finally found a way to estimate this. After several experiments I believe that the scope just displays some 750 +/-50 waveforms (the first, the last, or whichever?) for every filled buffer.
I got these results rather consistently for 80000 segments at timebases from 1 to 10ns/div (used Ch. 4 again, but that probably doesn’t matter in this case). I haven’t tested anything else yet.
rf-loop:
--- Quote from: Performa01 on January 07, 2016, 04:32:24 pm ---
If this option is set to just ‘x’, the scope will just use a linear interpolation and introduce some noticeable jitter by that.
--- End quote ---
Yes I know but...
In sequence mode, trigger-position interpolation between samples can be done later, when the user is looking at the acquired segments; it can be done linearly or with a sinc curve, according to the user's settings. Of course, if we look at sequence mode running sequences over and over continuously, it is just display "cosmetics".
(When we look at all the functions of the whole oscilloscope, perhaps it should be noted that this is only one fairly small nuance, if such a thing can give some percent more absolute maximum speed.)
When looking at the History (sequence history), it does the trigger-position interpolation there; it does not need to do it at runtime. (I have not yet checked this on the SDS2000, but at least the SDS1000X can do it, and if the SDS2000 cannot, it needs fixing. Btw, the SDS1000X does not change the sequence-segment acquisition burst speed when Sinc is turned on or off.)
My findings about the segments displayed (overlaid) between sequences are weird. Your figure of around 750 is inside the 500 to 1000 range I have tested, but it is weird because some things look nearly random, at least because I cannot guess all the variables. I have tried a two-channel method where one channel just triggers the segments forward and the other channel carries a slow sawtooth ramp.
The other method was an ARB square wave (2000 50% duty-cycle square cycles, where some cycles carry a signature I can recognize on the scope screen after the 2000 segments have been acquired and displayed somehow randomly... it was a frustrating test due to the lack of handy tools for editing this kind of ARB; yes, a 524k-row CSV edited with Notepad to put signatures on some pulses). In continuous sequence mode these "signatures" sometimes blink on the screen, but never when I run a single shot from the ARB generator. I do not (at least yet) know how the SDS2000 selects segments to overlay on the screen after a sequence is ready. (Is there even any significance? In particular because this cannot be used for better glitch hunting than the normal mode of operation.)
Performa01:
--- Quote from: rf-loop on January 08, 2016, 05:26:13 pm ---Yes I know but...
In sequence mode, trigger-position interpolation between samples can be done later, when the user is looking at the acquired segments; it can be done linearly or with a sinc curve, according to the user's settings.
--- End quote ---
You are certainly right for lower frequencies, up to about 100MHz at 1GSa/s.
If you look at my picture (Trace_Peak_x_Fast_Vectors) again, then at 300MHz @ 1GSa/s there is also a fair amount of uncertainty regarding the pk-pk amplitude. While triggering would still work as you described at a trigger level of 0V, we certainly would miss trigger events if the trigger level is set to >60% of the peak amplitude.
The same situation as for the picture in my last post, but this time with trigger level set to 66% of the signal peak amplitude (Trig_Peak_x_Fast_66%)
Doesn’t look like a stable and reliable triggering anymore, does it? ;)
For a periodic signal, this wouldn’t be a big deal, as every missed trigger condition just means 33ns wasted time. So even if we frequently miss 10 in a row (which is rather unlikely), it would only decrease waveform update rate by some 10%.
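To put a number on the missed-trigger fraction: assuming triggering is done on raw samples with no reconstruction, the sketch below counts what fraction of cycles of a ~300MHz sine produce a rising crossing of a 66%-of-peak trigger level at 1 and 2GSa/s. The frequency and level match the screenshots; the code itself is an illustration, not the scope's trigger implementation.

```python
import math

F = 299.994e6                 # signal frequency (the counter's reference reading)
LEVEL = 0.66                  # trigger level as a fraction of peak amplitude

def detect_fraction(fs, duration=1e-4):
    """Fraction of signal cycles that produce a rising crossing of LEVEL
    in the raw sample stream (no reconstruction) at sample rate fs."""
    n = int(duration * fs)
    crossings = 0
    prev = math.sin(-2.0 * math.pi * F / fs)   # sample just before t = 0
    for i in range(n):
        cur = math.sin(2.0 * math.pi * F * i / fs)
        if prev < LEVEL <= cur:
            crossings += 1
        prev = cur
    return crossings / (F * duration)          # crossings per true signal cycle

results = {fs: detect_fraction(fs) for fs in (1e9, 2e9)}
for fs, frac in results.items():
    print(f"{fs / 1e9:.0f} GSa/s: {frac:.1%} of cycles trigger at 66% of peak")
```

At 1GSa/s, roughly one cycle in ten has no sample above the 66% level at all, while at 2GSa/s the sample spacing is still narrower than the above-level arc of the sine, so every cycle triggers. That matches the unstable-trigger behaviour in the screenshot.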
But what about sporadic events, like the glitches you mentioned, which are the reason why a high segmented-memory trigger rate is so important, according to HPeysightilent? ;)
If we have the odd narrow pulse that has been acquired with fewer than 4 samples, its amplitude will most likely read too low without sin(x)/x reconstruction – except when we are lucky and a sample happens to hit the peak, but there is only so much chance of that.
The essence of what I’m saying is that a trigger condition we’ve missed because of the lack of proper signal reconstruction cannot be ‘repaired’ or ‘fine-adjusted’ afterwards, simply because recording never even stopped (and post-processing never started), as the trigger condition was never detected in the first place.
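The amplitude-loss effect is easy to reproduce. The sketch below places a smooth 4ns test pulse (a Hann shape, my own assumption for illustration) at a random position relative to a 1GSa/s sample grid and compares the largest raw sample against the peak of a sin(x)/x-reconstructed version:

```python
import math
import random

FS = 1e9                     # assumed 1 GSa/s sample rate
W = 4e-9                     # 4 ns wide test pulse, as in the screenshot case

def pulse(t):
    """Smooth, roughly band-limited 4 ns Hann-shaped pulse, true peak = 1.0."""
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * t / W)) if 0.0 <= t <= W else 0.0

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

random.seed(3)
raw_peaks, rec_peaks = [], []
for _ in range(30):
    t0 = 10e-9 + random.uniform(0.0, 1.0 / FS)    # random pulse position
    s = [pulse(i / FS - t0) for i in range(32)]   # raw ADC record
    raw_peaks.append(max(s))                      # what dots/'x' mode would show
    # sin(x)/x reconstruction evaluated on a 32x finer time grid over the pulse
    fine = [sum(s[i] * sinc(FS * t - i) for i in range(len(s)))
            for t in (t0 + k * W / 128 for k in range(129))]
    rec_peaks.append(max(fine))

avg = lambda v: sum(v) / len(v)
print(f"average apparent peak, raw samples : {avg(raw_peaks):.3f}")
print(f"average apparent peak, sinc interp : {avg(rec_peaks):.3f}")
```

With ~1ns sample spacing on a 4ns pulse, the raw samples typically under-read the peak by several percent (up to ~15% in the worst alignment), while the reconstructed waveform recovers the full amplitude. A trigger level set near the true peak would therefore miss some of these pulses if it only saw raw samples.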
But then again, up to some 100MHz we’re good without sin(x)/x.
Below is a borderline case with a 4ns wide pulse, which corresponds to a fundamental frequency of 125MHz. Without reconstruction it looks like this (Pulse_4ns_Peak_x_Fast)
There is a fair bit of uncertainty visible, but the scope still wouldn’t miss any trigger events as long as the trigger level is kept below some 93% of the peak amplitude.
With the reconstruction filter active, everything is fine. The automatic measurements of the transition times also instantly become more accurate (Pulse_4ns_Peak_sinx_Fast)
--- Quote ---My findings about the segments displayed (overlaid) between sequences are weird. Your figure of around 750 is inside the 500 to 1000 range I have tested, but it is weird because some things look nearly random, at least because I cannot guess all the variables. I have tried a two-channel method where one channel just triggers the segments forward and the other channel carries a slow sawtooth ramp.
--- End quote ---
My test results are in fact even more consistent than what I wrote, according to the following table:
1ns/div 732
2ns/div 784
5ns/div 771
10ns/div 796
My test was simple: I used the setup as described in an earlier post and tweaked the modulation frequency until I just got one full modulation period, i.e. no gaps on the screen.
Since the peak amplitude of my signal was less than 3 divisions, the ADC would resolve it with fewer than 70 different levels, which already gives an uncertainty of nearly 1.5%. Add to that the difficulty of recognizing just one single missing value, i.e. 1/25 of a division (all the more so since there is also a bit of noise obscuring tiny gaps), and the total uncertainty is considerably higher, but obviously still better than 5% according to my results, which could also be expressed as 763 +/- 4.33%.
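For reference, the spread and quantisation arithmetic can be reproduced directly from the table above (the ~70 ADC codes for a sub-3-division signal is the estimate from the text):

```python
# Displayed-segment counts per timebase, from the table above
counts = {"1ns/div": 732, "2ns/div": 784, "5ns/div": 771, "10ns/div": 796}

lo, hi = min(counts.values()), max(counts.values())
mid = (lo + hi) / 2            # centre of the observed range
spread = (hi - lo) / 2 / mid   # half-range as a fraction of the centre

# Quantisation floor: a <3-division signal resolved with fewer than ~70 codes
codes = 70
quant = 1.0 / codes

print(f"centre {mid:.0f}, spread +/-{spread:.2%} "
      f"(quantisation alone: +/-{quant:.2%})")
```

This gives a centre of 764 with a spread of about +/-4.2%, in line with the figures quoted, of which roughly a third is already explained by the coarse vertical resolution.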
I didn’t look any further into this, though, as it had already become clear that it wouldn’t be the secret weapon for glitch hunting. Without this prospect, it just doesn’t matter how the signal is actually displayed in sequence mode.
If, on the other hand, we had actually found that we could get a complete signal representation on the screen at a higher waveform capture rate than in normal mode, then this would just have been a hint at a bug, because then sequence mode would have done exactly the same work as normal mode, just in bigger blocks, so to speak. So it really shouldn’t be faster in this case.
rf-loop:
Do you believe primary trigger is generated after interpolation?
I do not.
But there are also signs of a bug, or even a design error, in the trigger "box". (At this time this is only a suspicion, not at all tested enough, and it is such a "deep inside the engine" thing that it can only be tested indirectly, so it is better done together with other people using other test ideas, to avoid faulty reasoning and fallacies.)
It looks like (and this is NOT confirmed, only a light suspicion until there is more real evidence) the trigger is not generated from the interleaved 2GSa/s data. It looks more like the trigger watches only one ADC's data, and thus 1GSa/s. (Or there is something else wrong.)
This is an old case, but I had forgotten it completely. A long time ago I wondered why the trigger counter drops out so easily even when the trigger level is not even near the signal top. The image on the TFT looks like everything is OK and the trigger is stable
(of course it looks OK, because the scope is fast enough for the eyes, there are enough well-triggered waveforms, and the captured-waveform position adjustment works well),
but the counter starts to drop out, especially when looking at 2GSa/s with lines and Sinc on or off, while the signal is far above the trigger level.
But, surprise: with the same signal, switch to 1GSa/s (turn another channel on) and look at the signal with lines and Sinc on or off. The surprise is that now it is easy to see at which level the trigger counter starts dropping out, because in the worst case the samples of one signal cycle sit just around the trigger level.
If it used the 2GSa/s stream, the worst-case data points would still be far above the trigger level. So why does the counter start dropping? At the time I thought it was "only" this counter, so I didn't make much of it and dropped it from further testing.
I think the counter has no source other than this primary trigger signal. I do not believe anything separate has been arranged just for this counter, so perhaps it can be used here.
It is easier to test with a sinewave above 300MHz. Around 375MHz is some kind of "sweet spot" for visual inspection, so this can be seen extremely clearly. (If using measurements, adjust the input signal level so that at 2GSa/s the scope shows the same p-p level as it shows at 1GSa/s; in all cases, of course, 1ns/div.)
At 2GSa/s, one can find a trigger level where the trigger counter just barely does not drop anything.
Then switch to 1GSa/s and adjust the input signal level for the same average p-p value. You will find that this point in the trigger level is exactly the same.
Turn Sinc off. Look at where the trigger level sits relative to the 1GSa/s worst-case samples. This can also be inspected nicely by stopping the scope and searching the waveform history slowly.
Why is it the same for 1 and 2GSa/s? Perhaps due to a design error, a bug in the FW, or is this the only way at this speed?
Why it is the same with Sinc on or off is perhaps clear: there is no sinc interpolation before the primary trigger is generated from the data stream.
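The 375MHz "sweet spot" can be quantified: for a unit-amplitude sine, compute the worst case (over sampling phase) of the largest raw sample seen within one signal period. The numbers show why a trigger working only on 1GSa/s data would start missing cycles at a far lower level than one using the interleaved 2GSa/s stream. This is a numerical illustration only, not a model of the actual trigger hardware.

```python
import math

F = 375e6          # the "sweet spot" test frequency

def worst_case_peak(fs, phase_steps=2000):
    """Minimum, over sampling phase, of the largest raw sample value
    seen within one period of a unit-amplitude sine at sample rate fs."""
    spacing = 2.0 * math.pi * F / fs        # phase advance per sample
    worst = 1.0
    for k in range(phase_steps):
        phase = 2.0 * math.pi * k / phase_steps
        peak, ph = -1.0, phase
        while ph < phase + 2.0 * math.pi:   # samples inside one period
            peak = max(peak, math.sin(ph))
            ph += spacing
        worst = min(worst, peak)
    return worst

for fs in (1e9, 2e9):
    print(f"{fs / 1e9:.0f} GSa/s: worst-case apparent peak is "
          f"{worst_case_peak(fs):.2f} of the true amplitude")
```

At 1GSa/s the samples advance 135 degrees per step, so in the worst cycle no sample gets closer than 67.5 degrees to the crest and the largest sample reaches only sin(22.5 deg) = 0.38 of the amplitude; at 2GSa/s the worst case is sin(56.25 deg) = 0.83. So if the counter starts dropping at the same level for both displayed sample rates, that level points at the 1GSa/s worst case.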
Performa01:
--- Quote from: rf-loop on January 09, 2016, 10:39:11 am ---Do you believe primary trigger is generated after interpolation?
--- End quote ---
Yes, I did believe that, simply because I could not think of any other way to get reliable triggering at high frequencies – and as we all know, I think the trigger system of the SDS2000 is hard to fault and certainly one of its strongest points, but…
… after reading your reply and performing some more experiments, I stand corrected. Well, that’s the beauty of having discussions with other engineers, as this prevents us from flogging a dead horse… ;)
I can see why you call 375MHz a ‘sweet spot’ :)
There are numerous others, 388MHz for instance. Still I thought I’d stick with just 300MHz as I didn’t want to go far outside the bandwidth specification for this scope.
I’ll show my experiments first and discuss it in more detail afterwards.
Common settings are dot display, no sin(x)/x reconstruction and fast acquisition.
First, there is my test signal: a 300MHz (<50ppb), 500mVrms sinewave. At a trigger level near the centre, the trigger frequency counter of the SDS2000 shows the correct value at its usual poor accuracy and low resolution, i.e. an error of -20ppm. So the trigger frequency display of 299.994MHz shall be the reference for the following tests. Note that the sinewave looks reasonably undistorted in this screenshot, even though no sin(x)/x interpolation has been used (Trig_12mV_2GSa_300MHz_dots_fast_x)
At a sample rate of 2GSa/s and a trigger level of 348mV, the distortion becomes much more evident. The frequency display still shows the reference value, but would start to drop if we set the trigger level any higher (Trig_348mV_2GSa_300MHz_dots_fast_x)
By reducing the sample rate to 1GSa/s, the picture changes dramatically – it doesn’t look like a sine at all, does it? What strikes me are the peaks at the trigger point and on top of the first and last negative halfwave – these look somewhat artificial. The trigger counter has now dropped from the reference value, and even though it might be less than expected, it still shows that there is a difference if the sample rate is cut in half (Trig_348mV_1GSa_300MHz_dots_fast_x)
Now the same experiment at an even higher trigger level of 500mV. At 2GSa/s, the distortion is even worse now and peaks on top of the wave become visible even at that sample rate now. The trigger frequency counter has dropped dramatically to just 196.923MHz by now, indicating that more than 33% of the trigger events get lost (Trig_500mV_2GSa_300MHz_dots_fast_x)
If we reduce the sample rate to 1GSa/s again, the picture does not even show a waveform anymore. Instead, we get some funny curved lines plus even something that looks like a flat oval at the trigger point. The trigger frequency has dropped even further, to 178.324MHz; once again the difference is less than expected, but there is a difference after all (Trig_500mV_1GSa_300MHz_dots_fast_x)
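The loss figures follow directly from the counter readings quoted above:

```python
ref = 299.994e6        # reference trigger-counter reading at a low trigger level
readings = {"2 GSa/s": 196.923e6, "1 GSa/s": 178.324e6}

# fraction of trigger events lost = 1 - (counter reading / reference reading)
lost = {label: 1.0 - value / ref for label, value in readings.items()}
for label, fraction in lost.items():
    print(f"{label}: {fraction:.1%} of trigger events lost")
```

That works out to about 34% lost at 2GSa/s and about 41% lost at 1GSa/s, so the gap between the two sample rates is real but, as noted, smaller than the raw-sample geometry alone would suggest.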
Now what does that tell us?
First I want to quote the datasheet, which says that glitch detection works down to 1ns. That indicates that there is always some monitoring going on, which has to be independent from the main acquisition sample rate. Remember how I demonstrated peak detect working even in roll mode at very low sample rates <1MSa/s, where the scope still didn’t miss a single 4ns wide pulse. This would not be possible at sample rates <250MSa/s, if the main acquisition system was the only mechanism in effect.
So my current guess is that the ADC always runs at full speed and the current sample rate displayed on top of the info bar only determines the amount of data to be stored in sample memory. At this point, the decimation mechanism (normal or peak detect) has to sit – and also the primary trigger system.
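A minimal sketch of that guessed architecture, with the decimation step (normal or peak detect) sitting behind a notional full-rate ADC stream; the function and all numbers here are illustrative assumptions, not Siglent's actual pipeline:

```python
def decimate(raw, factor, mode="normal"):
    """Reduce a full-rate ADC stream to the stored sample rate.
    'normal' keeps one sample per decimation bucket; 'peak' keeps the
    min/max pair of each bucket, so even a one-sample glitch survives."""
    out = []
    for i in range(0, len(raw) - factor + 1, factor):
        bucket = raw[i:i + factor]
        if mode == "peak":
            out.append(min(bucket))
            out.append(max(bucket))
        else:
            out.append(bucket[0])
    return out

# A 4-sample glitch (4 ns at a notional 1 GSa/s full rate) in a 1 ms record,
# deliberately not aligned to a decimation bucket boundary:
raw = [0.0] * 1_000_000
raw[500_500:500_504] = [1.0] * 4

factor = 1000                        # decimate 1 GSa/s down to ~1 MSa/s
normal = decimate(raw, factor, "normal")
peaks = decimate(raw, factor, "peak")
print("normal decimation sees the glitch:", max(normal) > 0.0)
print("peak detect sees the glitch:      ", max(peaks) > 0.0)
```

Normal decimation simply throws the glitch away, while peak detect preserves it at any stored sample rate, which is consistent with the roll-mode observation above.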
For me this would be a satisfactory estimation on how this scope works, if it weren’t for the surprisingly small difference between 1 and 2GSa/s.