Found two older notes on triggering. These might be more "know your scope" (and its limitations) stuff. My primary guess is that the trigger's signal path has more noise than the path for the displayed signal, and I'm just approaching the limits.
For the tests below, I'm using a direct BNC-BNC cable from the wavegen output to the channel input, in 1M/high-Z modes or 50 ohm modes as desired. The 50 ohm modes give (almost) the same results, but of course the signal amplitude is halved and the vertical gain has to be adjusted correspondingly. (Earlier I just held a probe on the wavegen output, with the same results.) Values shown below are for the 50 ohm modes.
First:
Defaults, wavegen 50 ohm 500 mVpp (square, 1 kHz), 50 ohm for the channel input, auto setup -> should give 100 mV/div and a (rising) edge trigger with 0 V offset. (Note: set the 50 ohm mode on the wavegen first, as it would otherwise "helpfully" readjust the amplitude value for you (but not the actual amplitude driven) to a value you did not want to set. I made that mistake twice before realizing what was going on, and once after realizing.)
Increase the vertical from that 100 mV/div to 1 V/div -> the trigger is lost, even though visually there looks to be "plenty" of clean edge to trigger on (half a division).
Adjusting the trigger level up by two steps to just +40 mV (in my case) restores triggering. I'd have expected triggering to work fine until the edge height approaches a few LSBs (i.e. the noise); here it is still around 16 LSBs total, or ~8 LSBs from 0 V to the square's top (or from the bottom to 0 V). The 40 mV adjustment is 1-2 LSBs. (I'm not sure how exactly it scales things, so those LSB calculations could be slightly off, but the order of magnitude should be correct; see the back-of-envelope check below.)
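For reference, the arithmetic behind those LSB figures as a tiny C snippet. The assumption that the 8-bit ADC spans about 8 vertical divisions is mine, not from the datasheet, so the absolute numbers are only indicative:

```c
#include <stdio.h>

int main(void) {
    /* Assumption: 8-bit ADC, full scale covering ~8 vertical divisions. */
    double volts_per_div = 1.0;                  /* vertical gain setting */
    double full_scale    = 8.0 * volts_per_div;  /* assumed ADC span      */
    double lsb           = full_scale / 256.0;   /* volts per ADC code    */

    printf("1 LSB            = %.2f mV\n", lsb * 1e3);    /* 31.25 mV */
    printf("500 mVpp edge    = %.1f LSB\n", 0.5 / lsb);   /* 16.0     */
    printf("40 mV level step = %.2f LSB\n", 0.040 / lsb); /* 1.28     */
    return 0;
}
```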
If I change the trigger coupling to HF reject, with 0 V offset it triggers again, but with a bit of jitter, and the trigger point seems to be near the top of the edge. I can understand the shift if the triggering is done in its own analog path (assuming e.g. an analog filter, which will naturally slow down the relatively fast edge, so the trigger path's signal reaches the trigger level later), but the jitter? Shouldn't HF reject reduce noise and trigger jitter? (Also, I've come to understand that the triggering is digital ("It has an innovative digital trigger system with high sensitivity and low jitter"), so the filtering should not cause a shift, unless the HF reject is done in analog... in which case it would not be fully digital.)
If the wavegen is set to 1 Vpp (twice the amplitude) and the vertical correspondingly to 2 V/div (and the trigger level returned to 0 V), the triggering is, surprise, stable, even though visually the situation should be the same as before. Lowering the trigger two steps to -80 mV loses the triggering. So, almost the same, but with a slightly better "margin".
The puzzle for me is the difference between the ability to trigger and the ability to show a clear signal, and on the other hand the difference in triggering stability between those two amplitude and gain levels. In the fully digital domain there should be no problem triggering for at least one, if not two, more steps of vertical gain (i.e. 2 V/div and 5 V/div). Only then does the edge start to become truly small to see (but it's still there!).
The other case:
*snip* *snip*
... Well, this case got cleared up after half an hour of digging into it this time. I was wondering why the wavegen's cardiac waveform lost triggering on the mid-height peak before the lowest peak when lowering the trigger level, even though the mid-height peak seems to have a "better" start (from a slightly lower voltage and with a steeper slope). Zooming in on that mid-height peak reveals a tiny sudden step in the "flat" area, just before the peak starts to rise. So, just a slightly poor-quality waveform. Maybe the waveform edges wrap around there and the developers forgot to adjust the waveform for proper continuity.
My scope updated just fine to the latest version, including the self-calibration afterwards. The calibration result was the best so far; earlier the offsets and gains were clearly "about there", now "almost or exactly there". Could be just a lucky case this time. (I didn't assume that scopes would be precision instruments anyway, except on timing.)
Is the scope supposed to remember all previous settings after reboot?
…
Bugs? Undocumented features?
It would be nice (at least for me) if a menu selection that uses the adjustment knob kept that knob active the same way that menu (numeric) value adjustments do. That is, value adjustments seem not to "time out" the knob (one can wait a minute and still adjust the value), but selections time out quite quickly. I pretty much never use the knob's default action of intensity (or whatever) adjustment, but I am constantly making a selection change, looking at the view for a moment, and trying to select another choice... only to end up tweaking the intensity instead.
However, this would then benefit from some way to deselect the current menu item (returning the knob to intensity adjustment) without changing the selection. The current scheme of clicking the menu button first to select/activate it, then stepping through the values, makes it impossible to use another push for deselect. (I have more to say about this, but I need to check / massage the ideas a bit more first...)
10:1 probe, normal modes (default stuff), trigger on Normal (and 0 V), probe shorted, 10 mV/div, showing about 10 mV of noise (as expected). Adjusting the channel's position moves the offset pointer/trigger line smoothly (about 1 pixel at a time, 0.20 mV per step), but the displayed signal does NOT move up/down at all (it otherwise keeps updating as expected) until about the 11th/12th step, when it jumps to the new height on the display. However, if the trigger is set to Single, showing one span of noise, that signal view does move smoothly with the position adjustment. I would have expected the Normal mode view to move smoothly with the position adjustment, too. Sure, it is mostly noise, but e.g. the average level of the "signal" could be looked at, and with this effect it is not always shown correctly. Explanations? Bug?
120mV difference at 2V/div is not much and certainly irrelevant for practical work.
The puzzle is solved by remembering that the scope is rather dumb. It does not know that the waveform is clean and that an even lower hysteresis would work just fine. It just applies a standard hysteresis value that has been tested to work in 90% of practical situations on a low-noise scope like this.
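Just to illustrate the principle, here is a sketch in C of how a fixed-hysteresis rising-edge detector typically works. The state machine is generic and the 0.5 div band in the comment is a typical trigger-sensitivity figure, not Siglent's actual number:

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic rising-edge trigger with a fixed hysteresis band, in ADC codes.
   Trigger sensitivity is typically specified in divisions, so the band
   scales with the V/div setting: at 1 V/div a hypothetical 0.5 div band
   would be 500 mV, the full height of the square above, hence marginal. */
typedef struct {
    int16_t level;       /* trigger level                              */
    int16_t hysteresis;  /* fixed fraction of a division, in codes     */
    bool    armed;       /* signal has been below (level - hysteresis) */
} edge_trigger;

static bool edge_trigger_step(edge_trigger *t, int16_t sample) {
    if (!t->armed) {
        /* Arm only once the signal dips below the hysteresis band. */
        if (sample < t->level - t->hysteresis)
            t->armed = true;
        return false;
    }
    if (sample >= t->level) {  /* crossing while armed -> trigger event */
        t->armed = false;
        return true;
    }
    return false;
}
```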
On the slope triggering.
On a rising edge, I guess the higher trigger level could also have its own hysteresis (like the lower one has, i.e. the signal has to go far enough below the trigger level), but since the higher level needs to be above the lower level anyway (this is even enforced in the UI), it becomes a sort of moot point. I.e. the upper level can be anywhere between the lower trigger and the top of the edge, no extra margins needed. (Well, some tiny amount is needed, but so small that it is not about hysteresis.)
However, why does it not work the corresponding way for the falling edge? The lower trigger can be as close to the edge's bottom as wanted, and the upper trigger as close to the edge's top, and it still triggers, as long as the separation between the levels is more than about that hysteresis. I'd have expected the upper trigger to need to sit below the signal top by that hysteresis amount, corresponding to the requirement that the lower trigger has to sit above the signal bottom for a rising edge.
Also, why is any hysteresis needed at all? The slope trigger, by its very definition, has hysteresis built in. Especially considering that on the falling edge there is no hysteresis relative to the signal (only between the two levels), which already comes close to having none. Hmm... maybe I should test with a noisier signal.
Quote: "120mV difference at 2V/div is not much and certainly irrelevant for practical work."
Ooh, the reason why I originally got into these (non-)issues is that it was very much relevant for practical work. It was quite a complex and long waveform; I needed some way to get it stable, and the seemingly easiest way was to edge trigger on the peak of the highest ringing wave, which was only a little above the 2nd highest wave. Not much of a margin to play with. I don't remember the specifics (amplitudes etc.) any more, and the measured device (a PC PSU) is now in bits and pieces.
Don't forget that we need two reliable thresholds to calculate the slope. If you dispense with the hysteresis, the thresholds could become unstable and ambiguous due to slow edges and/or noise.
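Again purely as an illustration (the real hardware will differ), a sketch of a two-threshold rising-slope trigger in C; note how a single re-arm band below the lower level is enough to keep both crossings unambiguous, which matches the observation above that the upper level needs no extra margin of its own:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical two-threshold slope trigger (rising), in ADC codes. */
typedef struct {
    int16_t  low, high;  /* the two trigger levels, low < high    */
    int16_t  hyst;       /* re-arm band below the lower level     */
    bool     armed;      /* signal has dipped below (low - hyst)  */
    bool     in_edge;    /* lower level crossed, waiting for high */
    uint64_t t_low;      /* sample index of the lower crossing    */
} slope_trigger;

/* Returns true when the edge completes; *dt gets the rise time in samples,
   from which the slope (high - low) / *dt can be judged. */
static bool slope_trigger_step(slope_trigger *s, int16_t sample,
                               uint64_t now, uint64_t *dt) {
    if (!s->armed) {
        if (sample < s->low - s->hyst)
            s->armed = true;
        return false;
    }
    if (!s->in_edge) {
        if (sample >= s->low) { s->in_edge = true; s->t_low = now; }
        return false;
    }
    if (sample >= s->high) {
        *dt = now - s->t_low;
        s->armed = s->in_edge = false;
        return true;
    }
    return false;
}
```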
Yes, the scope is supposed to restore its previous state after a reboot.
Yes, there have been several complaints in the past that certain parameters have not been properly restored, and most of these problems have been fixed.
I’m currently not aware of any such issues but would not be surprised if there were still some left.
If you find something, please document it, describe how to reproduce it and post it here. We can then verify the issue and make Siglent aware of it.
I wonder if the minimum resolution of the vertical offset control (on the analog side) is different (i.e. bigger steps), not matching the input resolution? (It does not have to be the same if there is headroom in the input range (as there seems to be), but then the difference should be corrected on the digital side, or the UI/control should simply use bigger steps for the offset.) I couldn't find info on the offset step size in the datasheet.
I hope this version explains better what I have been after.
All that theory I already knew.
The "issue" I have with the results I found (or with not having 1 LSB level 0-offset self-calibration) is that the scope obviously knows how to adjust offset at the 1LSB resolution (as proven with the frozen sweep position adjustment and GND-coupled input), but the normal input samples are not behaving the same.
My addition to the theory section:
As long as there is headroom in the ADC (e.g. showing only a 200-value range with an 8-bit ADC whose total range is 256 steps), the "fine-tuning" of the offset (to within the ADC's resolution, or actually even better if the samples have enough bits) can be done in software/digitally. (It can be done even with no ADC headroom, but then either the top or the bottom end would clip a bit.)
As an example: say we have settings where the ADC resolution corresponds to exactly 1 mV, but the offset system provides only 10 mV resolution, and we would like a 6 mV offset. If the system then chooses the closest analog offset (10 mV), the ADC gives values that are 4 mV off the mark (too much offset). But the system knows that, so it can just subtract 4 mV from every sample digitally/in software, and now the final samples are offset by the desired 6 mV. (A sketch of this split is below.)
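A minimal sketch of that coarse/fine split in C, using the numbers from the example; the 10 mV DAC step and the nearest-step rounding are just the assumptions of the example, not the scope's actual scheme:

```c
#include <stdio.h>
#include <math.h>

/* Split a requested offset into a coarse analog DAC setting and a
   fine digital residual to be subtracted from every sample. */
int main(void) {
    double requested_mv = 6.0;   /* what the user asked for          */
    double dac_step_mv  = 10.0;  /* assumed analog offset resolution */

    double analog_mv   = round(requested_mv / dac_step_mv) * dac_step_mv;
    double residual_mv = analog_mv - requested_mv;

    printf("analog DAC offset : %.0f mV\n", analog_mv);               /* 10 */
    printf("digital correction: -%.0f mV per sample\n", residual_mv); /* -4 */
    return 0;
}
```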
So, this scope does know how to do the math, since it can do the same software offset adjustment on the frozen-sweep samples (single-triggered), but it seems it is not doing it on the incoming sample flow. And because it has not corrected the incoming data, the shown values, both in the running waveforms (except with GND-coupled input) and in the single-triggered frozen sweep, can also be slightly wrong. In the single-trigger view it is only calculating the "fine-tuned" offset change on top of the uncorrected sample data; it does not apply the correction at any point, even during this post-processing where it would have all the time it needs and then some.
In any case, the shown samples do not carry the fine-step offset shown to the user (except for roughly 1 in every 6 offset steps; EDIT: and there is no way to know which of those 6 steps gives a physical offset that matches the shown offset) (ignoring absolute accuracy, considering only relative accuracy and resolution). Either the scope should apply the corrective math (one signed 8-bit addition per sample) or let the user adjust the offset only at the resolution actually available (since it apparently cannot be controlled as accurately as we are led to believe anyway). (I am also aware that the best offset step size in the lowest ranges would likely be about double what the steps do now at 2 mV/div sensitivity; I'd be fine with that.) Both solutions would end up with correctly offset samples; one needs a tiny bit more calculation and gives smoother offset control, the other gives visibly coarser offset control but matches what the analog side can actually do.
(Note that the corrective addition does not need to be done on every single input sample, only on the shown samples, but then everything else needs to account for the dualism of having a different "effective input sample offset" and a higher-resolution "visual offset". E.g. trigger levels would need to work with the same effective offset as the input data, but be shown with the same "visual offset" as the samples. So the added complexity of such a solution would just be asking for more bugs... Adding an offset correction to every single input sample is simple, but it does need that one calculation, and there can be quite a number of samples coming in; are there enough processing resources in the system?)
An additional question relates to that GND-coupled input. How can its offset be controlled at 1 LSB resolution? I'd have assumed it is simply a switched connection to ground at the AFE input (or similar), and thus goes through exactly the same analog processing, showing the internal noise, the offset calibration errors, and those ~3 LSB jumps with offset adjustment. But since the GND-coupled input's offset moves with every 2 steps (1 LSB), something is very much different for it. Is the GND switched into the path after the analog offset injection, with all of the offset applied as a digital "correction"? Or even later, just before the ADC (as it does not need any gain either)? Or is it simply a digital simulation (with a tiny bit of simulated noise on it)?
Quote: "All that theory I already knew."
Sorry, but from your replies this is not always absolutely clear to me, and there might be others coming across this thread who may actually find it useful to get the full explanations posted here. I don't mean to bore anybody, but I still prefer to provide complete information.
So for this posting I apologize in advance that I will write a lot of things you probably know already.
Quote: "My addition to the theory section: As long as there is headroom in the ADC [...] now the final samples are offset by the desired 6mV."
You are right, it could be done that way, on a very different scope that is, one that would be slow as a turtle.
There is also no point in modifying the data in Stop mode all of a sudden. In this mode the scope just shows the very last acquisition, exactly as it was displayed during Run (where it was not alone but combined with numerous previous acquisitions).
Only when you alter the vertical position in Stop mode does it do some math to reposition the trace to where it's supposed to be with the new offset, and usually the assumptions are correct, so you don't see any major jump when starting Run mode again.
There are no samples not showing on the screen. This is one of the points of the Siglent X-series scopes: they don't hide any real data, and users can always see everything that has been captured at a glance. A single video frame on the screen, updated every 40 milliseconds, contains up to thousands of trigger events, and the entire sample memory is crammed into that display. Yes, many samples will overlap that way, but you'll never miss a peak or glitch as long as the effective sample rate is high enough to capture it in the first place, and this is also why intensity grading works so well with these scopes.
You have just identified a number of impacts that a software correction of the input offset would have on other areas, like triggering and measurements, so it is not just an easy coffee-break change. But the biggest argument against such an implementation would be the impact on performance, as already stated before. Post-processing every single sample in a scope that can use up to a total of 280 Mpts (with both ADCs active) is just not going to happen.
I for one don’t see any noise in GND mode, not even at the zoomed 1mV/div gain setting. But I admit that I do not know where exactly the switch is implemented. From some experiments, my first suspicion would be that simply the data transfer between ADC and sample buffer is stopped.
See the attachment, using single trigger at 1 V/div (it looks similar at any sensitivity, but these are samples and are affected by sensitivity changes afterwards). Whether to call it noise or something else, I don't know, but it certainly isn't a flat line.
It actually does not need to be slow (with proper hardware), but it might not be cheap or easy, or possible with the tech in this scope. I was thinking about such solutions 15-20 years ago with the DSP chips available back then (IIRC just 8 parallel units capable of addition, though each could be independent, and half of them could do MACs, not just additions). These days it is nearly trivial considering all the parallel processing advancements, if one can choose the hardware, which obviously isn't the case here; the scope has what it has.
The maximum rate of incoming samples is, I think, 4 GSa/s (2+2). Modern cheap DSP tech, or a parallel processing unit in a CPU, can handle that trivially. E.g. with 64 byte-sized additions in parallel (and the other operand is the same number for all of them, so no bandwidth is wasted on moving varying operands, only on the samples), a 62.5 MHz rate is enough. (Even the original Pentium's MMX was close to handling that, 20 years ago.) As I mentioned above, I was musing about this kind of stuff back then (not for scope samples but for SDR processing). It was not quite as trivial back then, or needed a chip costing hundreds of $$$.
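To make the idea concrete, a sketch of that per-sample correction using SSE2 (16 samples per instruction); this is just commodity-CPU SIMD as an illustration of the workload, nothing to do with the hardware actually in the scope:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Add a constant signed 8-bit offset correction to a sample buffer,
   saturating at the ADC rails, 16 samples per instruction. */
static void apply_offset_sse2(int8_t *buf, size_t n, int8_t corr) {
    __m128i c = _mm_set1_epi8(corr);
    size_t  i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i v = _mm_loadu_si128((const __m128i *)(buf + i));
        _mm_storeu_si128((__m128i *)(buf + i), _mm_adds_epi8(v, c));
    }
    for (; i < n; i++) {  /* scalar tail with the same saturation */
        int16_t s = (int16_t)buf[i] + corr;
        buf[i] = (int8_t)(s > 127 ? 127 : (s < -128 ? -128 : s));
    }
}
```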
However, the alternate version does not need to process all samples individually; it could first process them into the visual data (a maximum of 640x400 pixels in a separate temporary layer), then translate those pixels by the calculated screen-space offset (roughly as sketched below). About two orders of magnitude less calculation. However, there would then be that complexity. Well, since it needs to do the addition (and other stuff) to place the samples into pixels in the first place, it is just a matter of changing that one value... see the paragraph after the next one.
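The screen-space variant could look something like this; the 640x400 layer size is from above, everything else (layer format, row order) is assumed:

```c
#include <string.h>
#include <stdint.h>

enum { LAYER_W = 640, LAYER_H = 400 };

/* Translate an already-rendered waveform layer down by a few pixels
   instead of touching the raw samples: ~256k byte moves per frame
   rather than millions of per-sample additions. Row 0 is the top. */
static void shift_layer_down(uint8_t layer[LAYER_H][LAYER_W], int pixels) {
    if (pixels <= 0 || pixels >= LAYER_H)
        return;
    memmove(&layer[pixels][0], &layer[0][0],
            (size_t)(LAYER_H - pixels) * LAYER_W);
    memset(&layer[0][0], 0, (size_t)pixels * LAYER_W);  /* clear exposed rows */
}
```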
And anyway, the scope is doing that offset calculation stuff already in stop mode, just with the slightly "wrong" offset …
Quote: "And anyway, the scope is doing that offset calculation stuff already in stop mode, just with the slightly 'wrong' offset …"
I have stated it already: Stop mode is completely different. It only shows a single acquisition; there is no new data to be combined with the existing data, and no building up of millions of data points for a single video frame every 40 ms. Consequently, there is also no intensity grading in Stop mode. All in all, it's not only very different but also a lot easier.
I have already mentioned the newer SDS1000X-E series, which has the processing for Average and Eres implemented in hardware, so there is almost no slowdown (but still some), even though the max. memory depth is limited to 1.4 Mpts. This platform is based on the Xilinx Zynq SoC, which is a very powerful architecture and can handle a huge memory. Anyway, the base philosophy has been kept the same; I have not thoroughly checked yet, but I would be very surprised if offset were handled any differently than in the SDS2k.
I'm not in my lab anymore until next weekend, so I cannot look right now how the newer SDS1000X-E handle vertical position changes down at their highest sensitivities. There we have even +/-2V offset and one LSB in the 500µV range is just 20µV. We’d need a DAC capable of 200k steps, some 18 bits…
Maybe rf-loop is reading this and could have a look?
So it seems Siglent have taken advantage of the powerful platform and actually implemented some "offset fine adjust" block in hardware (FPGA) to cope with both the higher offset range and the higher sensitivity of these scopes.
SDS1004X-E.
If the offset has roughly 240 µV increments and the offset range is 4 V (±2 V), the offset DAC needs at least 14 bits.
But if it is a 14-bit DAC and the step is 240 µV, then its range is not enough. Even with a 260 µV step the range is only just over 4 V (4.26 V). Remember that several analog components are simply installed as-is on the production line; after that there needs to be enough room for factory calibration from scratch, and then still room for years of component drift for self-cal or a true cal.
The self-cal resolution for the vertical offset adjust voltage is just this DAC's resolution. If the increment is 250 µV, the whole 14-bit range gives 4.096 V of room for the needed 4.000 V. Thinking about it, that feels a bit too narrow for mass production (if it works somehow as I "believe"). So it is perhaps more than just 14 bits... (A quick check of these ranges is below.)
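A quick sanity check of those range figures (range = 2^bits x step); a sketch only, since the actual DAC is unknown:

```c
#include <stdio.h>

/* DAC full-scale range for a few bit-width / step-size combinations. */
int main(void) {
    printf("14 bit @ 240 uV: %.3f V\n", 16384  * 240e-6); /* 3.932 V, short */
    printf("14 bit @ 250 uV: %.3f V\n", 16384  * 250e-6); /* 4.096 V, tight */
    printf("14 bit @ 260 uV: %.3f V\n", 16384  * 260e-6); /* 4.260 V, fits  */
    printf("18 bit @  20 uV: %.3f V\n", 262144 *  20e-6); /* 5.243 V        */
    return 0;
}
```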
Then we should not forget that any DAC has INL and DNL errors, and for higher-resolution DACs these errors inevitably become substantial.
This is probably the main reason why it is so hard to determine the true physical offset resolution just by watching the scope do its quick-cal. With a DNL error of just ±0.5 LSB (not at all uncommon even for very good DACs), one LSB step could actually be anything between 100 and 300 µV if, for example, one LSB would ideally be 200 µV...