It's always hard to find the rationale behind a bug.
I think it is a bug, but probably one limited to the reported trigger location rather than an outright misfire.
My gut feeling is that the logic causing the bug is not acting on raw/primary data.
I suspect the data is preprocessed into things like crossings, slopes, and the times between them. Reuse of some of that preprocessing between trigger types might be the cause: perhaps it outputs something the programmer did not expect. Working only on raw data, this specific trigger type doesn't seem too hard to get right, but maybe the functional requirements are more complex than I think. For instance, I don't yet understand the need for hysteresis.
Maybe I'll put your (noisy) wave in my AWG and see if that replicates the bug. With an AWG more manipulations can be done, and thus more experiments per hour.
It's a pity my AWG only handles 16k samples, but placing min/max voltages one after another should give a comparable wave.
Simpler waves can also be tried.
An SDG1032.
Hardware-wise it can act on raw data from the ADC, but software-wise there can be separate pieces of logic. Programmers like to create "engines", but these may hide or introduce things; mostly it's a matter of moving from lower-level to higher-level decision making.
In my thinking, a trigger is what stops the continuous capture of samples. At that moment the pretrigger data already exists; the post-trigger data requires some more samples.
(It doesn't need to be a 50%/50% split; the horizontal position knob can shift it either way.)
In that sense it doesn't seem buggy. However, the displayed trigger location and the amounts of pre/post-trigger data are faulty, giving a wrong impression of what actually triggered the capture.
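The pre/post-trigger split described above can be sketched as a ring buffer that records continuously until the trigger fires, then collects the remaining post-trigger samples. This is only a toy model of the concept, not the scope's actual implementation; the depth and split fraction are illustrative:

```python
from collections import deque

def capture(sample_stream, trigger, depth=1000, pre_fraction=0.5):
    """Continuously record into a ring buffer; on trigger, keep the last
    pre-trigger samples and then acquire the remaining post-trigger ones."""
    pre_len = int(depth * pre_fraction)          # set by the horizontal position knob
    ring = deque(maxlen=pre_len)                 # pretrigger ring buffer
    it = iter(sample_stream)
    for s in it:
        ring.append(s)
        if len(ring) == pre_len and trigger(s):  # trigger stops continuous capture
            post = [next(it) for _ in range(depth - pre_len)]
            return list(ring) + post             # pretrigger + post-trigger = one frame
    return None                                  # stream ended without a trigger

# Example: trigger on a level of 60 in a ramp; frame holds samples 56..65.
frame = capture(range(100), lambda s: s >= 60, depth=10, pre_fraction=0.5)
```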
This is just guessing; someone has to dive into the software to find the "rationale" behind the bug. But experimenting can tell us more about it.
That, too, is possible. But if I had to speculate about the cause of the issue, I'd say it's happening because the trigger mechanism fails to reset the initial-condition latch (which, in this case, is the requirement of a rising edge through 3.5V at t between -1.5ms and 0.5ms) and fires again after it has already fired on the earlier section of the waveform.
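One way to picture that hypothesized failure mode: a two-stage trigger sets an "armed" latch once the initial condition is seen and should clear it after firing; if the reset is skipped, the second stage can fire again without re-qualifying. A toy sketch of the idea, where the levels and the `reset_on_fire` flag are purely illustrative and not the scope's real logic:

```python
def run_trigger(samples, arm_level=3.5, fire_level=1.0, reset_on_fire=True):
    """Toy two-stage trigger. A rising edge through arm_level sets the latch;
    a later falling edge through fire_level fires the trigger.
    With reset_on_fire=False the latch is (incorrectly) left set, so the
    trigger can fire again without re-qualifying the initial condition."""
    armed = False
    fires = []
    prev = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if not armed and prev < arm_level <= s:   # initial condition: rising edge
            armed = True
        elif armed and prev > fire_level >= s:    # second stage: falling edge
            fires.append(i)
            if reset_on_fire:
                armed = False                     # correct: must re-arm first
            # if reset_on_fire is False, the latch stays set (the suspected bug)
        prev = s
    return fires

# A wave with one qualifying rising edge but two falling edges:
wave = [0, 4, 0, 2, 0]
correct = run_trigger(wave)                       # fires once
buggy = run_trigger(wave, reset_on_fire=False)    # fires a second, spurious time
```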
Hmm...that suggests the possibility of a test. Is there anything that indicates, down to the picosecond, when the trigger fired? I ask because I can set the memory depth to 1.4Mpoints, and that would give me 10 segments worth of captures. If the segment data includes the relative time at which the trigger fired, then we can look at the time of the incorrect capture segment relative to the time of the immediately prior segment. If the difference is less than one waveform period then we'll know for sure that the issue is with the initial condition latch not being reset.
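The check proposed above boils down to: given each segment's trigger timestamp, flag any segment that fired less than one waveform period after the previous one. A minimal sketch, where the timestamps are made-up example values and only the 60 Hz period comes from the discussion:

```python
def suspicious_segments(trigger_times, period):
    """Return indices of segments whose trigger fired less than one
    waveform period after the previous segment's trigger."""
    return [i for i in range(1, len(trigger_times))
            if trigger_times[i] - trigger_times[i - 1] < period]

period = 1 / 60                      # ~16.7 ms for a 60 Hz signal
times = [0.000, 0.100, 0.105, 0.200] # hypothetical segment timestamps, seconds
# Segment 2 fired only 5 ms after segment 1 -> latch likely not reset.
flagged = suspicious_segments(times, period)
```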
Hmm...looks like if you enable the list in the segment history, it'll show you the time of the segment. It doesn't take you all the way down to the picosecond, but it may be good enough for our purposes here. I've got a test running now with that setup, but it hasn't reproduced yet. Should be interesting to see what happens.
In my thinking a trigger is what stops the continuous capture of samples. At that moment there's pretrigger data. ...
Excuse me but ... What
Turns out that at 1ms/div, the standard acquisition rate of a 60Hz signal is just too slow to make it clear whether or not the trigger is firing in that way, with acquisitions happening somewhere between 50ms and 66ms per frame. In order to ensure consistent acquisition at 16ms per frame (i.e., at the period of the signal itself), I had to drop to 500us per division and 140k points per frame. With that setup, I get a consistent 16ms per frame even with the mask test enabled, and I'm now running that test. Should be interesting to see what comes of it.
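For reference, the numbers above can be sanity-checked with simple arithmetic, assuming the 14 horizontal divisions of this scope family (an assumption on my part): at 500 µs/div the capture window is 7 ms, 140 kpts over 7 ms implies 20 MSa/s stored, and since the window is shorter than one 60 Hz period, the best-case frame time is one signal period, about 16.7 ms:

```python
# Rough frame-time arithmetic for the setup described above
# (14 horizontal divisions assumed; values illustrative).
divisions = 14
t_div = 500e-6                       # 500 us/div
window = divisions * t_div           # capture window per frame: 7 ms
points = 140_000
sample_rate = points / window        # stored sample rate: 20 MSa/s
signal_period = 1 / 60               # 60 Hz waveform: ~16.7 ms

# The window is shorter than one signal period, so each frame must wait
# for the next trigger: the best-case frame time is one signal period.
frame_time = max(window, signal_period)
print(window, sample_rate, frame_time)
```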
Have you checked over a longer period whether, with the 20MHz BW filter on, it never does this in this test setup, or whether it still does but much more rarely? If the filter completely stops the failure, that is an important finding.
I still haven't run this test for a massively long duration. If you can give me an idea of how long I should run a 20MHz bandwidth filter test, I'll be happy to do it. Otherwise I'll presume that running it for, say, 12 hours, is sufficient.
I have one question about the acquisition system. When I reduce the memory per capture, it drops the sample rate. That's to be expected: it's having to spread the same amount of time over a smaller number of points with respect to what gets saved. My question is this: does the ADC actually still run at a full 1GS/s for the triggering mechanism, and then the captured points are decimated to fit into the smaller number of sample points in the buffer? Or is the ADC actually run at the stated sample rate and, thus, that sample rate is also what the triggering mechanism sees?
It matters, *a lot*. If the sample rate really is reduced for the triggering mechanism then that could easily cause the issue to disappear if it's somehow time-dependent.
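The distinction in the question can be illustrated with a toy model (not the scope's actual architecture): if the trigger sees the full-rate stream, a narrow glitch still fires it even though the stored record is decimated; if the trigger only sees the decimated stream, the glitch can vanish entirely:

```python
def edge_trigger(samples, level):
    """Indices where the signal rises through `level`."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < level <= samples[i]]

# Full-rate stream containing a single one-sample glitch:
full = [0.0] * 10
full[3] = 5.0                        # narrow pulse, visible only at full rate

decimated = full[::4]                # keep every 4th sample for storage

# Triggering on the full-rate data sees the glitch;
# triggering on the decimated data misses it completely.
print(edge_trigger(full, 2.5), edge_trigger(decimated, 2.5))
```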
It has been said/confirmed that the trigger system always runs at max speed.
Ah, yes, so it has, and not too long ago at that: https://www.eevblog.com/forum/testgear/siglent-sds1204x-e-released-for-domestic-markets-in-china/msg3360130/?topicseen#msg3360130
OK, so that raises the obvious question: should there be a difference in the ease of reproducing the issue on the basis of the capture size? One would think not, given the above.
It is explained in a bit more detail, but unfortunately "scrambled by a natural enigma". Our "enigma" is the Finnish language, which is said to be the second most difficult in the world after Chinese.
Some of the images have English/Finglish explanations, but the more in-depth text explanations are in Finnish only.
https://siglent.fi/oskilloskooppi-tietoa-sds1004x-e--wfm-speed.html
I am slowly reading through this with help from Google translate. It's very helpful, thank you for writing it!
PS: there is a typo in the second (all-yellow) table shown in Figure 1, in the line with t/div = 1ms: column 6 contains 1G Sa/s, but it should read 1M Sa/s.
Typing mistake corrected, thank you.
Something that would be very useful for me would be a block diagram of the scope. This would help me to construct a good internal model for what it's doing. Just a page or two back you posted a nice block diagram showing the input front end, the DAC for offsets, the ADC feeding the triggering system, but then it gets less precise. Is there a more complete block diagram that shows the acquisition/history/sequence/frame system in some detail?
There is no functional "block diagram" of that kind, one that also gives an idea of what the user actually sees while using the scope.
I also miss this kind of block diagram, not so much for myself but for user guidance and counselling.
What I have is just some self-made teaching "flip chart" material that also needs a spoken explanation when the images are shown... but they are what they are. Some of them give a rough idea of the sequence, history and so on, but only a very rough, fuzzy picture of what is going on.
@rf-loop, can you suggest any other tests I should perform aside from the 20MHz bandwidth test? I'm about to set that one up and leave it running overnight.
And the answer is: there *is* a difference in the ease of reproducing on the basis of the capture size. I am able to reproduce this most easily at 14M points and reasonably easily at 1.4M points. It is nearly impossible to reproduce at 140k points (I reproduced it only once with that setting), and seems utterly impossible to reproduce at 14k points.
How do I know? Because when I switch from 140k points to 1.4M or 14M points, I've had the issue reproduce right then and there, more than once. Nothing of the sort has ever happened going the other direction.
Additionally, the mask display will show the number of frames it saw before failure, and that number grows significantly as the sample size decreases.
This has me scratching my head about what's going on here. Unless the triggering system is using the number of sample points in some way, I can't explain how the number of sample points could possibly affect the probability of reproduction. But it does.
Has anyone already bothered to exchange the Intensity/Adjust encoder for a detented encoder?
I've been wondering the same thing.
I did it right after purchasing, because the non-detented encoder was driving me nuts; a pretty weird design decision given that they use detented ones for vertical/horizontal anyway.
In fact, I even took photos to dump here, but I'm a lazy arse, so that never happened.
I can't remember the P/N of the original non-detented one, but it turned out to be some sort of Chinese unobtanium. After searching a bit through what ALPS/Bourns/Omron offer, the closest I was able to find dimension-wise was the Bourns PEC12R-4220F-S0024.
To my taste it has a bit too much detent force; otherwise it has been working absolutely fine for the last half year.
UPD:
I still have the original part; it has no P/N, but "LJV" embossed on it.
Thank you, that was the information I was looking for!
I have done such a mod to my two other scopes, and my only regret was not doing it in the first place.
Chris