Why are these numbers subtly different from the .csv numbers?
How subtle?
I've converted the differences between the two datasets into raw ADC values and they're all less than one ADC step (the majority are less than half an ADC step).
nb. One ADC step is exactly two pixels on screen, so in physical terms the majority of differences between RAW and .csv are less than a pixel.
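For anyone wanting to reproduce the comparison, here's a minimal sketch of the unit conversion, with synthetic stand-ins for the two dumps (the scaling is an assumption: an 8-bit ADC spanning 10 divisions at 0.5 V/div, so one step is 5.0/256 V, not necessarily the scope's real setup):

```python
import numpy as np

step = 5.0 / 256                                  # one ADC step in volts (assumed scaling)

raw_volts = np.array([0.1172, -0.0391, 0.2539])   # stand-in for the RAW dump
csv_volts = np.array([0.1172, -0.0390, 0.2540])   # the same samples from the .csv

# Express each difference in ADC steps instead of volts:
diff_steps = np.abs(csv_volts - raw_volts) / step
print(diff_steps.max() < 1.0)   # True: every difference is under one ADC step
```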
Seems like some sort of rounding error to me. Maybe the CSV export is done with fixed-point math.
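A fixed-point export would produce exactly this kind of sub-step mismatch. A toy illustration (the 8-bit scaling and the 4-decimal CSV precision are guesses, not Rigol's actual format):

```python
step = 5.0 / 256                   # assumed: one ADC step in volts

code = 137                         # a raw ADC code
exact = (code - 128) * step        # the voltage that code represents
csv_value = round(exact, 4)        # CSV written with 4 fixed decimal places

# The round-trip error is a small fraction of an ADC step:
print(abs(csv_value - exact) / step)
```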
Conclusion: RAW data is slightly better but using CSV isn't a real problem.
Maybe you've accidentally hit on the reason why Rigol decided to enable the "sinc on/off" button when you get to the extremes, i.e. so you can see how much Gibbs ringing is on screen.
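To see what that Gibbs ringing looks like in isolation, here's a sketch comparing textbook sin(x)/x (Whittaker-Shannon) reconstruction of a sampled pulse against plain linear interpolation as a stand-in for "sinc off". Synthetic data, not Rigol's actual filters:

```python
import numpy as np

t = np.arange(40.0)                                # sample instants
x = ((t >= 15) & (t < 25)).astype(float)           # sampled rectangular pulse

tf = np.linspace(0.0, 39.0, 2000)                  # fine grid for the screen trace
sinc_on = x @ np.sinc(tf[None, :] - t[:, None])    # sin(x)/x reconstruction
sinc_off = np.interp(tf, t, x)                     # linear interpolation

print(sinc_off.max())          # 1.0: linear interpolation never exceeds the samples
print(sinc_on.max() > 1.0)     # True: Gibbs overshoot appears around the edges
```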
Conclusion: RAW data is slightly better but using CSV isn't a real problem.
You mean if less than 3 channels are active?
...but can you trigger on that supposed sin(x)/x distortion (specifically over- or undershoot)?
If yes, the plot thickens.
Triggering is completely digital...
...but can you trigger on that supposed sin(x)/x distortion (specifically over- or undershoot)?
No.
Very interesting, but can you trigger on something that the supposed Gibbs-suppressor filter pushes over or under the trigger level (i.e. something not visible)?
I can do much better than that:
I set the thing up so that the peak goes to either side of a horizontal grid line when you turn sin(x)/x on/off, e.g.:
If you switch to dots mode and push the trigger upwards towards the peak, it only triggers when one of the physical sample points is above the trigger line.
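That matches a trigger comparator operating on raw samples only, while the sin(x)/x trace can peak between them. A minimal sketch of the idea (synthetic pulse and a made-up trigger level, not the scope's actual DSP):

```python
import numpy as np

t = np.arange(40.0)
x = ((t >= 15) & (t < 25)).astype(float)        # raw samples of a pulse

tf = np.linspace(0.0, 39.0, 2000)
trace = x @ np.sinc(tf[None, :] - t[:, None])   # sin(x)/x screen trace

level = 1.05                                    # trigger level just above the samples
print((trace > level).any())   # True: the drawn peak crosses the level...
print((x > level).any())       # False: ...but no raw sample does, so no trigger
```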
Just to be sure, so this is with Sinc=OFF, yes?
(but I need a different trigger level for each one because the peak moves up/down when I switch between them)
To make it look more like a regular scope they provide the Sinc=OFF feature, which is then indeed just a filter to suppress suspicious waveform features.
But they overdid it all a little, both the post-corrector and the suppressor.
What's a "regular scope"?
For everybody else? No problems.
Afterwards I started building an experimental analog contraption involving heavily non-linear components. When I started testing it I almost went mad: it did not produce the designed signals no matter what I did, and the non-linearities were all wrong. For days I debugged, tested and calculated until I finally found the culprit deep in the PS software menus... sin(x)/x on by default. After switching it OFF I discovered that the contraption had been working as designed from day 1.

So maybe one can get away with 2.5 samples per wfm (the PS actually had 4 at max frequency, all channels in use) for very well-known situations... but for heavily experimental stuff the only thing that counts is raw data, period. When you start replacing raw data with math fantasy you usually get string theory or something, not maglev trains.
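The sampling-rate point is easy to demonstrate: at 4 samples per period, any harmonic distortion a non-linear circuit adds sits above Nyquist and aliases, so no reconstruction, sin(x)/x or otherwise, can recover it. A sketch with an invented 30% third-harmonic distortion:

```python
import numpy as np

n = np.arange(8)                 # 2 fundamental periods, 4 samples per period
ts = n / 4.0

# A distorted sine (30% third harmonic, a made-up amount):
samples = np.sin(2*np.pi*ts) + 0.3*np.sin(2*np.pi*3*ts)

# The 3rd harmonic is above Nyquist and aliases back onto the fundamental,
# so the samples are indistinguishable from a clean 0.7-amplitude sine:
pure = 0.7 * np.sin(2*np.pi*ts)
print(np.allclose(samples, pure))   # True: the distortion is invisible in the samples
```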
What's a "regular scope"?
One with dots staying put and good for further (custom) DSP.
Watch the video again: those dots will be aliased, and therefore lies.
Very interesting, but can you trigger on something that the supposed Gibbs-suppressor filter pushes over or under the trigger level (i.e. something not visible)?
I can do much better than that:
I set the thing up so that the peak goes to either side of a horizontal grid line when you turn sin(x)/x on/off, e.g.:
Now if I move the trigger point up to that grid line, I lose the trigger when I turn sin(x)/x off:
Why do you show all images using 16-waveform averaging?
It would be nice to see the first image with, for example, 1 s persistence on and without averaging.
Edit: Here's the same thing with sin(x)/x off. Triggering is much tighter:
I'm not sure what conclusions can be drawn with my really crappy probing though.
What do you mean, triggering is much tighter? I can not see any difference.
The trigger is of course not the reason for this.
Conclusion: RAW data is slightly better but using CSV isn't a real problem.
You mean if less than 3 channels are active?
I mean that using the CSV format is just as good as grabbing the data over LAN with DSRemote (or whatever).