I'll take that as a "no"....
Probably there is something. If it interests you, maybe you should test it and report back if you find anything.
So you have this range of 400 dollar scope, 4,000 dollar scope, 40,000 dollar scope and I have no illusions this Rigol is going to compete with the latter.
And let's remember that the Picos are far more expensive than the Rigols (for far less hardware!)
Without forgetting that most, if not all, of the money goes into a better version of the important part of the hardware: the front end, acquisition and memory. And the crappiest of the Picos is still better on the software side; can't say that for the Rigol with its load of hardware.
Other than your complaint about the measured value "jumping around", what is the problem with the first measurement you posted just above? Compared to the Pico with 50 samples, the phase offset seems within reason.
Yet weirdly enough Picos are despised around here.
It's bashing time again. Now I have 100% solid proof that this box is doing its "measurements" basically off the screen (pixels!?), not the data points. But while I'm putting screenshots together, here's a little warmup. It's this error I found earlier:
Is it bad that those on-screen measurements are performed on screen data? What is the trade-off: no automatic on-screen measurements, and doing them manually instead?
If he can't see that $400 is unbelievable value for so much 'scope, warts and all, then that's his own problem. It's like buying a Ford Fiesta and then complaining it won't fit a piano in the back or go 200mph.
I thought it was pretty well established that the Rigol operates this way.
I might have accepted this argument if Rigol was not simultaneously deceptive about the capabilities of the instrument.
But what if... Rigol's firmware IS a bug?
Or you could spend 15 seconds loading all 24M points of data from the Rigol onto a PC and then perform the calculations you need.
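Once the raw record is on the PC (e.g. fetched over SCPI/USB, which is omitted here), any measurement can run over every captured sample instead of the 1200 display points. A minimal sketch in pure Python, with a synthetic sine standing in for the downloaded record; the function name is just for illustration, not any scope's actual algorithm:

```python
import math

def frequency_from_zero_crossings(samples, sample_rate):
    """Estimate frequency from rising zero crossings over the whole record."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0.0 <= samples[i]]
    if len(crossings) < 2:
        return None  # not enough cycles captured to estimate a period
    # Average period between the first and last rising crossing.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / sample_rate
    return 1.0 / period

# Synthetic stand-in for a downloaded record: 1 kHz sine sampled at 1 MSa/s.
rate = 1_000_000
record = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(100_000)]

print(round(frequency_from_zero_crossings(record, rate)))  # 1000
```

Averaging over all ~100 captured cycles is exactly the kind of thing the full 24Mpts buffer is good for and the 1200-point display data is not.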
So in short I'd say it's as much about understanding and learning how your instruments work as anything else.
Agree 100%. MrWolf is making mountains out of molehills and pointing the finger at the DS1054Z when most other low-end scopes will probably do similar things if you take them to their limits. If you want that sort of perfection then buy a $40,000 'scope.
Regarding the rise time thing, I agree it would be better if the screen said something like "<400ns", as the Keysight scopes do in similar situations. In fact, out of Statistics mode, the Rigol does present it as "<200ns", so I assume it has simply run out of screen real estate. The Tek, though, is already known to be terrible at this. The Pico doesn't tell you anything at all.
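The bound-versus-value distinction is easy to see: when an edge rises entirely between two adjacent samples, a 10%/90% calculation can only produce an upper bound, which is what a "<400ns" display is honestly admitting. A minimal sketch in pure Python (the helper name and flagging scheme are my own, not any scope's actual firmware):

```python
def rise_time_bound(samples, dt):
    """10%-90% rise time from sampled data, in seconds.

    Returns (value, limited). 'limited' is True when the edge spans at
    most one sample interval, so the true rise time could be anything
    below ~dt and the value is only an upper bound.
    """
    lo, hi = min(samples), max(samples)
    v10 = lo + 0.1 * (hi - lo)
    v90 = lo + 0.9 * (hi - lo)
    i10 = next(i for i, s in enumerate(samples) if s >= v10)
    i90 = next(i for i, s in enumerate(samples) if s >= v90)
    limited = (i90 - i10) <= 1
    return max(i90 - i10, 1) * dt, limited

# A step that rises entirely between two samples taken 4 ns apart.
step = [0.0] * 8 + [1.0] * 8
value, limited = rise_time_bound(step, 4e-9)
print(value, limited)  # 4e-09 True -> an honest display is "<4ns"
```

A slower, properly resolved ramp yields `limited == False`, and only then is the number a real measurement rather than a bound.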
I don't think the sample rate is a lie though. When the scope is stopped, you can zoom in and your measurements will be recalculated on the same data but at a higher sampling rate.
The difference is that the display data is always downsampled to 1200pts (100pts per horizontal division).
This is a big limitation, but understandable given the price class of the instrument.
Just some general info about the Rigol DS1000Z series. As far as I know/have experienced, most calculations & decoding are done with the data that's actually visible on the screen, not with the raw captured data, which can be as big as 24Mpts.
So, what's the difference between the raw waveform data (with sample rates up to 1GHz and a size up to 24Mpts) and the display data?
The difference is that the display data is always downsampled to 1200pts (100pts per horizontal division). Those 1200pts are used for serial decoding, FFT and (I haven't tried it myself yet) probably also for all other measurements.
How is this downsampling done? Only Rigol knows. They claim it's a special (patented/secret?) algorithm.
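Since Rigol doesn't document the algorithm, here is a deliberately naive stand-in (plain stride decimation; an assumption for illustration only, not Rigol's actual method) just to show why measurements taken on 1200 display points can miss what's in the 24Mpts record:

```python
def decimate(samples, target_points=1200):
    """Naive decimation: keep every Nth sample to reach ~target_points."""
    stride = max(1, len(samples) // target_points)
    return samples[::stride]

# Synthetic 24000-sample record: flat baseline with one 3-sample glitch.
record = [0.0] * 24000
record[10001:10004] = [1.0, 1.0, 1.0]

display = decimate(record)  # stride of 20: the glitch samples are skipped

print(len(display))  # 1200
print(max(record))   # 1.0 -- glitch visible in the raw data
print(max(display))  # 0.0 -- glitch gone from the display data
```

A smarter scheme (min/max per bucket, as peak-detect acquisition does) would keep the glitch visible, which is exactly why it matters which data set the automatic measurements read from.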
Actually, they now offer a "memory" source option for the FFT. It becomes as slow as a diseased opossum but actually manages to show something meaningful.
Same could be done with other measurements.
Then you could spend all your time complaining it was too slow, right?
We don't know that with 100% certainty, but that's what we observe, and it makes sense to do it that way.