We keep hearing the same story:
- Rigol has the biggest memory depth (a modern, up-to-date size), but the scope is slow and the software is not stable.
- Agilent has a very small memory depth (an outdated size), but the scope is fast and the software is stable.
But isn't it precisely the bigger memory depth that makes things slower? So how can we compare the two honestly?
What if we really could compare apples with apples?
If everything about the Agilent is great (fast, stable) except the memory depth, why doesn't Agilent just wake up and build a new scope that keeps everything great about today's model and simply adds more memory?
Then we could finally compare apples with apples!
Then we would no longer have an excuse to strike the Agilent from the wishlist because its memory depth is too low.
Then we could finally check if the Agilent is still as fast when the full memory depth is used.
Then we could finally check if the Agilent is still as stable when the full memory depth is used.
Then we could finally check if the Agilent still reaches its advertised update rate of 1,000,000 waveforms/s.
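To put a number on that last point, here is a back-of-the-envelope sketch in Python. Every figure in it (the 2 GSa/s sample rate, the 1 ns-per-point processing cost, the record lengths) is an illustrative assumption, not a measured spec of either scope; the point is simply that every trigger has to fill the record, and the scope then has to process it, so deeper memory mechanically caps the update rate.

```python
# Back-of-the-envelope: how record length caps the waveform update rate.
# All numbers are illustrative assumptions, not manufacturer specs.

def max_update_rate(sample_rate_sa_s: float, record_len_pts: float,
                    processing_s_per_pt: float) -> float:
    """Upper bound on waveforms/s: each acquisition must at least fill
    the record (record_len / sample_rate) and then be post-processed."""
    acquire_s = record_len_pts / sample_rate_sa_s
    process_s = record_len_pts * processing_s_per_pt
    return 1.0 / (acquire_s + process_s)

SAMPLE_RATE = 2e9   # 2 GSa/s (assumed)
PROCESSING = 1e-9   # 1 ns of post-processing per point (assumed)

for pts in (1e3, 100e3, 14e6, 140e6):
    rate = max_update_rate(SAMPLE_RATE, pts, PROCESSING)
    print(f"{pts:>12,.0f} pts -> at most {rate:>10,.0f} wfms/s")
```

Even with zero processing overhead, merely filling a 140 Mpt record at 2 GSa/s takes 70 ms, so about 14 waveforms/s is the hard ceiling at full depth; 1M waveforms/s is only reachable with short records. The honest question is how gracefully each scope degrades in between.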
Now it's Rigol's turn
If everything about the Rigol is great (cheap, big memory depth, low power) except that the scope is slow and becomes unstable when loaded too heavily, why doesn't Rigol just wake up, implement the CPU-heavy processing in hardware instead of software (throw in another FPGA; they don't cost a fortune these days, and the DS4000 series isn't exactly cheap in the first place), and hire some more Chinese software engineers to fix these bugs once and for all? The DS4000 series costs enough that the product should actually do what it is supposed to do.
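To make "the heavy CPU load stuff" concrete: one classic per-sample job is peak-detect (min/max) decimation, squeezing the whole record down to one min/max pair per screen column so that narrow glitches stay visible. Here is a minimal sketch, assuming a 140 Mpt record and a 1000-column display (both just illustrative round numbers); it is exactly the kind of single-pass loop that pipelined FPGA logic handles at full ADC rate while a small embedded CPU grinds.

```python
# Min/max (peak-detect) decimation: the flavour of per-sample work a scope
# can offload to an FPGA. Sizes below are illustrative assumptions.
import numpy as np

def minmax_decimate(samples: np.ndarray, columns: int):
    """Reduce the record to one (min, max) pair per screen column,
    so no glitch narrower than a pixel gets averaged away."""
    usable = (len(samples) // columns) * columns
    blocks = samples[:usable].reshape(columns, -1)
    return blocks.min(axis=1), blocks.max(axis=1)

# 140 Mpts down to 1000 columns: ~280 million compares per channel per
# screen update. Trivial for streaming FPGA logic, painful in firmware.
record = np.random.randint(0, 256, size=140_000_000, dtype=np.uint8)
lo, hi = minmax_decimate(record, 1000)
print(lo.shape, hi.shape)  # (1000,) (1000,)
```

An FPGA can keep a running min and max per column and update them as samples stream out of the ADC; running the same loop in firmware over deep memory after every trigger is plausibly where the sluggishness comes from.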
Things look easy in an ideal world!