REVIEW - Rigol DS2072 - First Impressions of the DS2000 series from Rigol
marmad:
--- Quote from: Galaxyrise on July 12, 2013, 05:51:58 pm ---My confusion is with your claim that the Rigol could theoretically anti-alias without doing the random decimation before storing samples into the waveform. If it can sample fast enough to store that sine wave into sample memory, it already doesn't alias. (If that sine wave is in sample memory, the aliased low frequency sine wave will never be what's on the screen.)
--- End quote ---
It's about speed!
At 5ms/div, with a 56M sample length, the Rigol is sampling at 500MSa/s. With those settings, the interface is slow! Do you know why? Not because of the sampling time - it's capturing ~6 wfrm/s (compared to ~14 wfrm/s with a 14k sample length). It's because the Rigol has to reduce (decimate) those 56 million sample bytes to the 1400 bytes of display memory - and that takes a hell of a long time.
OTOH, let's say the Rigol only has to decimate 1/4000th of that amount of memory (14k) down to the 1400 bytes - do you think it might be faster? You can test this quite easily: just see how responsive the scope is at 5ms/div with a 14k sample length - and then with a 56M sample length.
So, if the DSO just grabs every 4000th byte of sample memory for decimation, things will certainly speed up, but the sample rate will then become equivalent to 125kSa/s, and all of a sudden aliasing will be a problem. But not if it does RANDOM decimation to get those 14k samples - varying which sample it grabs within each block of 4000, anywhere from the 1st to the 4000th.
So all of a sudden, you have a DSO working faster at a lower sample rate, but without aliasing.
How much faster? Test the responsiveness of the DSO with a 56M sample length at 5ms/div (it is decimating ALL 56M of 56M), and then again at 500ns/div (it is decimating only 14k of 56M).
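To make the idea concrete, here's a minimal Python sketch (invented buffer length and frequencies, not Rigol's actual implementation) of the two ways of pulling 14k points out of the capture buffer: every 4000th sample versus one randomly chosen sample out of each block of 4000.
Code: (Python sketch)
import numpy as np

fs = 500e6           # acquisition rate at 5ms/div with 56M memory (500 MSa/s)
N = 4000             # decimation factor: 500 MSa/s -> effective 125 kSa/s
n_samples = 560_000  # shortened stand-in for the 56M-point capture buffer

f_in = 10.01e6       # a tone far above the 62.5 kHz Nyquist of the decimated stream
t = np.arange(n_samples) / fs
buf = np.sin(2 * np.pi * f_in * t)

# Regular decimation: every Nth sample lands on a fixed 125 kSa/s grid,
# so the 10.01 MHz tone folds down to a clean (but false) low-frequency sine.
regular = buf[::N]

# Random decimation: one sample picked at a random offset inside each block
# of N. The effective sample instants are irregular, so no stable beat
# frequency can form - the alias breaks up into noise instead.
rng = np.random.default_rng(0)
block_starts = np.arange(0, n_samples, N)
random_pick = buf[block_starts + rng.integers(0, N, size=block_starts.size)]

Both versions read only one point per block - which is where the speed win comes from - but only the random pick also suppresses the alias.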
zibadun:
--- Quote from: marmad on July 12, 2013, 05:01:52 pm ---
--- Quote from: Galaxyrise on July 12, 2013, 04:36:38 pm ---
--- Quote ---The Rigol does NOT have to do this during sampling; it could sample at full speed (2GSa/s) into sample memory - then do random decimation (to simulate the current sampling rate) to display memory
--- End quote ---
I translate that into:
1) Sample into sample buffer at the fastest rate memory depth + time base allows (up to 2GSa/s, ofc)
2) decimate to screen
The Rigol does that now. And we agree it doesn't fix acquisition aliasing. So that leaves me suspecting I'm missing something in your description of what you'd like the Rigol to do.
--- End quote ---
No, the Rigol doesn't do that now.
It certainly doesn't do RANDOM decimation, and I don't think it does decimation to simulate lower sampling frequencies either.
I've written this all before:
Random decimation (stochastic sampling) is the key to anti-aliasing. Beat frequencies (aliases) are formed by regular time interval sampling.
In the image, the black crosses and dotted line show regular decimation forming a false, lower-frequency alias of the true signal. The red crosses and line show irregular (random) decimation NOT forming an alias.
--- End quote ---
Rigol would have to re-architect the entire signal processing chain to do what you show. They would need to make sure that real signals are not affected by your special method and verify that interpolation still works correctly. The cure would be worse than the disease. It's far easier to just use the correct sampling rate (i.e. 2.5x the highest frequency component).
I use SDRs daily and haven't heard of anyone doing stochastic sampling. This is not a common technique. There is a better way to downsample, using CIC and FIR decimating filters, which take care of aliasing. What you propose is a gimmick. There is one Agilent paper about it and that's it, and it doesn't even mean they use this for anything but translating sample memory to display memory...
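For contrast, a sketch (illustrative rates only, not how any particular scope implements it) of the filter-then-decimate approach: SciPy's decimate applies a low-pass FIR before each downsampling step, so out-of-band content is attenuated instead of folding down as an alias. In hardware this would typically be CIC stages followed by a compensating FIR.
Code: (Python sketch)
import numpy as np
from scipy.signal import decimate

fs = 2e9                              # assumed full acquisition rate
t = np.arange(2_000_000) / fs
x = np.sin(2 * np.pi * 10.01e6 * t)   # tone that would alias if naively decimated

# Split the large ratio into stages so each FIR stays short;
# 8 * 10 * 10 * 5 = 4000 overall, i.e. 2 GSa/s -> 500 kSa/s.
y = x
for q in (8, 10, 10, 5):
    y = decimate(y, q, ftype='fir', zero_phase=True)

# The 10.01 MHz tone is removed by the anti-alias filters rather than
# showing up as a false low-frequency beat in the decimated stream.
print(len(y), float(np.abs(y).max()))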
back to my hole :)
marmad:
--- Quote from: zibadun on July 12, 2013, 08:57:04 pm ---Rigol would have to re-architect the entire signal processing chain to do what you show. They would need to make sure that real signals are not affected by your special method and verify that interpolation still works correctly. The cure would be worse than the disease. It's far easier to just use the correct sampling rate (i.e. 2.5x the highest frequency component).
--- End quote ---
"My" special method? You give me too much credit; Agilent seems to have been using it for +20 years. ;) Perhaps Rigol could just stop pretending that they offer anti-aliasing that works?
--- Quote ---I use SDRs daily and haven't heard anyone doing the stochastic sampling. This is not a common technique.
--- End quote ---
Sorry, but how would you know how common it is among DSO manufacturers? You didn't even know what it was, and that Agilent used it, up until a few days ago. And Agilent appears to have been using it since at least 1992 - apparently it works quite effectively.
--- Quote ---There is a better way to down sample using CIC and FIR decimating filters which take care of aliasing.
--- End quote ---
Better in what way? Everything comes with a price. Agilent's random decimation technique appears to be quick, allowing them to achieve their fast wfrm/s rates.
--- Quote ---What you propose is a gimmick. There is one Agilent paper about it and that's it...
--- End quote ---
It's not a gimmick; there is tons of literature about stochastic sampling in general - and math to back up the fact that it eliminates aliasing (while introducing noise - which is the price for this technique, as mentioned above). To me, it seems fairly simple to understand how it works.
--- Quote ---.. and it doesn't even mean they use this for anything but translating sample to display memory...
--- End quote ---
Actually, it's quite obvious from the images posted by others (and re-posted by me) that they've used random decimation while capturing samples at lower sample rates with anti-aliasing in effect: the artifacts are quite visible at the sample level.
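One way to see the trade-off at the sample level (a sketch with invented parameters): compare the spectra of the two decimation schemes. Regular decimation folds the out-of-band tone into a single clean (false) line, while the random pick smears the same energy into a raised noise floor - the 'noise price' mentioned above.
Code: (Python sketch)
import numpy as np

fs, N = 1e9, 1000                 # capture at 1 GSa/s, decimate by 1000 -> 1 MSa/s
n_blocks = 4096
t = np.arange(n_blocks * N) / fs
buf = np.sin(2 * np.pi * 10.01e6 * t)   # decimated Nyquist is only 500 kHz

regular = buf[::N]                # fixed grid: folds to a clean 10 kHz line
rng = np.random.default_rng(1)
starts = np.arange(0, buf.size, N)
dithered = buf[starts + rng.integers(0, N, size=n_blocks)]   # one random pick per block

win = np.hanning(n_blocks)
for name, y in (("regular ", regular), ("dithered", dithered)):
    spec = np.abs(np.fft.rfft(y * win))
    # One dominant line -> large peak-to-mean ratio; energy smeared
    # into broadband noise -> much smaller ratio.
    print(name, round(float(spec.max() / spec.mean()), 1))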
marmad:
--- Quote from: zibadun on July 12, 2013, 08:57:04 pm ---I use SDRs daily and haven't heard of anyone doing stochastic sampling. This is not a common technique. There is a better way to downsample, using CIC and FIR decimating filters, which take care of aliasing. What you propose is a gimmick. There is one Agilent paper about it and that's it, and it doesn't even mean they use this for anything but translating sample memory to display memory...
--- End quote ---
I found Agilent's Patent for the (original) technique - filed in 1991, and invented by Matthew S. Holcomb of the Hewlett-Packard Company. Pretty interesting... I've attached a few images from the Patent here.
Edit: Here is also another published paper about the technique.
From the paper:
"Instead of storing every Nth digitized point, the decimator can be designed to randomly select one out of every N points for storage. In the case of the 10.01-MHz input, the points placed in memory are points randomly selected from the ten cycles of the input that occur in every 1-s interval. This random sample selection technique effectively dithers the acquisition clock during the acquisition and prevents a beat frequency from developing.
This intra-acquisition dithering technique has been used throughout the HP546XX oscilloscope product line and again in the HP54645A/D products. The effect it has on aliasing is dramatic. Fig. 7a shows the aliased 10-kHz sine wave that is produced when a 10.01-MHz sine wave is sampled at 1 MSa/s. Fig. 7b shows the same display using the dithering process just described. The resulting display is a fuzzy band much like what would be seem on an analog oscilloscope, with all signs of an aliased waveform removed."
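The numbers in that passage are just beat-frequency arithmetic: after sampling, a tone shows up at its distance to the nearest integer multiple of the sample rate. A tiny sketch (illustrative, not from the paper):
Code: (Python sketch)
def alias_frequency(f_in: float, fs: float) -> float:
    """Apparent frequency of a sampled tone, folded into [0, fs/2]."""
    f = f_in % fs
    return min(f, fs - f)

# 10.01 MHz sampled at 1 MSa/s: the tone is 10 kHz away from 10 x 1 MHz,
# so the aliased display shows a 10 kHz sine.
print(alias_frequency(10.01e6, 1e6))   # 10000.0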
Edit2 @zibadun: So... still think 'my special method' is a gimmick? :) Apparently, it works quite well at eliminating aliases, doesn't affect interpolation, and the reason there isn't much information about it in DSO literature is that HP patented it and Agilent doesn't seem to want to talk about it - instead using terminology like the following from the 5000/6000/7000 Series Oscilloscopes User’s Guide:
"At slower sweep speeds, the sample rate is reduced and a proprietary display algorithm is used to minimize the likelihood of aliasing."
GermanMarkus:
Hello everyone - this thread is huge, but marvellous stuff here! Thanks to all contributors - that's also the reason why I'm now a proud owner of a DS2072 :)
So - my only small problem with the great DS2072 is a mysterious issue when serial-decoding a well-known 57600 baud serial datastream. The decode showed some wrong characters, so I thought maybe the oscillator of the self-made sending microcontroller device was out of spec, and I tried the "user selectable" baud rate on the DS2072, at first with the same value, 57600. And - I don't know why - with the user-defined baud rate set to exactly the same 57600, the decoding was perfect :)! So there seems to be an internal difference between the STANDARD 57600 baud decoding and the USER-DEFINED 57600 baud decoding in the DS2072. I then tuned the user-defined baud rate down to approx. 56200 baud, and at that rate the decoding showed exactly the same incorrect results as the STANDARD 57600 baud mode. Does anybody have any idea, or could anyone verify this issue on his Rigol DSO?
BTW - the sending baud rate is fine and well within spec for 57600 baud.
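For what it's worth, the timing arithmetic for that baud-rate difference looks like this (rough sketch; the 56200 figure is taken from the observation above, and the half-bit limit is the generic UART rule of thumb, not a Rigol spec):
Code: (Python sketch)
def drift_at_last_bit(tx_baud: float, rx_baud: float, bits: int = 10) -> float:
    """Sampling-point drift at the last bit of a frame, in bit periods
    (bits=10 covers start + 8 data + stop for 8N1). Decoding gets
    unreliable as this approaches 0.5."""
    return abs(rx_baud - tx_baud) / tx_baud * bits

print(drift_at_last_bit(57600, 56200))  # ~0.24 bit - consistent with decode errors
print(drift_at_last_bit(57600, 57600))  # 0.0 bit - the user-defined 57600 case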
I'm using the latest FW 00.01.01.00.02 .
Thanks in advance! Markus