Rigol DS1000Z series buglist continued (latest: 00.04.04.04.03, 2019-05-30)
frozenfrogz:
That "inverse brightness coagulation" behavior (as per my second post above, and resembling what PP is referring to) is interesting. I would count that as an unexpected rendering issue that does not actually stem from the ADC.
metrologist:
I can report a potentially more benign bug. I enabled the FFT and was setting the Center, Hz/Div, and Offset. I adjusted the Offset and then went to adjust the Center or Hz/Div. With either of those values outlined with the blue box (indicating it was my current parameter selection), the twiddle knob was still adjusting the Offset value ???
I could recover this by selecting different parameters and the scope would release the Offset setting, but it came back again after changing something in the second menu and returning to the first. I reset (Default) the scope and set it up again without an issue, so I did not test the issue further.
I have the latest FW and probably one dot down from the latest hardware.
Porcine Porcupine:
--- Quote from: Fungus on March 04, 2018, 03:53:40 pm ---
--- Quote from: Porcine Porcupine on March 04, 2018, 01:09:02 pm ---One thing that convinced me it's stuck in peak detect mode is that I was able to produce a trace with MATLAB that looks exactly as it should by simply downsampling the raw data to the same number of points displayed on the screen.
--- End quote ---
Correct.
The problem is simply in the number of points you need to sample in your downsampling filter.
In MATLAB there isn't really a problem; you can use 100 points, no problem. It's a powerful PC.
In the Rigol ASIC there may be a much tighter limit for this downsampling. It might only be able to use (e.g.) 4 points (I don't know the exact number, but it will be quite small).
This means that at some input->output frequency ratios you're going to get aliasing, i.e. you'll see two lines instead of one.
In my post in the other thread you can see this happening: the aliasing on the display changes with zoom level.
This isn't a firmware bug, it's a hardware limitation (number of input samples in the downsampling filter).
--- End quote ---
When I downsampled with Matlab, I did it in about the crudest way possible by just throwing samples away without any low-pass filtering. If downsampling were going to cause such aliasing with this signal, don't you think it would have happened then?
It also seems strange to me that noise or anything else in this signal could alias into two lines like that without any points between them.
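For what it's worth, the crude no-filter decimation I did can be sketched in a few lines (Python here rather than MATLAB; the rates and record length are illustrative assumptions, not my actual capture):

```python
import numpy as np

# Crude decimation: keep every Nth sample with no anti-alias filtering,
# mirroring the "throw samples away" MATLAB approach described above.
# (Sample rate and record length are illustrative, not the real capture.)
fs = 1_000_000_000                  # pretend 1 GSa/s raw record
t = np.arange(12_000) / fs          # 12 kpts of raw data
raw = np.sin(2 * np.pi * 1e6 * t)   # a 1 MHz test tone

N = 10                              # 12,000 points -> 1,200 screen points
screen = raw[::N]                   # no low-pass filter at all

print(screen.size)                  # one point per screen column
```

If this kind of decimation were going to alias the signal into two separated lines, you'd expect the crude version to show it at least as badly as any hardware filter would.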
Porcine Porcupine:
--- Quote from: 2N3055 on March 04, 2018, 04:35:58 pm ---@Porcine Porcupine,
With all due respect, you don't seem to really understand how peak detect mode works...
In normal mode running at, let's say, 100 MS/s, the A/D converter discards nine samples and uses only one out of ten..
In peak detect mode, it won't discard those nine samples, but will remember the min/max values and show those two points on screen.
That way, you can detect a 10 ns pulse on a time base where one pixel would mean 1 µs and normally you wouldn't know something happened in the meantime..
So far, so good...
But if you go to a time base where your scope already samples at 1 GS/s (the max rate here), peak detect mode and normal mode ARE THE SAME.... That is how it works... Usually, other scopes have a warning in their manual that peak detect only works on slower time bases....
And the double line can also have a more likely cause: since the A/D converter used actually works by interleaving 4x 250 MS/s converters, if the converters have an offset relative to each other, consecutive samples won't be vertically aligned.. like what you see... If there is a bug, it is more likely in the self-cal procedure...
But it probably is not even that.. I can replicate this only on the two lowest ranges, which are created by software zoom. At 1 mV/div (1X probe) the vertical resolution of the scope is about 40 pixels... and any offset between the A/D converters is multiplied by 5x, making it really visible...
The lowest real vertical range is 5 mV/div...
Just something to think about...
Regards,
Sinisa
--- End quote ---
I haven't been able to find satisfactory documentation about how Rigol's peak detect mode works, but I assumed it does what you described plus a second stage of downsampling. The second stage is needed because there are still too many points to show on the screen.
My thinking is it first stores the result of the downsampling you described in the acquisition memory. Then it downsamples again just like the first time: it divides the points stored in the acquisition memory into the same number of time bins as points shown on the screen, and then alternately selects a maximum or minimum point from each bin. I could be wrong about this, but it's the most sensible way I can think of to do it.
If it does do it in two stages, then peak detect mode would still differ from normal mode, because the second downsampling stage still happens even when there's no first-stage downsampling to do. It seems to me it has to do this second stage to ensure the peak points make it into the screen trace.
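The scheme I'm speculating about is easy to sketch. To be clear, this is only my guess at the algorithm; the bin counts and the alternating max/min rule are my assumptions, not anything from Rigol's documentation:

```python
import numpy as np

def peak_detect(samples, out_points):
    """Guessed peak-detect decimation: split the record into time bins,
    then alternately keep the max or min of each bin (assumed rule)."""
    bins = np.array_split(np.asarray(samples), out_points)
    return np.array([b.max() if i % 2 == 0 else b.min()
                     for i, b in enumerate(bins)])

# A single-sample 5 V spike survives a 12,000 -> 1,200 point decimation,
# which plain keep-every-Nth-sample decimation would usually drop:
x = np.zeros(12_000)
x[5_000] = 5.0
trace = peak_detect(x, 1_200)
print(trace.max())   # prints 5.0 -- the spike is preserved
```

The same function applied a second time (screen-trace buffer down to displayed pixels) would be the second stage I'm describing.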
I don't think this problem is caused by ADC offsets, because I didn't see any obvious sign of that in the raw data before or after downsampling.
The effect I see on my oscilloscope is easily seen at 500 V/div. It's possible your oscilloscope isn't affected by the same problem as mine, and you're seeing a different effect at those low V/div ranges. Maybe what you see is the ADC offset you mentioned.
Edit: It just occurred to me that there is likely a third peak detect mode downsampling stage to go from the screen trace buffer to what is actually shown on the screen. If the screen trace buffer has 1,200 points stored in it, it would have to be downsampled again with the peak detect downsampling algorithm to preserve the peaks on the displayed trace, which doesn't even cover the full 800-pixel width of the screen.
Porcine Porcupine:
I played with my oscilloscope some more and got a couple more results I think are interesting.
I fed it a train of 5-V, 70-ns-wide pulses spaced at 1-ms intervals and set the time base and memory depth appropriately to test peak detect mode. I changed from dots to vectors for this so the pulses are lines instead of hard-to-see dots.
First in normal mode:
As expected the pulses were captured erratically and unreliably in normal mode. Each sweep showed a different number of pulses in different spots.
Then in peak detect mode:
Peak detect mode worked beautifully and did exactly what it's supposed to do. So now I know turning on peak detect mode does something, and the unit isn't simply stuck in peak detect mode.
I'm wondering now if the second stage of peak detect downsampling I speculated about in my last post might be stuck on, meaning that turning peak detect mode on and off only controls the first stage. I know that scenario might sound a little far-fetched, especially since I'm guessing about exactly how peak detect mode works. However, I think it would be consistent with what I'm seeing. The double line with nothing in the middle, and the points all displayed at the extremes of the noise amplitude, looks so consistent with what I would expect from peak detect mode that I'm still not satisfied a peak detect bug isn't somehow behind this.
I also fed a 4-Vpp, 1-MHz sine wave into the oscilloscope while in normal mode. With the time base set very slow compared to the signal's period, this is what I got with dots:
It looks like the envelope of the signal, which is what I would expect to see under these conditions with peak detect mode enabled. Nothing changed on the screen when I varied the frequency around 1 MHz, so I don't think this effect is caused by aliasing. As in the other double-line examples, displaying vectors would turn it into the fog you'd expect from the lines connecting the two rows of points.
Could this be a more extreme example of this double line issue?
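If the alternating min/max selection I guessed at earlier really is running, the envelope follows directly: with many signal cycles per screen bin, every retained point lands at roughly +2 V or -2 V and nothing lands in between. A quick sketch (all parameters are assumed for illustration, not measured from the scope):

```python
import numpy as np

# Assumed setup: a 4 Vpp, 1 MHz sine sampled at 100 MSa/s, decimated to
# 1,200 screen points with the alternating max/min rule I guessed at.
# Each bin of 1,000 samples spans 10 full cycles, so every kept point
# sits at an extreme of the waveform -- the "envelope"/double line.
fs = 100_000_000
t = np.arange(1_200_000) / fs
sig = 2.0 * np.sin(2 * np.pi * 1e6 * t)

bins = np.array_split(sig, 1_200)
pts = np.array([b.max() if i % 2 == 0 else b.min()
                for i, b in enumerate(bins)])

print(np.abs(pts).min())   # no point falls between the two lines
```

Varying the input frequency slightly doesn't change this picture as long as each bin still covers at least one full cycle, which matches what I saw on screen.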