I've been reading all I could find recently about the Agilent X-Series, trying to determine the cause of the behavior evidenced in kg4arn's images - which, it seems to me, might indicate what I would consider a design flaw in the X-Series.
For those new to the thread - it has to do with the difference between the following two images:
Firstly, I'd like to point out that it's quite difficult to know the exact sample lengths used with various combinations of channels, running modes, etc., in the 3000 X-Series, given the very limited information Agilent provides - and the strange presentation of some of it. Through all my digging, I was unable to come up with a single table outlining the exact record lengths and when they're used.
Given that they publish both this specification:
"Maximum duration of time captured at highest sampling rate (all analog channels) 250us"
...which, at a 4GSa/s rate, works out to 1M samples (1MB of memory, assuming one byte per sample) - so it would tend to indicate that a maximum of 1MB can be used by any single channel in any mode.
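The arithmetic behind that 1MB figure is trivial but worth writing down, since everything later hinges on it (the one-byte-per-sample assumption corresponds to an 8-bit ADC, which is my assumption, not something stated in the quoted spec):

```python
# Sanity check on the published spec: 250 us of capture at 4 GSa/s.
# Assumes one byte per sample (8-bit ADC) - my assumption, not Agilent's words.
sample_rate = 4e9        # Sa/s
capture_time = 250e-6    # s
samples = int(sample_rate * capture_time)
print(samples)           # -> 1000000, i.e. 1 MB at 1 byte per sample
```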
and this info:
"When running (versus taking a single acquisition), the memory is divided in half. This lets the acquisition system acquire one record while processing the previous acquisition, dramatically improving the number of waveforms per second processed by the oscilloscope."
...it's difficult to know for sure whether one channel has access to the entire memory pool when running on its own or not. In other words, when in Normal mode and running a single channel, does the scope divide the entire 2MB in half, giving one channel a 1MB record length - or does it divide the 1MB per channel-pair in half, giving the single channel just a 500kB record length?
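To make the two interpretations concrete (the 2MB total figure is my inference from the per-channel-pair spec, not a published number):

```python
# Two readings of "the memory is divided in half" for a single running channel.
TOTAL_MEMORY = 2_000_000       # 2 MB total pool (inferred, not published)
PER_PAIR_MEMORY = 1_000_000    # 1 MB per channel pair (from the 250 us spec)

# Interpretation A: the whole pool is halved for ping-pong acquisition.
record_if_whole_pool_halved = TOTAL_MEMORY // 2      # 1,000,000 points

# Interpretation B: only the channel-pair's share is halved.
record_if_pair_share_halved = PER_PAIR_MEMORY // 2   # 500,000 points

print(record_if_whole_pool_halved, record_if_pair_share_halved)
```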
I mention all this mainly as a caveat since I'm not completely clear as to whether kg4arn's images represent 1MB or 500kB of sample points - which would affect the percentage of error.
Secondly, as evidenced by the images posted by Wuerstchenhund and myself, I think it's pretty clear that what we see in kg4arn's images is NOT an interpolation problem - since merely changing the interpolation from sin(x)/x to linear (or to no interpolation, for that matter) STILL results in a clearly symmetrical wave-shape with equidistantly spaced sample points. No, I think what is visible are sampling 'errors', if you will, and it seems to me that they must somehow arise from Agilent's practice of decimation.
Since both the Rigol and LeCroy DSOs produce fairly similar, symmetrical renderings of the test waveforms - and both DSOs actually slow the ADC clock to reduce the sampling rate (while maintaining record lengths) - it seems reasonable to assume that Agilent's practice of 'throwing away' samples (decimation) in order to simulate a slower clock rate might be leading to what can only be described as missing and/or misaligned sample points.
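To be fair to decimation as a technique: done ideally, it should be harmless. A minimal sketch (the 4GSa/s and decimate-by-40 figures are from this discussion; the 10MHz test tone is an arbitrary choice of mine) shows that keeping every Nth sample preserves uniform spacing, exactly as a slower ADC clock would:

```python
import numpy as np

F_ADC = 4_000_000_000    # the X-Series ADC always runs at 4 GSa/s
DECIMATION = 40          # keep 1 of every 40 samples -> effective 100 MSa/s

t = np.arange(4000) / F_ADC
wave = np.sin(2 * np.pi * 10e6 * t)   # arbitrary 10 MHz test tone

# Ideal decimation keeps every Nth point, so sample spacing stays perfectly
# uniform and the decimated waveform is still symmetrical.
decimated = wave[::DECIMATION]
t_dec = t[::DECIMATION]

# A scope that slows its ADC clock (the Rigol/LeCroy approach) would sample
# these very same uniform instants directly - mathematically equivalent,
# UNLESS the decimator drops or misaligns points.
dt = np.diff(t_dec)
print(len(decimated), np.allclose(dt, dt[0]))   # -> 100 True
```

Which is why, if the asymmetry in kg4arn's images is real, the implication is that the ASIC's decimation is not this ideal pick-every-Nth operation.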
If all of this is true, it seems to me that Agilent's decision to optimize its MegaZOOM ASIC for the fastest waveform update rates - at the expense of signal fidelity at slower sample 'speeds' (when unable to reduce clock rates, the ASIC decimates) - was a mistake. Any further information which might shed light on this issue - from other X-Series owners or Agilent techs - would be welcome.
Edit: I might also point out that in the two test images, the Rigol is actually sampling at half the rate of the Agilent (100MSa/s), making its errors seem even more severe. If the Agilent record length in the above image is 1MB, then the ASIC is throwing away 39 out of every 40 samples; if it's 500kB, then it's throwing away 79 out of every 80 samples.
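For anyone checking those discard ratios: they fall out of a fixed on-screen capture window. The 10ms window below is my assumption (e.g. 1ms/div across 10 divisions) - it happens to be the value that reproduces both figures:

```python
# Deriving the discard ratios from record length and capture window.
ADC_RATE = 4e9           # Sa/s, ADC always at full speed
CAPTURE_WINDOW = 10e-3   # s on screen - assumed, e.g. 1 ms/div x 10 div

for points in (1_000_000, 500_000):
    effective_rate = points / CAPTURE_WINDOW   # Sa/s actually stored
    factor = ADC_RATE / effective_rate         # ADC samples per stored sample
    print(f"{points} pts: keep 1 of {factor:.0f}, discard {factor - 1:.0f}")
    # -> 1000000 pts: keep 1 of 40, discard 39
    # ->  500000 pts: keep 1 of 80, discard 79
```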
Edit2: Thinking about this further, I'm wondering if the anomalies in the Agilent's sampled waveform might be produced by its practice of swapping banks of acquisition memory while running in Normal mode, then apparently combining both banks together somehow when Stopped.