Elsewhere in the manual it explains that they have a pre-trigger buffer, which is a FIFO that they fill before looking for the trigger event. Then they have the post-trigger buffer. They always fill at the maximum sample rate and reduce it by decimation for slower sampling rates.
For segmented memory they treat each segment in the same way (pre-trigger and post-trigger buffer). It lists the reset time after each segment as 1 microsecond for the 3000X series, so the update time will be the segment capture time (1 microsecond for 2 kB at 2 GS/s) plus an extra microsecond, which presumably reduces the wfm/s rate from around 1M to 500k.
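As a rough sanity check of that arithmetic, here is a sketch of the update-rate calculation (the 2 kpts segment size and 1 microsecond re-arm time are the figures quoted above, not measured values):

```python
def max_waveform_rate(samples_per_segment, sample_rate_hz, rearm_time_s):
    """Upper bound on segmented-capture update rate: each cycle is
    the segment capture time plus the re-arm (reset) time."""
    capture_time = samples_per_segment / sample_rate_hz
    return 1.0 / (capture_time + rearm_time_s)

# 2 kpts per segment at 2 GS/s is 1 us of capture; adding the quoted
# 1 us reset time halves the rate from ~1M to ~500k wfm/s.
rate = max_waveform_rate(2_000, 2e9, 1e-6)
print(f"{rate:,.0f} wfm/s")  # -> 500,000 wfm/s
```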
In some ways, I would say yes. But aside from the fact that (if an Agilent owner) I would just prefer to be able to turn this feature on and off
The problem with 'automatic' features in complex technology is that if they're not extremely well-documented (in terms of the ramifications on all related sub-systems), the trade-offs are often not clear.
For example, what does the ASIC do exactly when I want my trigger position 5 divisions to the left of the screen edge?
Or what does it do exactly with segments: does it capture them at the fastest speed possible while cutting down the sample length - or does it maintain the sample length while reducing its update rate?
Why?
Please cite a case where you would need to sacrifice update rate for deep memory in run mode.
Agreed. But in this case I am claiming that there is no downside to what Agilent have done here.
It must capture them at the maximum memory depth, because these are effectively multiple "stop" captures, not simply run mode screen updating.
Agilent know you want to capture and investigate these captured segmented data, so they use all the memory they can. Update rate might be dependent upon trigger type.
Never - assuming the technology worked 100% perfectly 100% of the time. And I know you don't think this kind of result is important, but it is to me. Again, it's fine if you know that this is possible given the Agilent technology and certain settings - and can shut it off. Otherwise it's problematic.
coupled with the fact that it imposes some conditions on you (such as constant interpolation)
The interpolation is not really related to the automatic memory depth/update rate thing we are talking about; it's simply a separate decision Agilent made.
Just because they both can't be changed by the user does not mean they are related.
So for example, on an Agilent 3000X, given 1M wfrm/s @ 10ns/div, if I move my trigger position to -10 divs, does my wfrm/s drop to 750k - or 500k - or stay the same?
It always captures the same amount of memory so the trigger position shouldn't make any difference. It might cause it to vary the relative sizes of the pre-trigger and post-trigger buffers.
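If the total run-mode buffer really is fixed, moving the trigger point only re-partitions it, which is why the update rate shouldn't change. A sketch of that bookkeeping (the buffer size here is illustrative, not Agilent's actual figure):

```python
def split_buffer(total_points, trigger_fraction):
    """Fixed total acquisition length; the trigger position only
    decides how much of it is pre-trigger vs post-trigger."""
    pre = round(total_points * trigger_fraction)
    return pre, total_points - pre

for frac in (0.1, 0.5, 0.9):            # trigger near left, centre, right
    pre, post = split_buffer(2_000, frac)
    print(frac, pre, post, pre + post)  # total is always 2000
```

Since the total is constant, the capture time per waveform (and hence the wfm/s rate) is the same wherever the trigger sits.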
The pan and zoom options are only available on the last captured trace and presumably this is captured after you press stop and uses the full memory depth (i.e. not split into two buffers).
I think the idea that it optimizes its memory use for rapid capture is misleading - it just splits the memory in two when in run mode - there is no optimization for different triggers or time bases or anything like that.
Never - assuming the technology worked 100% perfectly 100% of the time. And I know you don't think this kind of result is important, but it is to me. Again, it's fine if you know that this is possible given the Agilent technology and certain settings - and can shut it off. Otherwise it's problematic.

I was talking about things apart from the dot/vector & interpolation issue. I do not disagree with you here.
I think this is a rather different issue to whether or not different interpolations can be turned off.
It always captures the same amount of memory so the trigger position shouldn't make any difference. It might cause it to vary the relative sizes of the pre-trigger and post-trigger buffers.
The pan and zoom options are only available on the last captured trace and presumably this is captured after you press stop and uses the full memory depth (i.e. not split into two buffers).
I think the idea that it optimizes its memory use for rapid capture is misleading - it just splits the memory in two when in run mode - there is no optimization for different triggers or time bases or anything like that.
Sorry, jpb, but I get the impression you haven't read the other posts in the thread - and perhaps that's why you think that Agilent 'define this quite well in their manuals.' Of course it optimizes the memory - it's not using the full size of the sample length split in half - this is obvious by simple math. Please read the previous post(s) describing the impossibility of capturing the full half-length given the wfrm/s - and look at the spreadsheet above. Then we can continue the discussion if you like.
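The "simple math" referred to here can be made explicit (the 2 Mpts depth, 2 GS/s rate and ~1M wfm/s figures are the ones used earlier in the thread for the 3000X):

```python
sample_rate = 2e9     # S/s
memory_depth = 2e6    # points (full depth)
update_rate = 1e6     # claimed wfm/s in run mode

# At 1M wfm/s each acquisition cycle lasts at most 1 us, so the most
# samples the scope could possibly store per waveform is:
max_samples = sample_rate / update_rate
print(max_samples)            # 2000.0 points

# Half the full memory would be 1 Mpts, which would take:
half_depth_time = (memory_depth / 2) / sample_rate
print(half_depth_time)        # 0.0005 s, i.e. 500 us per waveform

# ...capping the rate at 1/half_depth_time = 2000 wfm/s, nowhere
# near 1M - so it cannot be capturing the full half-length in run mode.
print(1 / half_depth_time)
```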
Presumably it doesn't capture memory that is not visible.
Those sample points are probably altered by the anti-alias algorithm. I did a test with my Agilent MSO6034A, which has an option to turn anti-aliasing on and off. It seems that the sine becomes "noisy" when anti-aliasing is turned on. Too bad they have removed the option to turn anti-aliasing off in the newer MegaZoom generation.
But I must say that I have never noticed it in practice, nor has any measurement gone wrong in the past 8 years or so that I have used that Agilent. The working anti-alias has been quite a good trade-off.
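One plausible mechanism for the "noisy" look - purely a sketch, not Agilent's actual algorithm - is min/max decimation: instead of keeping one sample per display column, the scope keeps each column's minimum and maximum, so the trace spans the full amplitude range within each column and a clean sine can look thick or noisy, while naive decimation stays thin but aliases:

```python
import math

def decimate_naive(samples, n):
    """Keep every n-th sample: cheap, but content above the reduced
    Nyquist rate aliases into a false low-frequency waveform."""
    return samples[::n]

def decimate_minmax(samples, n):
    """Keep the min and max of each bucket of n samples: no aliasing,
    but the trace spans each bucket's full amplitude range."""
    out = []
    for i in range(0, len(samples) - n + 1, n):
        bucket = samples[i:i + n]
        out.append((min(bucket), max(bucket)))
    return out

# A 9.9 kHz sine sampled at 1 MS/s, decimated by 100 (close to one
# cycle per bucket, so the reduced rate badly undersamples it).
fs, f = 1e6, 9_900.0
sig = [math.sin(2 * math.pi * f * t / fs) for t in range(10_000)]

naive = decimate_naive(sig, 100)
print(round(naive[0], 2), round(naive[1], 2), round(naive[2], 2))
# drifts slowly near zero: a false low-frequency sine (aliasing)

mm = decimate_minmax(sig, 100)
lo = min(m for m, _ in mm)
hi = max(M for _, M in mm)
print(round(lo, 2), round(hi, 2))  # close to -1.0 and 1.0: true envelope
```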
Regards,
Janne
Presumably it doesn't capture memory that is not visible.
Presumed by who? If you presume ALL DSOs only capture sample memory that is visible on the display at all times you're terribly wrong. They don't all operate like your WaveJet.
And I know what's written in the manual and extra Agilent literature quite well - but I don't feel like arguing about semantics; I'm more interested in the reasons behind my previous post.
Almost all the points of discussion so far seem to involve data that is stored (such as the interpolation used on points that are zoomed in on), whereas the high waveforms-per-second data is captured differently on the Agilent - only one set is captured in detail, presumably when you press the stop button. So while it is running it is capturing the minimum of data at a fast rate; when you stop it to look at the results it does a completely different sort of capture, which will be much slower and more detailed, but as there is only one of these it doesn't matter (assuming the time base is fast enough for such a capture to take well under a second).
Greg, any final thoughts on the LA module and decode options? I think a number of people were curious to hear information/opinions about that.
Not really - it worked like any other MSO LA, to be honest. I didn't use it a ton as I was waiting for the firmware update to enable CAN decodes, but it was fine - basically what I wanted spec-wise, and it seemed to function as such. It's a plus that it's there, it works well, and it includes all the decodes with the module purchase. I'd recommend it for that. I am just frustrated that the actual behaviour doesn't conform to the specs published in the documentation at the time of my purchase decision.
Well, I didn't expect that you would return your scope... Which scope are you going to buy now? I would wait for the DS2000-S with the function generator...
Main thing is I want 4 channels, intensity grading with an MSO option - so we'll see.