Moderately annoying since it was one of the things I was looking at when I was evaluating.
OK, this table is for 1kpoints per channel.
...
Now what is it like for Auto record length?
I'd be curious to see the auto record length chart, too. Please post a link or the info itself if anyone has it.
The DS2000 must be better, since it has two FPGAs.
We would need to see the design sources to judge which DSO is better (from an FPGA/design point of view).
BTW, Greg, I suspect you might be able to turn that 'annoyance' into some kind of tangible 'settlement' from Instek (while pointing out that even their FW Help information was misleading).
To clarify:
Ehm, these 656600000 wfms/s at 20 ns/DIV on the Rigol, where are they coming from?
On the DSOX2002A you get up to 50 000 wfms/s × 50 kpoints = 2.5×10^9 =
2 500 000 000 total samples captured per second.
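The throughput arithmetic above can be sanity-checked in a couple of lines (a sketch using the numbers quoted in the post, not figures from any official spec):

```python
# Throughput check: claimed update rate times record length per waveform.
update_rate = 50_000   # wfms/s claimed for the DSOX2002A in this thread
record_len = 50_000    # points per waveform (50 kpoints)

samples_per_second = update_rate * record_len
print(samples_per_second)  # 2500000000, i.e. 2.5e9 samples captured per second
```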
Well, we were discussing what 2 FPGAs vs. 1 FPGA might mean in terms of acquisition throughput - not really ASICs. But yes, the Agilent is the fastest, although I've never seen any published numbers yet for the wfrm/s when using the 1M (or is it 500k?) added memory.
For a good example, what would the Agilent 3000 X-Series do? At 10ns/div, it's supposed to do 1,030,000 wfrm/s. But it can't possibly be acquiring the fixed memory size (1M) - it's impossible because (1 / 4G[Sa/s]) * 1M[Pts] * 1.03M[wfrms] = 257.5 seconds. In fact, it must only be acquiring a tiny subset of the full memory (possibly just the amount of samples for the display) - and only capturing the full length when STOP is pushed.
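The impossibility argument above is just dimensional bookkeeping; here it is spelled out (a sketch with the post's numbers, not a statement about how the scope actually works):

```python
# Feasibility check: can a scope refill its full 1 Mpt record 1.03 million
# times per second while sampling at 4 GSa/s? (Numbers from the post.)
sample_rate = 4e9     # Sa/s
record_len = 1e6      # points per acquisition
update_rate = 1.03e6  # claimed wfms/s at 10 ns/div

# Seconds of pure sampling time needed per second of wall-clock time:
busy = (1 / sample_rate) * record_len * update_rate
print(busy)  # 257.5 -> would need 257.5 s of sampling per second: impossible
```

Since the result is far greater than 1, the full record cannot be acquired for every trigger; each acquisition must use a much shorter record.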
I tried it. In Run mode the DSOX2002A uses 500 kpoints of memory per channel (whether one or all channels are used). The Trig Out frequency is as specified by Agilent in their PDF (up to 50 000 Hz). I measured it.
Yes, at 10ns/div timebase you will never see the whole record on the screen.
Well, only the Agilent guys know how their scope really works. That's it...
Well, DSOX2000 is as fast as possible... so you cannot set the memory length, interpolation or even dots/vectors. No scope is perfect.
(1 / samples-per-second) × samples × wfms/s = seconds? I need coffee ^^
The real record length is probably equal to the display resolution (the visible area width, is it 600 dots?).
It could work as on the Tek DPOs (not sure if the latest models do this too), where the buffer is fully written once at the start of sampling (so a single shot writes it once, and RUN writes it once and then keeps going), after which it is refreshed as fast as the hardware allows (e.g. the DPO3000 with 10 kpoints, the TDS700 with 500 points).
The dots/vectors thing aside, isn't the Agilent essentially "perfect" in this case?
i.e. it automatically selects whatever memory it needs to get blindingly fast update rate in run mode, and then gives full memory in stop mode when you want to analyse it, right?
In some ways, I would say yes. But aside from the fact that (if an Agilent owner) I would just prefer to be able to turn this feature on and off (as with interpolation), some of the results of their method seem a bit questionable to me (although granted only under certain circumstances). The problem with 'automatic' features in complex technology is that if they're not extremely well-documented (in terms of the ramifications on all related sub-systems), the trade-offs are often not clear.
For example, what exactly does the ASIC do when I want my trigger position 5 divisions to the left of the screen edge? Or what exactly does it do with segments: does it capture them at the fastest speed possible while cutting down the sample length, or does it maintain the sample length while reducing its update rate?
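The two segment strategies in that last question can be contrasted with a toy model (illustrative only: the per-segment re-arm dead time is a made-up figure, and neither strategy is claimed to be what the ASIC actually does):

```python
# Toy model of segmented acquisition: each segment costs its sampling time
# plus a fixed re-arm dead time. All numbers are hypothetical.
SAMPLE_RATE = 4e9  # Sa/s (assumed)
DEAD_TIME = 1e-6   # s of re-arm overhead per segment (hypothetical)

def max_segment_rate(points: int) -> float:
    """Maximum segments/s for segments of `points` samples each."""
    return 1 / (points / SAMPLE_RATE + DEAD_TIME)

# Strategy A: cut the segment length to keep the update rate high.
print(max_segment_rate(1_000))      # short segments -> high segment rate
# Strategy B: keep the full record length and accept a lower rate.
print(max_segment_rate(1_000_000))  # long segments -> much lower rate
```

Either way there is a trade-off, which is exactly why it matters whether the scope documents which knob it turns automatically.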