I have tested again, this time with the Rigol DG4000 as signal source. Now the normal history mode works as well. Each of the 21 data packets is recorded. One should also keep in mind that Peak Detect mode is slower.
I will test again with the internal pattern generator.
Peter
Is it really possible that the delta time is always exactly the same, down to the last nanosecond? This looks very strange to me.
(RTB2004-PulseBurst-1-21-Sample.png)
Because of the trigger re-arm uncertainty I'm getting a bit suspicious about what the scope software is showing me.
I remember the DS1054Z showing a trigger delay of 10.00000000456ps or so.
Trigger rearm-to-ready time is separate from the internal time base. The scope is showing exactly what time elapsed between triggers. It knows. It is simply not ready to take another sample in exactly the same interval every time. That is true for all scopes.
And I think I understood another thing now.
R&S have the Single button in Trigger, but it really is single acquisition, not single trigger.
On the DS1054Z, when I pressed Single it was a single trigger, regardless of whether Auto or Manual was chosen before.
Makes sense now, but is slightly different behaviour.
As you said, I really should study the manual more in depth.

> Trigger rearm-to-ready time is separate from the internal time base. The scope is showing exactly what time elapsed between triggers. It knows. It is simply not ready to take another sample in exactly the same interval every time. That is true for all scopes.

I know the scope knows, but that is not the point. The point is that I need to know the time it _will_ need in order to decide whether I need Fast Segmentation or not.
If I have no idea how long it takes before the next trigger is possible, how can history be of any use? Only with very long times between triggers (>100 ms), or by using Fast Segmentation.
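The decision described above boils down to a trivial check, assuming you can measure or at least bound the re-arm time; the function name, the margin factor, and the example numbers here are my own, purely for illustration:

```python
def needs_fast_segmentation(rearm_time_s, trigger_gap_s, margin=2.0):
    """Return True if the expected gap between trigger events is too
    close to the worst-case re-arm time to trust normal history mode.
    margin is a safety factor against re-arm jitter (assumption)."""
    return trigger_gap_s < margin * rearm_time_s

# Assumed numbers: 50 ms worst-case re-arm, packets arriving every 30 ms
print(needs_fast_segmentation(0.050, 0.030))  # -> True, use Fast Segmentation
# Packets every 500 ms leave plenty of headroom
print(needs_fast_segmentation(0.050, 0.500))  # -> False
```

The whole complaint is that the manual gives no number to feed into `rearm_time_s` in the first place.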
Still not quite convinced this is an optimal implementation.
I have observed the trigger output with another oscilloscope and I have noticed errors.
Immediately after starting a recording there are gaps in the triggering; only after a short time does the scope trigger on each packet. This can of course become a problem.
10 Msamples of memory depth are available on each channel if all channels are active. When interleaved, 20 Msamples are available.
I don’t think it’s impossible (or extremely hard) to implement for the digital channels only in an FPGA, without waste of memory even. You’d capture non-sparse by default, but once the number of running zeros or ones exceeds the length of two timestamps plus a sentinel value, you instead emit that sentinel value to indicate that sparse capture is happening now, followed by the current timestamp, and then a second timestamp once the value changes again.
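A minimal software sketch of that scheme (the sentinel value, timestamp width, and threshold are assumptions for illustration; a real FPGA would do this in hardware on the live sample stream, and the tuple here also carries the run's logic level so the example can round-trip):

```python
SENTINEL = 0xAA               # reserved marker value (assumption)
TS_BYTES = 4                  # bytes per timestamp (assumption)
THRESHOLD = 2 * TS_BYTES + 1  # run length at which sparse storage pays off

def encode(samples):
    """Encode a list of 0/1 digital samples. Runs shorter than
    THRESHOLD are stored raw; longer runs become a
    (SENTINEL, level, start_index, end_index) record."""
    out, i, n = [], 0, len(samples)
    while i < n:
        j = i
        while j < n and samples[j] == samples[i]:
            j += 1                       # find the end of the current run
        if j - i >= THRESHOLD:
            # sparse: sentinel + timestamp of run start + timestamp
            # of the sample where the value changes again
            out.append((SENTINEL, samples[i], i, j))
        else:
            out.extend(samples[i:j])     # raw, non-sparse storage
        i = j
    return out

def decode(encoded):
    out = []
    for item in encoded:
        if isinstance(item, tuple):
            _, level, start, end = item
            out.extend([level] * (end - start))
        else:
            out.append(item)
    return out

samples = [1, 0, 1] + [0] * 50 + [1, 1]
enc = encode(samples)
assert decode(enc) == samples   # lossless
assert len(enc) < len(samples)  # the long run of zeros collapsed
```

The point of the threshold is exactly as stated above: a run is only worth compressing once it is longer than the sentinel plus two timestamps it would be replaced with.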
I now also found this "fact sheet": https://scdn.rohde-schwarz.com/ur/pws/dl_downloads/dl_common_library/dl_brochures_and_datasheets/pdf_1/Option_sheet_-_RTx-B1_mixed_signal_analysis_v1.10.pdf
It says there is "no consumption of analog channels", and that the memory depth is "10 Msample". This contradicts the 20 Msamples table entries for when only analog channels 1 or 1+3 are active (probably just understating the capabilities), but it does also further suggest that all 4 analog channels can be enabled with any number of digital channels, and that there will always be at least 10 Msamples for all of those. On some signals, I might be able to reach up to 160 Msamples total with the proper triggers and fast segmentation.
MSOs don't have compression like most logic analysers do. The reason is that compression won't work for analog signals AND you would need extra memory for timestamps. Compression doesn't fit the memory model of a DSO.