1.1. Does this mean that the USB link is USB 2.0?
1.2. Did you test the performance of the device with different power supplies?
For instance:
=> the USB link
vs
=> a tweaked USB link that allows power to be drawn from an external PS, for instance a "nice" linear PS (less noisy than the power coming from a hub / PC...)
2. Trigger frequency: internal vs. external.
I guess this question applies to any sampling scope... (?)
Which is better, in your view:
2.1. relying on the hardware to find the trigger frequency and stick to it (timebase + PLL; given that both have their own jitter)
2.2. OR using an external clock signal that matches the frequency to trigger on; for instance, a DIY clock based on a "good" TCXO? (As I don't deal with >100 different frequencies, it is cheap to buy a fixed-frequency oscillator from a vendor, and quite easy to get rather good stability.)
IMHO, the crappy TCXO is a limiting factor here, as there is no external connection for a better reference such as an OCXO.
At 1 TS/s, i.e. a 1 ps sampling interval, IMHO any 1...5 ps deviation will be hard to analyze, and hard for any live-measurement video to demonstrate convincingly.
my 2 cents
Hp
How do I save/recall the settings? Is there an option to have the program use the last settings?
Your documentation mentions different firmware revisions. How are you making these available, and what tools are in place to upgrade the product in the field? Or are you requiring that the product be returned to be upgraded?
From your post, I get the feeling the case must be opened to use this hardware you mention.
I would have thought any calibration data would be stored in an isolated area. If it is not, I would have expected you to have tools to merge it with the firmware.
Are you considering the FPGA part of the firmware? Or is everything done in the FPGA? If so, are those the tools you are referring to?
The binary data consists of d chunks of 3Nch bytes each, where Nch is the number of channels requested via the bitmask. Each chunk contains Nch three-byte entries, each representing a single CDF value F(V ;Δt) for each requested channel.
The first two bytes of each entry represent the voltage V on a scale from –1.5 V (0x0000) to +1.5 V (0xFFFF) in big-endian format.
The third byte represents the value of F(V ;Δt) multiplied by 255.
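As a sanity check, the entry layout described above can be decoded in a few lines of Python. This is just a sketch: `parse_chunk` is a hypothetical name, and the third byte is taken at face value as F(V;Δt)·255 as written here.

```python
import struct

def parse_chunk(chunk: bytes, nch: int):
    """Decode one 3*nch-byte chunk into (voltage, cdf) pairs, one per
    requested channel, following the layout described above."""
    assert len(chunk) == 3 * nch
    entries = []
    for i in range(nch):
        # ">HB": big-endian 16-bit voltage code, then one CDF byte
        code, cdf_byte = struct.unpack_from(">HB", chunk, 3 * i)
        volts = -1.5 + 3.0 * code / 0xFFFF   # 0x0000 -> -1.5 V, 0xFFFF -> +1.5 V
        entries.append((volts, cdf_byte / 255.0))  # third byte is F(V; Δt) * 255
    return entries
```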
Reading your manual and thinking of writing some simple software to try it out. I'm curious about the calibration: you say that once the device has reached steady state, the need for calibration every second is reduced. With that in mind, do you have a way to monitor the Xilinx or other parts for temperature? The manual does not mention this and I don't see it in your software.
The 1 ps timing precision of the GigaWave is guaranteed only up to one second after this command is issued. We therefore recommend issuing this command at least once per second. If data acquisition (R) takes more than one second, the CAL command must be issued immediately before the corresponding delay (D) and data acquisition (R) commands.
If the device is in thermal steady-state (∼5 minutes after warmup), the 1 ps precision can often
be maintained for up to 30 seconds without recalibration. If calibrating at a reduced frequency,
the 1 ps specified accuracy is not guaranteed, and it is the user’s responsibility to verify that the
timing precision meets the application requirements.
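The recalibration policy quoted above boils down to a simple elapsed-time check. A minimal sketch (the function and constant names are mine, not from the manual; CAL, D, and R are the commands quoted above):

```python
COLD_LIMIT_S = 1.0     # re-issue CAL at least once per second after power-up
STEADY_LIMIT_S = 30.0  # often acceptable in thermal steady-state (~5 min warmup)

def needs_recal(elapsed_since_cal: float, steady_state: bool = False) -> bool:
    """True if CAL should be issued before the next delay (D) /
    acquisition (R) pair, per the policy quoted above. Note that beyond
    1 s the 1 ps figure is no longer guaranteed, even in steady state."""
    limit = STEADY_LIMIT_S if steady_state else COLD_LIMIT_S
    return elapsed_since_cal >= limit
```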
The commands seem fairly easy to follow, until I get to the Acquire CDF (R) command. I assume the software pulls the data for whatever channels are selected, increments the delay, runs a cal, pulls the next data, and repeats the cycle for however much data we want.
Quote: The third byte represents the value of F(V ;Δt) multiplied by 255.
I'm lost. 2.2.1 doesn't explain things in simple enough terms with enough detail for me to follow along with my limited math skills. An example in plain text would be helpful.
Or, do I need to find a stats book and start reading?
For single-valued signals (e.g. a simple periodic waveform), you can more-or-less average the two neighboring voltages where F(V; Δt) first goes from <0.5 to >0.5 to find the location of the step (V0). (The software does something fancier, fitting a Gaussian error function.)
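In code, the simple version might look like this (a sketch of the averaging approach described above, not the actual error-function fit the software uses):

```python
def step_voltage(samples):
    """Given (voltage, cdf) pairs in ascending voltage order, return the
    average of the two neighboring voltages where F(V; Δt) first crosses
    0.5 - a rough estimate of the step location V0."""
    for (v_lo, f_lo), (v_hi, f_hi) in zip(samples, samples[1:]):
        if f_lo < 0.5 <= f_hi:
            return 0.5 * (v_lo + v_hi)
    return None  # no crossing found (e.g. signal never reaches mid-level)
```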
Do you always use a Gaussian fit to get the centroid?
In the above example of processing the chunks of data titled "4.3.2 Example", the commentary mentions 0.476 mV - but I am pretty sure that is meant to be 0.476 V or 476 mV?
I suspect there is still some critical component I am missing. If I power up the GigaWave and attempt to control it with my software, it appears not to run. All the commands seem to return a valid response, and it sends back data, but it's a flat line, almost as if it's not running. If I run your software first, then I can run my software and everything works; I can disconnect and restart and it works just fine. It acts like there is a command you are sending that I am not, one that sets the data collection into motion. Or perhaps there is an order to the commands that kicks it off.
Another thing I noticed: as I increase the resolution of the delay, something happens with the DSO and it starts to complain about no triggers. If I increase the trigger holdoff time, that appears to correct it. I have been testing with a 1 GHz source, and I need to push this number out to 1 us, for example, to use a resolution of 20 ps. The first several reads are always correct. How soon it fails depends on the combination of the holdoff and the resolution (how much I increment the post-trigger delay by).
Updated documentation is attached, and will be added to the next revision of the manual. We also caught another minor mistake - the returned value is actually 1 - F(V; ∆t).
Rather than s8 & e8, should those be s0...s7 and e0...e7? (for an 8 channel model).
Also having "Rx s0 s1 s2 s3 e0 e1 e2 e3" as the example command format was confusing to me, as that assumes a 4 or 8 channel model with a specific bitmask with 4 enabled channels.
Maybe "Rx s0 ... e0 ..."?
Or I may have misunderstood the syntax completely.
Unfortunately, we can't reproduce this problem - modifying the example program to take 101 samples between 15 ns and 15.1 ns (i.e. 1 ps resolution) works as intended with a just-plugged-in scope. Could you report if the read failure is persistent (i.e. if you retry the command a few times, does it fail every time after the first)?
We also caught another minor mistake - the returned value is actually 1 - F(V; ∆t).
I was confused by the R1 10000 10000 50000 50000 command. It appears to me that you are telling it you only have one channel enabled but are setting the limits for two channels.
Does the number of channels selected have no purpose in this context? Are you in this example showing that you have a 2-channel scope and are just setting the limits for both as a one-time setup (assuming they do not change), and later requesting the number of channels (1 and/or 2) to read?
Quote: Unfortunately, we can't reproduce this problem - modifying the example program to take 101 samples between 15 ns and 15.1 ns (i.e. 1 ps resolution) works as intended with a just-plugged-in scope. Could you report if the read failure is persistent (i.e. if you retry the command a few times, does it fail every time after the first)?
Once it fails, it continues to fail every time. Is there a reason I couldn't request any number of samples I want? If I keep the delays and resolution the same as where it would fail, I can reduce the number of requested samples and it runs fine. I was assuming there was no limit to how many requests I make. I thought it might have something to do with my asynchronous calibration command, so I disabled that, but it has no effect. It's almost like I am causing some overflow condition by asking for too much data. I tried to throttle how fast I send these requests, but I currently wait for the end of the frame before changing states.
Quote: We also caught another minor mistake - the returned value is actually 1 - F(V; ∆t).
To be clear, we take (1 - (third number from each chunk divided by 255)) to get the 0-1 values you show in your example? You are not wanting to add 1 to the third number divided by 255. Examples are good.
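In other words, if I've understood the correction right, the conversion would be something like this (my own helper name, just to check my reading):

```python
def cdf_from_third_byte(b: int) -> float:
    """If the third byte encodes (1 - F(V; Δt)) * 255, recover F by
    subtracting from one - not by adding one."""
    return 1.0 - b / 255.0
```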
When you perform your Gaussian fit, do you truncate the CDF to within 10-90%, then mirror the CDF, then fit to that shape? I assume you are not fitting to the CDF's S shape directly.
There is no limit on the number of samples you can take. (In fact, there should be nothing "stateful" in the firmware, and issuing the R command repeatedly should result in identical behavior.)
Does the point of failure always occur at the same number of samples? If so, then this may be caused by the serial buffer filling up. When implementing the example program, we issue a read immediately after sending each command to clear all serial buffers. We can take an arbitrary number of samples without issue.
The timing shouldn't be critical, and no throttling is done in our official software.
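The read-immediately-after-write pattern might be sketched like this (illustrative only; `FakeSerial` is a stand-in for a real serial port object, not our actual code):

```python
class FakeSerial:
    """Minimal stand-in for a serial port, just to show the pattern."""
    def __init__(self, responses):
        self._responses = list(responses)
        self.sent = []

    def write(self, data: bytes):
        self.sent.append(data)

    def readline(self) -> bytes:
        return self._responses.pop(0) if self._responses else b""

def command(port, cmd: bytes) -> bytes:
    """Send one command and immediately read its reply, so unread
    responses never accumulate in the serial buffer."""
    port.write(cmd + b"\n")
    return port.readline()
```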
It repeats at the same location which is dependent on the Trigger Holdoff. GigaWave will start to send "NO TRIG ZERO SJLI" or "NO TRIG TIMEOUT n SJLI". With Trigger Holdoff set to 50, it will start kicking out these messages very soon. Set it to 500, and it will run for a long time. I don't see why this would cause any problems with a GHz waveform. Both values are certainly longer than the 1ns period.
Moving the R level setting into the first Read command as your example shows has no effect. It still behaves the same. I'm sure I am missing something obvious.