I'm not sure if this has been discussed already, but can someone explain how the spec works:

Vertical Resolution: 8 bits

Vertical Sensitivity Range[3]: 1 mV/div~10 V/div

Dynamic Range: ±5 div (8 bits)

Note[3]: 1 mV/div and 2 mV/div are a magnification of 4 mV/div setting. For vertical accuracy calculations, use full scale of 32 mV for 1 mV/div and 2 mV/div sensitivity setting

If you set acquisition to normal and the scale to 500uV/div, the smallest step seen is 66.6uV. Setting Dots or Vectors doesn't make a difference. This would imply the screen is showing 66.6uV * 255 steps / 10 divisions = 1.7mV/div (I'm not implying it's actually measuring at that resolution, just showing it).
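The back-of-envelope math above can be sketched like this (my assumption: the display quantizes to 8 bits, i.e. 255 steps spread across 10 vertical divisions; the function name is just for illustration):

```python
# Infer the effective vertical scale from the smallest observed y-step,
# assuming an 8-bit (255-step) display spread over 10 divisions.
def implied_volts_per_div(step_v, levels=255, divisions=10):
    full_scale = step_v * levels   # total displayed voltage range
    return full_scale / divisions  # effective V/div

vdiv = implied_volts_per_div(66.6e-6)
print(f"{vdiv * 1e3:.2f} mV/div")  # ~1.70 mV/div at the 500uV/div setting
```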

This is even seen at the 1ns time base. But the smallest x deviation I see is 40-90ps, and 8GS/s = 125ps per sample. So I can't see it averaging x-data to get an intermediate value, but it may be a consequence of the sin(x)/x interpolation? With High Res on, the y-resolution increases, as expected.

In the screenshot below, it's a bit confusing to me to see a bunch of samples at the same level, then a single 90ps positive deviation, then back to the same level. It's an odd coincidence that the ASIC can do 10GS/s, which is 100ps per sample. Is there any chance it is not running at 8GS/s, or are these just artifacts of various internal calculations?
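For reference, the sample intervals implied by the two candidate rates, next to the observed deviations (this is just arithmetic, not a claim about what the ASIC is actually doing):

```python
# Sample period for the two rates discussed above.
for rate_gsps in (8, 10):
    period_ps = 1e12 / (rate_gsps * 1e9)
    print(f"{rate_gsps} GS/s -> {period_ps:.0f} ps/sample")
# 8 GS/s -> 125 ps/sample; 10 GS/s -> 100 ps/sample.
# The observed 40-90 ps x deviations are finer than either interval,
# which would be consistent with sin(x)/x interpolation inserting
# intermediate display points rather than a higher raw sample rate.
```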

edit: at the 500uV/div or 1mV/div settings, the 20MHz bandwidth limit is automatically enabled.

At the 2mV/div range the smallest y deviation is 133uV, and x is the same as before, 60-90ps.
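A quick cross-check against Note[3] (my reading of it, so treat this as an assumption): if 1 and 2 mV/div are really the 4 mV/div setting magnified, the full scale at 2mV/div is 32 mV, and an 8-bit LSB on that range would be:

```python
# 8-bit LSB on the 32 mV full scale that Note[3] specifies for 1-2 mV/div.
lsb = 32e-3 / 255
print(f"{lsb * 1e6:.0f} uV")  # ~125 uV, in the same ballpark as the observed 133uV step
```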