What about the real-time performance? The lack of UI lag is one of the things I like the most about Keysight oscilloscopes, at least the ones I’ve used (which are all WinCE-based units, including older 1000-series).
The 1200x series (Linux) is based on the same hardware as the 1100x (WinCE), and the FPGA binary is the same. I'd expect similar GUI performance between the two.
That is a very, very incorrect assumption to make. UI responsiveness has nothing to do with acquisition hardware performance as such: many (most?) modern scopes have incredible acquisition hardware but struggle to provide a responsive UI for it. And it's perfectly possible to create a snappy UI on top of very slow acquisition hardware (for example, Saleae Logic running a basic 24 MHz logic analyzer). The worst case is a slow UI whose design creates dependencies on the acquisition hardware, so that the hardware can hang the UI while it does its work.
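To illustrate that last point, here's a minimal sketch (hypothetical, not any vendor's actual firmware) of the decoupled design: an acquisition thread pushes captures into a small bounded queue and drops stale frames, while the UI loop only ever does a non-blocking read, so slow or stalled hardware can never freeze the interface.

```python
import queue
import threading
import time

# Small buffer: the UI only cares about the latest waveform.
frames = queue.Queue(maxsize=2)

def acquire_loop(n_frames):
    """Simulated slow acquisition hardware (hypothetical timings)."""
    for i in range(n_frames):
        time.sleep(0.05)               # pretend each capture takes 50 ms
        try:
            frames.put_nowait(i)
        except queue.Full:
            try:
                frames.get_nowait()    # drop the oldest frame...
            except queue.Empty:
                pass                   # ...unless the UI just consumed it
            frames.put_nowait(i)       # the newest frame always fits

def ui_poll():
    """Called once per redraw; must never block on the hardware."""
    try:
        return frames.get_nowait()
    except queue.Empty:
        return None                    # nothing new: redraw the old waveform

worker = threading.Thread(target=acquire_loop, args=(5,), daemon=True)
worker.start()

latest = None
deadline = time.time() + 1.0
while time.time() < deadline:
    frame = ui_poll()                  # returns immediately, data or not
    if frame is not None:
        latest = frame
    time.sleep(0.01)                   # ~100 Hz UI loop, independent of acquisition

worker.join()
print(latest)
```

The key design choice is that the dependency only runs one way: the acquisition side may block on hardware, but the UI side never waits on the acquisition side.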
The responsiveness of a UI hinges on many, many things, which is why the same hardware can be super responsive running one OS or UI toolkit and slow as molasses running another: memory footprint, memory management, the amount of hardware acceleration used, the features offered, the amount of abstraction, the algorithms chosen, the skill of the programmers, the choice of API, the degree of compiler optimization, and more.
Just as an example: I remember when Mac OS X 10.0 first came out in 2001. It was super stable compared to Mac OS 9, but it didn't yet leverage much GPU acceleration, so it felt sluggish next to Mac OS 9 on the same hardware. 10.1 optimized some things, but only when 10.2 introduced hardware acceleration for several more layers of the graphics stack did it really feel snappy for the first time; again, on the same hardware. Mac OS 9 basically allowed programs to write directly to the screen, whereas Mac OS X gave each window its own buffer, which then got composited (layered with transparency, etc.) and double-buffered to the screen to avoid tearing. More capable, more robust, and slower, especially when not hardware-accelerated.
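A toy sketch of that per-window compositing model (my own illustration, not Apple's actual code): each window renders into its private buffer, the compositor blends the buffers back-to-front with alpha into an off-screen back buffer, and a final swap presents a complete, tear-free frame.

```python
WIDTH, HEIGHT = 8, 4

def solid(w, h, value, alpha):
    """A window's private backing buffer: (value, alpha) per pixel."""
    return [[(value, alpha)] * w for _ in range(h)]

def composite(windows, w, h):
    """Blend window buffers back-to-front into a fresh back buffer."""
    back = [[0.0] * w for _ in range(h)]          # start from a cleared buffer
    for buf, (ox, oy) in windows:                  # back-to-front order
        for y, row in enumerate(buf):
            for x, (value, alpha) in enumerate(row):
                ty, tx = oy + y, ox + x
                if 0 <= ty < h and 0 <= tx < w:
                    # classic "over" alpha blend with what's already there
                    back[ty][tx] = alpha * value + (1 - alpha) * back[ty][tx]
    return back

windows = [
    (solid(8, 4, 1.0, 1.0), (0, 0)),  # opaque full-screen background window
    (solid(4, 2, 0.0, 0.5), (2, 1)),  # 50%-translucent window on top
]

back_buffer = composite(windows, WIDTH, HEIGHT)
front_buffer = back_buffer            # the "swap": present a finished frame
print(front_buffer[1][2])             # overlapped pixel: 0.5*0.0 + 0.5*1.0 = 0.5
```

Compared with letting every program draw straight to the framebuffer, this does strictly more work per frame, which is exactly why it crawled until the blending moved onto the GPU.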
What boggles my mind is how sluggish so many modern scope GUIs are, despite having application processors far more powerful than those in older scopes that were nice and snappy.