Even my Tek TDS460 has -- besides the annoyingly slow-to-respond menus -- an occasional bug which, I suspect, goes something like this: user input is interrupt-triggered, and the front-panel encoders sometimes go crunchy. So just sliding a cursor around can freeze the UI, requiring a power cycle.
It may not be exactly the same thing, but the underlying problem – coupling each UI state change to a particular update – is the same.
Even in my example of finite impulse response (FIR) filter analysis as a standalone HTML page, I separated "calculation" from "redraw". Granted, because the redraw was fast enough even on the slowest machine (with a modern browser, mainly for Canvas support), I just chained them together. If it had become a problem, all I'd have needed was a global variable holding the current redraw timeout: whenever a recalculation is triggered, it would first cancel any pending redraw, and then set a new timeout after the calculation completes.
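That cancel-and-reschedule pattern is just a debounce. A minimal sketch in Python, using `threading.Timer` as a stand-in for the browser's timeout mechanism (the `DebouncedRedraw` name and the 50 ms default are my own illustration, not from the original page):

```python
import threading

class DebouncedRedraw:
    """Coalesce redraw requests: only the last request within the
    delay window actually fires the redraw."""

    def __init__(self, redraw, delay=0.05):
        self._redraw = redraw      # the actual (expensive) redraw function
        self._delay = delay        # seconds to wait before redrawing
        self._timer = None
        self._lock = threading.Lock()

    def request(self):
        # Cancel any pending redraw, then schedule a fresh one.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._delay, self._redraw)
            self._timer.start()
```

Ten rapid `request()` calls then produce exactly one redraw, which is the whole point: the UI stays responsive no matter how fast the knob is twirled.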
I've used the Gtk+ and Qt widget toolkits extensively. (Gtk+ is pure C, Qt is C++.) Both are built around an event loop under toolkit control, so they require an event-driven programming approach. Neither works well with heavy computation done in the same thread as the UI event loop, and this is typical of all UI toolkits and approaches. Even so, that –– do everything within the UI event loop, perhaps in the idle handler –– is exactly what the widget toolkit documentation recommends. Stupid. You do not even need multiple hardware threads! All you need is a way to interrupt or time-slice the heavy computation. The simplest implementation only requires a timer interrupt, plus switching the processor state and stack between the concurrent tasks.
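True preemptive time-slicing needs that timer interrupt and a context switch, but the cooperative flavor of the same idea can be sketched in a few lines: break the heavy computation into slices that hand control back to the event loop. Here is a hedged illustration using a Python generator, where each `next()` would correspond to one idle-handler callback (the chunk size of 1000 is arbitrary):

```python
def heavy_computation(n):
    """Cooperatively time-sliced computation: hands control back to
    the caller (the event loop) every slice, then yields the result."""
    total = 0
    for i in range(n):
        total += i * i
        if i % 1000 == 0:
            yield None       # give the event loop a chance to run
    yield total              # final slice carries the result

def drive(gen):
    """Event-loop side: resume the computation one slice at a time.
    In a real UI, each iteration is one idle-handler invocation."""
    result = None
    for result in gen:
        pass                 # UI events would be processed between slices
    return result
```

The UI never blocks for longer than one slice, which is all "responsive" really requires.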
One of the real tricks is having a way to cancel and/or restart the heavy calculation. (An atomic flag and an early return from the computation function work well.) In an oscilloscope, this is not really an option, because –– assuming I have the correct picture of how they work –– the "heavy computation" is actually communication with the dedicated capture hardware, and setting it up. I can well imagine that a hardware FPGA/ASIC designer without any user interface experience would design this communication as a full setup information package, instead of a per-variable/feature one, because the former is just so much simpler and more robust. But it also means that the UI must be very careful about when it decides to send such setup packages: delay too long, and the UI will be sluggish. Queue the changes, and you get the "twirl-a-knob-and-it-will-freeze".
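The atomic-flag-plus-early-return trick looks like this; a minimal sketch in Python, where `threading.Event` plays the role of the atomic flag (the function name `heavy` is just illustrative):

```python
import threading

def heavy(data, cancel):
    """Long computation that polls the cancel flag and bails out early.
    The UI thread sets the flag to abort, then launches a fresh run."""
    result = 0
    for x in data:
        if cancel.is_set():      # set by the UI thread to cancel/restart
            return None          # caller discards a cancelled result
        result += x * x
    return result
```

To restart: set the flag, join (or let finish) the old worker thread, clear the flag, and spawn a new worker with the new parameters. The check costs one atomic load per iteration, which is negligible next to any real work.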
No, for best results, the control messages need to be categorized by the latencies of their effects. Something like changing the trigger level should be essentially instant. Something that causes e.g. a relay to change state takes human-noticeable time (a fraction of a second), and therefore must override any of the faster changes. The UI side needs a configuration state machine that is aware of these latencies, so that it does not bother to e.g. set the input scale if it knows a different scale is already selected in the UI. And yes, you really do want to apply the (necessary) highest-latency changes first: I'll leave it as an exercise to work out why.
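A toy version of that configuration state machine, sketched in Python under my own assumptions (the setting names, the two latency classes, and the `ConfigFilter` name are all illustrative, not from any real scope firmware): new values for a setting overwrite queued ones instead of piling up, settings already in the hardware are skipped, and a flush sends slow (e.g. relay-switching) changes before fast ones.

```python
FAST, SLOW = 0, 1   # e.g. trigger level vs. input-relay switching

# Latency class per setting -- hypothetical examples.
LATENCY = {"trigger_level": FAST, "input_scale": SLOW}

class ConfigFilter:
    """Sits between UI events and the capture engine, coalescing changes."""

    def __init__(self, send):
        self._send = send        # function that talks to the hardware
        self._hw = {}            # last state actually sent to hardware
        self._pending = {}       # coalesced, not-yet-sent changes

    def set(self, name, value):
        if self._hw.get(name) == value:
            self._pending.pop(name, None)   # already there: nothing to do
        else:
            self._pending[name] = value     # overwrite any queued value

    def flush(self):
        # Highest-latency changes go out first.
        for name in sorted(self._pending, key=lambda n: -LATENCY[n]):
            self._send(name, self._pending[name])
            self._hw[name] = self._pending[name]
        self._pending.clear()
```

Twirl the trigger knob ten times between flushes and only the final value is ever sent; ask for the scale the hardware already has and nothing is sent at all.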
Simply put, you need an intermediary state machine or filter between the user events and the capture engine, to minimize the latency from UI changes to visible effects. For hardcore FPGA/ASIC designers who disdain anything human-scale, this is "ugly" and "silly"; they feel it is an intrusion into their domain.
Similarly, if you write a user interface that communicates with, say, a USB, serial, or Ethernet-connected device, you need to split the communication into a separate thread (often a state machine), and use a thread-safe queue or similar to pass commands/requests/state changes and results between the two. Either side also needs that small state-change optimizer machine, so that changes are not merely queued, but combined for efficiency. Hell, even if you just write a crude tool to display the microphone signal level at the edge of your display, you'll want a separate thread for that, so that UI hiccups do not affect the audio stream and vice versa.
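The two-thread, two-queue skeleton is small; a sketch in Python using the standard `queue.Queue` (the `comm_worker` name and the `("done", cmd)` result shape are my own placeholders for real device I/O):

```python
import queue
import threading

def comm_worker(commands, results):
    """Device-communication thread: drains the command queue,
    talks to the device, posts results back to the UI thread."""
    while True:
        cmd = commands.get()
        if cmd is None:              # sentinel: shut the thread down
            break
        results.put(("done", cmd))   # stand-in for actual device I/O

commands, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=comm_worker,
                          args=(commands, results), daemon=True)
worker.start()
```

The UI thread only ever touches the queues, so a stalled device can never freeze the interface; the UI polls `results` from its event loop (e.g. an idle handler or timer tick).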
For most programmers, this is hard. It is already hard to switch between event-driven and state machine logic; doing so merely to get a responsive UI is simply not worth the effort to most –– even when they are paid to do exactly that.
I definitely blame the developers and programmers for these kinds of issues. They are solvable, and the solutions are well known. The fuckers are just too lazy or inept to do the work right. (Then again, the one complaint I always got when I wrote software for a living was "it does not need to be perfect; it just needs to look like it works. We can fix the issues later, when we have time." So maybe I'm expecting too much from people.)