Thanks very much for the update, atx, and the report on the still-broken decoding.
From my POV, this renders the instrument nearly worthless. Why? Because if I buy an MSO, I want to use both the analog and digital channels for my protocol analysis tasks. The failure results you reported show that the best case is the 1 GS/s + 1.4 MSa setting, which buys you a whopping 1.4 ms of capture time. Normally, in that situation, you'd back off on the sample rate (and increase the memory depth) to widen the time window. However, doing either breaks the decode, at smaller and smaller intervals (sub-millisecond).
What good is that? If I'm doing SPI decode, for example, I may want to oversample by around 10x, just to get well-defined edge timing. But I shouldn't have to oversample by 125x just so my decoder will work. That's throwing away >90% of my sample space for nothing, and on top of that, being able to use only 10% of the memory I paid for. (One should be able to capture 175 ms, not 1.4 ms.)
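To make the arithmetic concrete, here's a quick sketch. The 14 MSa maximum depth and the ~8 MHz SPI clock are my assumptions (125x oversampling at 1 GS/s implies a clock in that ballpark); adjust for your actual setup:

```python
# Capture-window arithmetic: window = memory depth / sample rate.
# Assumed figures: 14 MSa max memory, ~8 MHz SPI clock (hypothetical,
# inferred from 125x oversampling at 1 GS/s).

def capture_window_s(depth_samples: float, rate_sa_per_s: float) -> float:
    """Seconds of signal captured at a given depth and sample rate."""
    return depth_samples / rate_sa_per_s

# Broken case: forced to 1 GS/s with usable depth capped at 1.4 MSa.
forced = capture_window_s(1.4e6, 1e9)    # 1.4e-3 s = 1.4 ms

# What should work: 10x oversample of an 8 MHz clock = 80 MS/s,
# using the full 14 MSa of memory.
desired = capture_window_s(14e6, 80e6)   # 0.175 s = 175 ms

print(f"forced: {forced * 1e3:.1f} ms, desired: {desired * 1e3:.1f} ms")
```

Same memory, same signal; only the forced sample rate differs, and it costs a factor of 125 in observation time.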
The fact that you can pull the data out of the scope intact and analyze it on your PC is small consolation (IMO). There are cheaper (and faster) ways of doing that than an MSO2000. Get a Pico box with a USB3 interface and suck the data out at ~130 MB/s. Instead of the 25 sec for a 14M-sample transfer (560 kB/s) you reported on your MSO2000, you're looking at a fraction of a second (about 0.2 sec, with overheads).
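The transfer-time comparison works out like this (a back-of-envelope sketch; I'm assuming 1 byte per sample, and the ~130 MB/s USB3 figure is a rough sustained rate, not a spec guarantee):

```python
# Transfer-time arithmetic for a 14 MSa record, assuming 1 byte/sample.

def transfer_time_s(n_bytes: float, bytes_per_s: float) -> float:
    """Raw transfer time, ignoring protocol overheads."""
    return n_bytes / bytes_per_s

record = 14e6  # 14 MSa at 1 byte/sample (assumed)

scope = transfer_time_s(record, 560e3)   # MSO2000 link: 25 s
pico = transfer_time_s(record, 130e6)    # USB3 at ~130 MB/s: ~0.11 s raw

print(f"scope: {scope:.1f} s, pico: {pico:.2f} s (before overheads)")
```

Add real-world overheads and the USB3 path still lands around 0.2 s, versus 25 s, roughly two orders of magnitude.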