No. As shown above, with 800 points (actually 1600, because it requires a min and a max value for every point) you will get the following picture.
Yes, I realize that. I wasn't arguing for a mere 800 points worth of transmitted data.
Yes, you can compress it. But there are two issues. First, it has to be lossless compression, because artifacts on the image are not acceptable for an oscilloscope.
That's fair, but that only constrains the degree or kinds of loss that would be acceptable.
With lossless compression you cannot obtain a good compression ratio.
That depends on the amount of difference between frames. If the amount of frame-to-frame difference is consistently substantial (like it would be when you're looking at random noise) then you won't get good compression, but at 24 frames per second or higher, the end result will be so much of a mess that accurate representation won't matter anymore anyway. The entire point of being lossless is so that you can see transients easily and so that you'll have correct gradients when the display is reasonably stable. Both of those situations are ones for which compression will be high.
So I expect that the amount of compression is actually likely to be
very high, even if you make it lossless.
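To make that concrete, here's a minimal sketch of the idea in Python. The frame size, the XOR-delta scheme, and zlib as the lossless codec are all my assumptions for illustration, not anything the scope actually does:

    # Minimal sketch: XOR each frame against the previous one, then losslessly
    # compress the delta with zlib. Frame dimensions are assumed, not the
    # scope's actual format.
    import zlib
    import numpy as np

    WIDTH, HEIGHT = 800, 1024  # assumed display grid of intensity values

    def compress_delta(prev: np.ndarray, curr: np.ndarray) -> bytes:
        """Losslessly encode one frame as a compressed XOR against the last."""
        delta = np.bitwise_xor(prev, curr)  # mostly zeros when the display is stable
        return zlib.compress(delta.tobytes(), level=6)

    rng = np.random.default_rng(0)
    stable = rng.integers(0, 256, (HEIGHT, WIDTH), dtype=np.uint8)
    nearly_same = stable.copy()
    nearly_same[:, :8] ^= 0xFF  # a small transient in a few columns

    noise = rng.integers(0, 256, (HEIGHT, WIDTH), dtype=np.uint8)
    print(len(compress_delta(stable, nearly_same)))  # tiny: the delta is mostly zero
    print(len(compress_delta(stable, noise)))        # noise: roughly the raw frame size

A stable display with a small transient compresses by orders of magnitude; pure noise doesn't compress at all, which is exactly the case where accurate rendering stops mattering.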
Remember, too, that the vertical resolution is constrained by the sample resolution. If you're talking about 10 bits of resolution then
at most you're talking about 1024 values worth of vertical resolution. Oh, and the values themselves are
not RGB values. They're just intensity values.
The worst possible case here is a very wide window (e.g., 3840 pixels) being used to display noise that fills all of the vertical values. But for a 10 bit sample resolution, that's 3840 x 1024 x 8 bits per frame, or 31 megabits per frame, uncompressed. At 24 frames per second that's 754 megabits per second. If you could cut that in half somehow then you could just squeak it under the USB 2 bandwidth limit. Or you could drop to 12 frames per second for random noise, or something.
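Here's that same arithmetic worked out in Python, against the nominal 480 Mbit/s USB 2 signalling rate:

    # Worst-case bandwidth arithmetic from the paragraph above.
    width, height = 3840, 1024  # widest plausible window x 10-bit value range
    bits_per_cell = 8           # intensity (not RGB) per (x, y) cell

    bits_per_frame = width * height * bits_per_cell
    print(bits_per_frame / 1e6)  # ~31.5 megabits per frame

    for fps in (24, 12):
        print(fps, bits_per_frame * fps / 1e6)  # 24 fps -> ~755 Mbit/s, 12 -> ~377

    USB2_MBPS = 480
    print(bits_per_frame * 24 / 2 / 1e6 < USB2_MBPS)  # 2:1 compression fits: True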
The second issue is that compression needs additional performance and resources on both the PC and the FPGA.
Of course. But as I said, the chipsets for this type of work are readily available and, as far as I know, relatively inexpensive (though I haven't done the research on that, and in any case, what we're talking about might well be a custom implementation anyway).
Since the stream is pretty fast, real-time compression can eat a significant amount of CPU time. So whether a compressed stream or a raw uncompressed stream is better is not an obvious question.
You either pay for it with a higher bandwidth link or you pay for it with compression hardware. There ain't no such thing as a free lunch. But remember that the original claim is that this kind of display
can't be done over USB 2, and that's clearly not true.
In some cases the compressed stream can take even more bandwidth than a raw stream with no compression.
That's true, but those are pathological cases, for which you'd want to do something else anyway.
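The pathological case is easy to demonstrate: feed a lossless compressor incompressible data and the output comes out slightly larger than the input, because of header and framing overhead:

    # Lossless compression of random (incompressible) data expands it slightly.
    import os
    import zlib

    raw = os.urandom(1 << 20)            # 1 MiB of random bytes
    packed = zlib.compress(raw, level=9)
    print(len(raw), len(packed))         # packed is a little larger than raw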
And you cannot use something like lossy MP4 compression with a good ratio here, because an artifact can be interpreted as a signal glitch, which is not acceptable.
Perhaps. Perhaps not. It depends entirely on the nature of the artifacts and how they show up in the first place. I don't know enough about how the compression works to make any kind of reasonable guess as to how well it'll work, but I fully expect that it involves frame difference encoding among other things. I
can say that in videos of people using their scopes, I've not seen anything to make me believe that it'll be a real problem.
But it should be easy enough to test this. Take a video of your scope's screen, then compare the resulting video against what you originally saw on the scope, and see if there are any obvious artifacts that would somehow cause you to change your assessment of what the scope was telling you.
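If you can grab both a raw screenshot and the matching frame decoded from the compressed video, a per-pixel diff makes the comparison objective. This is only a sketch; the file names and the error threshold are hypothetical:

    # Hypothetical artifact check: diff a raw capture against a decoded video frame.
    import numpy as np
    from PIL import Image

    reference = np.asarray(Image.open("scope_raw.png").convert("L"), dtype=np.int16)
    decoded = np.asarray(Image.open("scope_video_frame.png").convert("L"), dtype=np.int16)

    diff = np.abs(reference - decoded)
    print("max pixel error:", diff.max())
    print("pixels off by >4 levels:", int((diff > 4).sum()))  # arbitrary threshold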
Also, compression needs a lot of additional resources on the FPGA, so it will cost more money.
Of course. There ain't no such thing as a free lunch.
25 fps can be acceptable, but that update rate is too low to get a "live" effect.
I certainly agree that 60 FPS would be better, but if a tradeoff has to be made, a drop in FPS is the logical first step.
The real question is why you want a higher frame rate in the first place. This is a test instrument, not a video game. Interactive responsiveness is likely to be limited more by the other processing that happens within the scope than by the frame rate. And one can easily argue that a lower frame rate is
better for seeing transient glitches because it allows a glitch to remain on screen for longer.