Without turning this into an endless discussion: the killer reason GPUs are a bad choice for scopes is that "phosphor" simulation ("DPO" etc.) needs fast, low-latency, random-access memory. GPUs are typically optimized for very different workloads; a possible exception is ESRAM GPUs, but other than the Xbox One, I'm not aware of a GPU that offers this. Unlike in most 3D rendering workloads, cache memory is not going to help here, because the accesses are effectively random.
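To make the access pattern concrete, here's a minimal sketch of intensity-graded persistence (all names and the buffer size are made up for illustration): each acquired waveform is scattered into a 2D "phosphor" buffer that slowly decays. Consecutive samples land on unpredictable rows, so the writes don't coalesce or cache well:

```python
import numpy as np

# Hypothetical display buffer; real scopes use similar intensity grids.
W, H = 1024, 512
phosphor = np.zeros((H, W), dtype=np.float32)

def draw_waveform(samples, decay=0.98):
    """Accumulate one acquired waveform into the phosphor buffer.

    Each sample maps to an (x, y) pixel. x advances linearly, but y
    jumps around with the signal, so the writes are effectively
    random-access across rows - the pattern that defeats caches.
    """
    phosphor[:] *= decay                      # exponential fade of old traces
    x = np.arange(len(samples)) % W           # horizontal sweep position
    y = np.clip((samples * (H / 2) + H / 2).astype(int), 0, H - 1)
    np.add.at(phosphor, (y, x), 1.0)          # scattered read-modify-writes
```

On real hardware this inner accumulate runs once per trigger, potentially hundreds of thousands of times per second, which is why the memory latency (not raw bandwidth) dominates.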
The other issue is the data rate: you'd need at least something like PCIe Gen3 x2 for even a single 1 GS/s channel (and you'd STILL need glue logic to interface the ADC to whatever high-speed interface the SoC has). Almost no low-end SoCs offer PCIe at that bandwidth. The GPU isn't going to help much with streaming data either, so about the only operation that could really be CUDA-enabled is the FFT - and it seems a cache-optimized FFT algorithm on a CPU will beat most GPUs there anyway.
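The back-of-envelope math behind the "Gen3 x2" figure, assuming an 8-bit ADC packed one byte per sample and the usual ~985 MB/s usable per Gen3 lane after 128b/130b encoding (protocol overhead would eat into this further):

```python
import math

SAMPLE_RATE = 1e9          # 1 GS/s, one channel
BITS_PER_SAMPLE = 8        # assumed 8-bit ADC, one byte per sample
data_rate = SAMPLE_RATE * BITS_PER_SAMPLE / 8   # = 1.0 GB/s sustained

GEN3_LANE_BYTES = 984.6e6  # ~985 MB/s usable per PCIe Gen3 lane (128b/130b)
lanes = math.ceil(data_rate / GEN3_LANE_BYTES)
print(lanes)               # -> 2
```

With a 10- or 12-bit ADC, or a second channel, the required lane count only goes up from there.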
In short, none of the perf-critical work on scopes really maps well to GPUs.