The FPGA/ASIC doesn't (and shouldn't) draw the actual image displayed on the screen. It "plots" traces by taking group min/max: for example, the min/max of samples 0-1023, of samples 1024-2047, of 2048-3071, and so on. In my implementation the FPGA generated a series of these mipmaps, e.g. a /16 mipmap, a /64 mipmap, a /256 mipmap, etc. The software doing the drawing (in my case JavaScript in a browser) requests the least detailed mipmap that is still at least as detailed as the current zoom level. In my setup the server-side software downsamples from the selected mipmap level to the actual screen width, in order to conserve network bandwidth.

In a proper oscilloscope implementation the mipmap generation adds very little memory bandwidth, because it operates directly on incoming data, NOT on data read back from DRAM the way an accelerator would (which is why I dismissed the accelerator approach). The implementation should still allow reading data from DRAM, though, for example in FFT mode (assuming you have large >=1M-point FFT support at all, which most scopes on the market don't).
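To make the generation step concrete, here is a minimal TypeScript sketch of the same min/max folding the FPGA performs on the incoming sample stream. All names (MipmapBuilder, push, onSamples) are hypothetical, and the /16, /64, /256 factors just mirror the example ratios above:

```typescript
interface MinMaxPair {
  min: number;
  max: number;
}

// One builder per mipmap level. In hardware each level is just a pair
// of comparators and a counter fed by the incoming sample stream; no
// DRAM reads are involved.
class MipmapBuilder {
  private readonly pairs: MinMaxPair[] = [];
  private current: MinMaxPair | null = null;
  private count = 0;

  constructor(readonly factor: number) {}

  // Fold one incoming sample into the open bucket; emit one
  // min/max pair every `factor` samples.
  push(sample: number): void {
    if (this.current === null) {
      this.current = { min: sample, max: sample };
    } else {
      if (sample < this.current.min) this.current.min = sample;
      if (sample > this.current.max) this.current.max = sample;
    }
    if (++this.count === this.factor) {
      this.pairs.push(this.current);
      this.current = null;
      this.count = 0;
    }
  }

  get data(): readonly MinMaxPair[] {
    return this.pairs;
  }
}

// A series of levels, e.g. /16, /64, /256 as in the text above.
const levels = [16, 64, 256].map((f) => new MipmapBuilder(f));

function onSamples(block: Float32Array): void {
  for (const s of block) {
    for (const level of levels) level.push(s);
  }
}
```

A real design would cascade the levels (build each /64 pair from four /16 pairs by taking the min of the mins and the max of the maxes) rather than comparing every raw sample at every level; the flat version above just keeps the sketch short.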
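On the drawing side, level selection plus the server-side reduction to screen width look roughly like this (reusing MinMaxPair from the sketch above; selectFactor and downsampleToScreen are hypothetical names, not from any real scope software):

```typescript
// Pick the coarsest level that is still at least as detailed as the
// view, i.e. the largest decimation factor not exceeding the number of
// raw samples per pixel. `factors` is sorted ascending, e.g. [16, 64, 256].
function selectFactor(
  samplesInView: number,
  screenWidth: number,
  factors: number[],
): number {
  const samplesPerPixel = samplesInView / screenWidth;
  let chosen = 1; // fall back to raw samples when zoomed far in
  for (const f of factors) {
    if (f <= samplesPerPixel) chosen = f;
  }
  return chosen;
}

// Reduce the selected level's pairs to exactly one min/max pair per
// pixel column: the server-side downsampling step that keeps the
// payload proportional to screen width, not record length.
function downsampleToScreen(
  pairs: readonly MinMaxPair[],
  screenWidth: number,
): MinMaxPair[] {
  const out: MinMaxPair[] = [];
  if (pairs.length === 0) return out;
  for (let x = 0; x < screenWidth; x++) {
    const start = Math.floor((x * pairs.length) / screenWidth);
    const end = Math.max(
      start + 1,
      Math.floor(((x + 1) * pairs.length) / screenWidth),
    );
    let mn = Infinity;
    let mx = -Infinity;
    for (let i = start; i < end; i++) {
      if (pairs[i].min < mn) mn = pairs[i].min;
      if (pairs[i].max > mx) mx = pairs[i].max;
    }
    out.push({ min: mn, max: mx });
  }
  return out;
}
```

For example, a 10M-sample view on a 1000-pixel-wide plot works out to 10,000 samples per pixel, so selectFactor picks the /256 level, and downsampleToScreen folds its ~39k pairs down to 1000 before anything crosses the network.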