Electronics > Projects, Designs, and Technical Stuff
Need to build On-screen display for VGA
Scrts:
Look into TechWell (now Renesas?) chips. They make video processors that can do video overlay, either graphics or text. I had a TW8836 evaluation kit and it worked well. See here: https://www.renesas.com/us/en/www/doc/datasheet/tw8836.pdf
The TW8844 is widely used in vehicles for overlaying text and graphics.
Yansi:
--- Quote from: Berni on February 05, 2019, 06:41:09 pm ---Yes it can, but might require some clever coding to meet the timing requirements at 1080p.
Hardware timers can be triggered from the Hsync and Vsync signals to provide an accurately timed interrupt at the moment the correct location is being drawn to the screen. That interrupt can then kick off a DMA transfer that shovels a large RAM array out through a GPIO port, where you connect your MUX signal and OSD color signals: perhaps using 8 bits on the GPIO port as a 1-bit MUX signal, 2-bit red, 3-bit green and 2-bit blue. The DMA speed can be tuned to make one transfer for every 4 pixels drawn, producing fat, wide pixels, and the DMA can be retriggered to pump out the same array 4 times in a row to make 4x4-sized OSD pixels on the screen.
While the DMA is pumping out pixels, the CPU has four video lines' worth of time to fill another array in RAM with the data for the next set of lines. Once those four lines are done, the DMA's source address is switched to the new array to pump out the new pixels while the CPU starts preparing the next one. This repeats until the whole video frame is over; once vertical blanking starts, the CPU has a few hundred microseconds to do anything else it needs, like user interface stuff, and once a new video frame starts being drawn it all repeats again.
How much resolution and color depth you can afford depends on how well the code is optimized and how fast the CPU is. Maybe you need huge 16x16 sized pixels to have enough CPU time to do it. Or maybe you optimize the hell out of it and use the SPI bus running at 130MHz to get high speed output and get 1x1 sized pixels but only in a single color. Depends on what you optimize for.
Another trick that might be possible is to get a very powerful MCU that is capable of outputting 1080p video on a 24-bit RGB bus. This bus is designed to connect an LCD to it, but with some hardware-interrupt and register-twiddling trickery you can perhaps synchronize the start of frame on this video output bus with the start of frame on VGA. In that case you can overlay a high-resolution 24-bit-color 1920x1080 picture over the original video by simply muxing between the video coming from the MCU and from VGA, using one of the color bits as the "transparency flag" to switch the mux at the right moment. Though chips that can output 1080p video tend to also run Linux.
--- End quote ---
Forget any SW solution at 1080p pixel clock speeds. >:(
You need dedicated HW for that; at 1080p data bandwidths, no SW hack will produce any good results.
Berni:
Well, with a software MCU solution the OSD would obviously be of lower resolution than the 1080p video it's overlaid on. But it is perfectly possible, with some skillful C programming and a good bit of time studying the MCU's documentation, to abuse the peripherals in the right way. People are doing similar stuff on 8-bit MCUs while also generating sound through PWM at the same time (though in those cases it's all hand-optimized assembler with instruction cycle counting).
But yeah, if you want to overlay a full-color 1920x1080 OSD image onto 1080p 60 Hz video, it's easiest to just bite the bullet and use an FPGA to do it properly.
As for genlocking the video output of a Raspberry Pi, I think it would be very difficult, because the video output comes from the internal GPU, which runs closed-source firmware and drivers. There are some work-in-progress open-source drivers for the GPU, but they are quite complex and I'm not sure they go low-level enough to let you trick the GPU into restarting a frame on command. Getting that sort of timing accuracy under Linux can also be rather tricky, and timing in general is not very deterministic on these big-boy ARM chips: there are lots of buses, memory controllers, caches, pipelining, etc. that cause execution speed and interrupt latency to jitter around. Smaller ARM chips execute directly from internal flash/RAM, don't pipeline as deeply, and as such can provide much more tightly controlled execution timing (in a lot of cases cycle-accurate). To pull off this sort of trickery you likely need cycle-accurate execution timing, or your image would shake all over the place.
GeorgeOfTheJungle:
1080p @60Hz: HSYNC frequency is ~ 1080*60 = 65 kHz, pixel clock ~ 1080*60*1920 = 125 MHz (active area only, ignoring blanking); 1080p @30Hz is half that: 32.5 kHz and 62.5 MHz.
All you need is one parallel-in, serial-out shift register that can run at those pixel clock frequencies (*). Or two...
Any µC can count HSYNCs @ 65 kHz, and load the shift register(s) between lines.
(*) Edit: not really, the OSD can run at its own pixel rate.
Yansi:
Shift registers = dedicated hardware.
I stand behind what I stated: forget a SW solution. Not gonna happen at these speeds; this ain't no PAL. Even the slightest timing jitter will be clearly visible, likely rendering the result not very appealing.
Of course one can argue with "clever programming", clever this or that. But dedicated HW will probably be much less complex as a solution and will give you much more flexibility.
Even though I am not any kind of FPGA expert and tend to stick to MCUs as much as possible, in this case I clearly recommend using a cheap FPGA, or even a larger CPLD, to help with the job.
There are tons of cheap FPGAs available that would likely fit the job with less hassle and less money spent than an MCU with "just the right clever peripheral abused" for the job.