Last night I extended the project to do Sobel-like edge detection:
The threshold is a little bit high at the moment so I'll fix that and get rid of the artefacts at the edges of the screen...
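For anyone who wants to play along in software first, this is roughly what a Sobel-like pass with a threshold boils down to: a minimal C sketch assuming an 8-bit greyscale frame already in memory; the names, the |Gx|+|Gy| magnitude approximation and the threshold handling are mine, not taken from the actual project.

#include <stdint.h>
#include <stdlib.h>

void sobel_threshold(const uint8_t *src, uint8_t *dst,
                     int width, int height, int threshold)
{
    /* The one-pixel border is simply skipped here; handling it properly
       is what the rest of the discussion below is about. */
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            const uint8_t *p = src + y * width + x;
            int gx = -p[-width - 1] + p[-width + 1]       /* horizontal gradient */
                     - 2 * p[-1]     + 2 * p[1]
                     - p[ width - 1] + p[ width + 1];
            int gy = -p[-width - 1] - 2 * p[-width] - p[-width + 1]   /* vertical gradient */
                     + p[ width - 1] + 2 * p[ width] + p[ width + 1];
            int mag = abs(gx) + abs(gy);                  /* cheap magnitude approximation */
            dst[y * width + x] = (mag > threshold) ? 255 : 0;
        }
    }
}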
Here is the proper way to get rid of the screen-edge line artifacts in enhancement, sharpening and blur filters. Pay attention, because turning down the enhancement level isn't a good fix if you need high levels across the whole image...
A simplified illustration of a 3x3 convolution matrix, handled one line at a time. You will need to expand this along the X and Y axes as needed, and include the Z axis if you are doing time-domain (Z) filters like motion blur... (A small C sketch of the same window follows the diagram.)
x1y1 | x2y1 | x3y1
-----+------+-----
x1y2 | x2y2 | x3y2
-----+------+-----
x1y3 | x2y3 | x3y3
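If it helps to see that diagram as code, here is one way to name the taps (purely a naming sketch of mine; in hardware these would just be nine registers):

#include <stdint.h>

typedef struct {
    uint8_t x1y1, x2y1, x3y1;   /* line above the centre */
    uint8_t x1y2, x2y2, x3y2;   /* centre line           */
    uint8_t x1y3, x2y3, x3y3;   /* line below the centre */
} window3x3;

/* convolution output for the centre tap x2y2, given a 3x3 kernel k */
static int apply_kernel(const window3x3 *w, const int k[9])
{
    return k[0] * w->x1y1 + k[1] * w->x2y1 + k[2] * w->x3y1
         + k[3] * w->x1y2 + k[4] * w->x2y2 + k[5] * w->x3y2
         + k[6] * w->x1y3 + k[7] * w->x2y3 + k[8] * w->x3y3;
}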
As the video comes in, once the active-video flag is asserted: for the first pixel, make x1, x2 and x3 all equal to pixel 1. For each pixel after that, x1 takes the new pixel while x2 takes the previous x1 and x3 takes the previous x2. So when a new scan line begins, all the pixels under the filter coefficients are equal, which means zero change, which means zero artifact. After the end of active video on the line, hold x1 at the previous pixel instead of shifting in anything new. This shift-and-roll has the same effect as making the pixels just off the sides of the image equal to the pixels at the edge, which erases the edge-detect artifact there.
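In C terms (just a software sketch of that shift/roll idea with made-up names, not the actual HDL), the horizontal part looks something like this:

#include <stdint.h>

/* x1 is the newest incoming pixel, x2 the centre tap, x3 the oldest.
   The output for the centre pixel lags the input by one pixel. */
static void filter_line_x(const uint8_t *line, int width,
                          void (*emit)(uint8_t left, uint8_t centre, uint8_t right))
{
    /* active video starts: load all three taps with the first pixel */
    uint8_t x1 = line[0], x2 = line[0], x3 = line[0];

    for (int i = 1; i < width; i++) {
        x3 = x2;            /* previous x2 rolls into x3 */
        x2 = x1;            /* previous x1 rolls into x2 */
        x1 = line[i];       /* new pixel enters at x1    */
        emit(x3, x2, x1);   /* output for the pixel now sitting in x2 */
    }

    /* end of active video: hold x1 at the last pixel and shift once more,
       which pads the right-hand edge with its own value */
    x3 = x2;
    x2 = x1;
    emit(x3, x2, x1);
}

In hardware the three taps would just be pixel-clocked registers; the only extra logic is the load-all-taps condition at the start of active video and the hold-x1 condition after it ends.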
Follow the same procedure on the Y (line) axis. When the first line of video comes in and feeds y1, use that y1 as your y2 and y3 as well. On the next line, your y2 buffer has now been filled: use the new y1, use the buffered y2, and for y3 use that same buffered y2 again. From the line after that, the incoming y1, the buffered y2 and the buffered y3 are all valid. At the bottom of the picture, on the line where y1 would come in empty because there is no more picture, use the buffered y2 data as your y1; the buffered y2 is still used as y2, and y3 is still used and still gets its data from y2... etc. You should have the idea.
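And the vertical half in the same style, again only a software sketch with my own names; in a real design y2 and y3 would be line buffers and you would rotate pointers rather than copy:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void filter_frame_y(const uint8_t *frame, int width, int height,
                           void (*process_row)(const uint8_t *above,
                                               const uint8_t *centre,
                                               const uint8_t *below,
                                               int width))
{
    uint8_t *y2 = malloc(width);    /* buffer for the centre line   */
    uint8_t *y3 = malloc(width);    /* buffer for the line above it */
    if (!y2 || !y3) { free(y2); free(y3); return; }

    /* first line: load both buffers with it, so y1 == y2 == y3 and the
       top edge is replicated instead of producing an artifact */
    memcpy(y2, frame, width);
    memcpy(y3, frame, width);

    for (int y = 1; y < height; y++) {
        const uint8_t *y1 = frame + y * width;   /* newly arriving line       */
        process_row(y3, y2, y1, width);          /* output for the line in y2 */
        memcpy(y3, y2, width);                   /* roll the line buffers     */
        memcpy(y2, y1, width);
    }

    /* bottom of the picture: no new line arrives, so the last real line
       stands in for the missing one below it */
    process_row(y3, y2, y2, width);

    free(y2);
    free(y3);
}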
You are essentially padding the picture where there is no picture data, and the padding turns out to be the value of the last pixel at the edge of the image (i.e. replicate / clamp-to-edge padding). You just need to take care handling the Y axis, since there the padding data is held in a rolling line buffer.
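Put another way, the streaming tricks above end up computing the same thing as plain clamp-to-edge (replicate) padding done frame-at-a-time in software. This equivalent version is just for comparison, with a hypothetical kernel argument:

#include <stdint.h>

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* 3x3 convolution with replicate padding: any neighbour index that falls
   outside the picture is clamped back to the nearest edge pixel. A blur
   kernel would also need dividing by the sum of its coefficients before
   the final clamp. */
void convolve3x3_replicate(const uint8_t *src, uint8_t *dst,
                           int width, int height, const int k[9])
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int sy = clampi(y + dy, 0, height - 1);
                    int sx = clampi(x + dx, 0, width - 1);
                    acc += k[(dy + 1) * 3 + (dx + 1)] * src[sy * width + sx];
                }
            }
            dst[y * width + x] = (uint8_t)clampi(acc, 0, 255);
        }
    }
}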
BTW, great job at decoding & encoding the HDMI bitstream.
