Just because technology can do something doesn't mean it's always right
BrianHG:
--- Quote from: tom66 on June 20, 2022, 09:29:21 pm ---
--- Quote from: bd139 on June 20, 2022, 09:22:54 pm ---It's more complicated than test patterns. The representation of text and how the eye perceives it is what you are paying for.
Compare an inkjet printout to a decent laser printer printout and you'll see what I mean.
--- End quote ---
I disagree. If you can't see the pixels then fundamentally it doesn't matter what your brain does with the information, everything beyond that is interpolation on data that is there at any resolution beyond the maximum fidelity of your optic system.
--- End quote ---
This is only true if the so-called pixels aren't fixed squares, like those on monitors.
All you need is a 1.5-2.5-pixel-thick line at an off-axis angle, like 5 degrees or 85 degrees (common in fonts), viewed on a 4K screen, then jumping to an 8K screen where those angled lines are now constructed from 3-5 pixels of width, to see easily how much more comfortable it is on the eye, even if you can't see the individual pixels which make up the edges of that line.
This comes down to those who have used such hi-res monitors to do actual work (video doesn't count) and those who haven't. Text reads far better when it is constructed from pixels at least 4x smaller than what your eye can perceive from a pattern of lines, 1 pixel on, 1 pixel off.
Our eyes are analog. We need an oversampled source to achieve comfort with varying thicknesses combined with angled drawings where sharp contrast exists.
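The staircase effect described above can be sketched numerically. This is a rough model, assuming ideal square pixels and the 5-degree example angle; the column counts are illustrative, not measurements:

```python
import math

def edge_rows(angle_deg, n_cols):
    """Row index of a line's edge at each column when the line is snapped
    to a square pixel grid (ideal, unfiltered rasterization)."""
    slope = math.tan(math.radians(angle_deg))
    return [int(col * slope) for col in range(n_cols)]

# A 5-degree line across 100 columns of a 4K-class grid...
rows_4k = edge_rows(5, 100)
# ...and the same physical span on an 8K grid (twice the columns).
rows_8k = edge_rows(5, 200)

# Count the visible stair steps along the edge in each case.
steps_4k = sum(rows_4k[i] != rows_4k[i - 1] for i in range(1, len(rows_4k)))
steps_8k = sum(rows_8k[i] != rows_8k[i - 1] for i in range(1, len(rows_8k)))
# The 8K edge has about twice as many steps over the same physical span,
# and each step is half the physical height, so the staircase is finer.
```

The step count roughly doubles while each step shrinks to half the physical size, which is why the edge looks smoother even when individual pixels are invisible.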
SiliconWizard:
Yep.
Vision (and our other senses) is more complicated than just seeing individual "pixels" or "units" of images.
In the same vein, while our hearing can't differentiate two frequencies that are too close to one another when they are presented separately, if we simply mix them we hear a beating, so we can tell there isn't just a single frequency. Similar things happen with vision: through a lot of "parallel processing", our nervous system can discriminate several simultaneous stimuli as "contrast" much finer than it can single stimuli.
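The beating effect is easy to check with a quick numerical sketch (the sample rate and tone frequencies here are made-up illustrative values):

```python
import math

fs = 8000                 # sample rate in Hz (illustrative)
f1, f2 = 440.0, 444.0     # two tones only 4 Hz apart

# sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2*cos(pi*(f2-f1)*t) * sin(pi*(f1+f2)*t):
# a 442 Hz carrier whose amplitude envelope beats at |f2 - f1| = 4 Hz.
mix = [math.sin(2 * math.pi * f1 * n / fs) + math.sin(2 * math.pi * f2 * n / fs)
       for n in range(fs)]  # one second of samples

# Constructive interference nearly doubles the peak amplitude; the slow
# 4 Hz rise and fall is the audible beat, perceptible even when neither
# tone is distinguishable from the other on its own.
peak = max(abs(s) for s in mix)
```

The peak approaches 2.0 (two unit sines in phase), while at the envelope nulls the tones cancel, which is exactly the contrast-versus-single-stimulus point above.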
Now, while this is a general consideration, no two people have the same eyesight, whether from the optics of the eye itself, the retina, or the cerebral structures behind them. So I'm pretty sure that while some people can definitely tell a Full HD screen from a 4K one at a typical TV size and, say, 3-4 meters, or even 4K from 8K, others may not even be able to see a difference between SD and Full HD. So never assume that your particular experience here can be generalized.
bd139:
--- Quote from: SiliconWizard on June 20, 2022, 10:05:06 pm --- So never assume that your particular experience here can be generalized.
--- End quote ---
Exactly, so you have to build for the best or the worst case depending on your user, which 8K covers, and which is why it exists. I'm sure 16K will have some value on larger screens as well.
tom66:
If that were true, it could be trivially improved by antialiasing, which is further improved by subpixel antialiasing, a function of most modern text rendering libraries.
I however remain unconvinced. My hypothesis is: if the test patterns above appear grey, or nearly grey with only mild high-frequency content, you have effectively found the cutoff frequency of your optic system's low-pass response. You will not be able to perceive much more than this, and there is no further benefit to additional pixels. You can improve the appearance of the pixel grid by antialiasing, to avoid the sharp cutoff effect, but that merely redistributes the energy in a band-limited system.
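The grey-test-pattern criterion amounts to low-pass filtering; here is a toy box-filter sketch of it (the grating length and window sizes are illustrative assumptions):

```python
# A 1-on/1-off grating, box-filtered at increasing scales: the moment the
# filter window spans a full period, the pattern collapses to uniform
# mid-grey -- the "appears grey" criterion described above.
pattern = [255, 0] * 64   # 128-pixel alternating test grating

def box_filter(pixels, window):
    """Average non-overlapping windows: a crude model of an optic system
    that can no longer resolve detail finer than `window` pixels."""
    return [sum(pixels[i:i + window]) / window
            for i in range(0, len(pixels), window)]

resolvable = box_filter(pattern, 1)   # full 0..255 contrast survives
blurred = box_filter(pattern, 2)      # every output value is 127.5: flat grey
```

Once every output sample is the same mid-grey, no finer pixel grid can add perceivable information, which is the hypothesis in a nutshell.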
I remember having a debate along these lines with a friend of mine when working on an oscilloscope project. It was based on whether antialiasing (display) would benefit the appearance of an intensity-graded oscilloscope. I contended that you would not want to antialias the vectors that form the waveform, as the ultimate density information comes from the histogram of display data. In fact, antialiasing would probably worsen the image somewhat, as it would spread the information out further from the ideal distribution, which is an infinitely sharp band-pass filter for each column of data.
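The density argument can be sketched as a minimal per-column hit histogram (the grid size, sample count, and waveform are made up for illustration):

```python
import math

# Intensity-graded display modeled as a 2-D hit histogram: each waveform
# sample increments exactly one (row, column) bin, so per-column density
# is preserved -- antialiasing the vectors would smear these counts
# across neighboring bins instead.
W, H = 64, 32
hist = [[0] * W for _ in range(H)]

N = 10_000
for n in range(N):
    t = n / N
    x = int(t * W) % W
    y = int((0.45 * math.sin(2 * math.pi * 4 * t) + 0.5) * H)  # map to rows
    hist[y][x] += 1

# Every sample lands in exactly one bin; no "energy" has leaked out of
# the histogram, so the grading faithfully reflects sample density.
total = sum(map(sum, hist))
```

With plain binning, each column's counts sum to the number of samples that fell in it; an antialiased renderer would split those counts between adjacent columns and rows, diluting exactly the density information the grading is meant to show.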
And while the eye is not a pixel camera, the density of retinal cells in the centre FOV is remarkably consistent, measurable as an angle of arc, and while not gridlike, it has a relatively consistent arrangement. The notable exception is for chrominance, as we know that cone cells are distributed more chaotically. (But that's why we use chroma subsampling, and whether people can reliably tell 4:2:2 and 4:4:4 apart for photographic tests is another interesting debate...)
bd139:
Anti-aliasing is only a perception hack for low-resolution displays. Compare with a laser-printed page: 1200 dpi text is not anti-aliased.
Eventually the objective is to build displays where anti-aliasing is unnecessary.
Edit: technically anti-aliasing allows for more precise perceived positioning of glyphs in relation to each other and the representation of curves because it's quite jarring when they are discretised to pixel boundaries. If you throw enough pixels out there the problem goes away. Typography is a somewhat complex area to move into here as well. If you're looking at a photo, things are different!
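The sub-pixel positioning point can be illustrated with a simple coverage computation (the one-pixel "stem" model and the positions 3.3 and 3.7 are illustrative assumptions):

```python
def stem_coverage(x, width=1.0, n_cols=8):
    """Ink coverage per pixel column of a vertical stem spanning [x, x+width).
    With antialiasing, the fractional overlaps become grey levels; without
    it, both stems below would snap to the same single full-black column."""
    cols = []
    for c in range(n_cols):
        overlap = min(x + width, c + 1) - max(x, c)
        cols.append(max(0.0, overlap))
    return cols

a = stem_coverage(3.3)   # roughly [0, 0, 0, 0.7, 0.3, 0, 0, 0]
b = stem_coverage(3.7)   # roughly [0, 0, 0, 0.3, 0.7, 0, 0, 0]
# The two renderings differ, so the eye can recover the stem's sub-pixel
# position from the grey levels -- the "more precise perceived positioning"
# mentioned above. With enough pixels, the quantization error shrinks and
# this trick is no longer needed.
```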