If that were true, it could be trivially improved by antialiasing, and further improved by subpixel antialiasing, a feature of most modern text-rendering libraries.
I, however, remain unconvinced. My hypothesis is this: if the test patterns above appear grey, or nearly grey with only mild high-frequency content, you have effectively found the cutoff frequency of your visual system's low-pass filter. You will not be able to perceive much more detail than this, and there is no further benefit to additional pixels. You can improve the appearance of the pixel grid by antialiasing, to avoid the sharp cutoff effect, but that merely redistributes the energy in a bandlimited system.
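To make that test concrete, here is a minimal sketch (the image sizes and grating periods are my own choices, not taken from the patterns above) that generates square-wave gratings as plain PGM files. Every grating has the same mean level, so once the grating period falls below your eye's cutoff, each should fuse into the same uniform grey:

```python
# Generate square-wave gratings as plain-text PGM images. Viewed from
# far enough away that the period exceeds your acuity limit, each
# pattern should appear as a uniform grey of the same mean level.

def grating(width, height, period):
    """Square-wave grating: `period` px black, then `period` px white."""
    return [[255 if (x // period) % 2 else 0 for x in range(width)]
            for _ in range(height)]

def write_pgm(path, pixels):
    """Write a 2-D list of 0..255 values as a plain (P2) graymap."""
    h, w = len(pixels), len(pixels[0])
    with open(path, "w") as f:
        f.write(f"P2\n{w} {h}\n255\n")
        for row in pixels:
            f.write(" ".join(map(str, row)) + "\n")

if __name__ == "__main__":
    for period in (1, 2, 4):  # finer periods fuse to grey sooner
        px = grating(256, 256, period)
        mean = sum(map(sum, px)) / (256 * 256)
        print(period, mean)  # every pattern averages exactly 127.5
        write_pgm(f"grating_{period}.pgm", px)
```

The point of keeping the mean constant across periods is that any perceived difference between the patterns is carried entirely by the high-frequency content, which is exactly what the eye's low-pass behaviour removes.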
I remember having a debate along these lines with a friend of mine while working on an oscilloscope project. The question was whether display-side antialiasing would improve the appearance of an intensity-graded oscilloscope. I contended that you would not want to antialias the vectors that form the waveform, since the density information ultimately comes from the histogram of display data. In fact, antialiasing would probably worsen the image somewhat, as it would spread the information further from the ideal distribution, which is an infinitely sharp band-pass filter for each column of data.
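A toy version of that histogram rendering, with a hypothetical trace count and grid size of my own choosing, might look like the following. Each sample is binned into exactly one pixel, so every column's energy stays concentrated in the bins the signal actually visited, rather than being smeared across neighbouring rows by an antialiasing filter:

```python
import math

def accumulate(n_traces=200, width=64, height=32):
    """Intensity-graded accumulation without antialiasing: each sample
    increments exactly one (row, column) bin, so the per-column density
    histogram stays as sharp as the underlying signal distribution."""
    hist = [[0] * width for _ in range(height)]
    for t in range(n_traces):
        phase = 0.05 * t  # hypothetical trigger jitter between traces
        for x in range(width):
            v = math.sin(2 * math.pi * x / width + phase)  # -1..1
            row = min(height - 1, int((v + 1) / 2 * height))
            hist[row][x] += 1  # one hit, one bin: no energy spread
    return hist

if __name__ == "__main__":
    h = accumulate()
    # every column received exactly one hit per trace
    col_sums = [sum(h[r][x] for r in range(32)) for x in range(64)]
    print(col_sums[0], col_sums[63])  # prints: 200 200
```

Antialiasing the vectors would amount to splitting each increment fractionally across adjacent rows, which widens each column's histogram relative to the true sample distribution.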
And while the eye is not a pixel camera, the density of retinal cells in the central field of view is remarkably consistent, measurable as an angle of arc, and while not gridlike, their arrangement is relatively regular. The notable exception is chrominance: we know that cone cells are distributed more chaotically. (But that's why we use chroma subsampling, and whether people can reliably tell 4:2:2 and 4:4:4 apart in photographic tests is another interesting debate...)
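As an aside, that chroma-subsampling trade is easy to sketch (the function names here are mine): 4:2:2 keeps full luma resolution but halves horizontal chroma resolution, betting that the eye's coarser, more chaotic chrominance sampling won't notice:

```python
def subsample_422(chroma_row):
    """Halve horizontal chroma resolution by averaging adjacent pairs
    (luma would be left at full resolution)."""
    return [(chroma_row[i] + chroma_row[i + 1]) // 2
            for i in range(0, len(chroma_row) - 1, 2)]

def upsample_422(half_row):
    """Nearest-neighbour reconstruction back to full width."""
    return [c for c in half_row for _ in (0, 1)]

if __name__ == "__main__":
    row = [16, 16, 200, 200, 90, 90, 40, 40]
    rebuilt = upsample_422(subsample_422(row))
    print(rebuilt)  # identical here, since chroma varied only in pairs
```

The debate over 4:2:2 versus 4:4:4 is essentially the same band-limit argument as above: if the chroma detail being discarded sits beyond the eye's chrominance cutoff, the round trip is perceptually lossless.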