So with that said, what about the tube design itself: what aspects of the physical construction determine the resolution? Let's take standard NTSC TVs for example. I know that the spec is 525 scan lines, but only about 480 of them are viewable. Does that mean the physical construction of the tube has 480 lines of resolution? Also, what is the determining factor of the horizontal resolution, and how do TV lines relate to this?
Ignoring linearity and sticking with monochrome CRTs, the spot size determines the number of lines of resolution, which is easy enough to test using a raster.
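To put rough numbers on both questions (the figures below are the usual textbook values plus my own assumed spot size, not anything measured in this thread): the 480-odd visible lines come from the signal's vertical blanking, not the tube, and the classic horizontal figure in TV lines falls out of the luma bandwidth and active line time. A quick sketch:

```python
# Back-of-the-envelope NTSC resolution figures.  All values are the
# common textbook numbers; treat the results as approximations.

total_lines = 525             # lines per frame, including vertical blanking
blanking_lines = 2 * 21       # ~21 lines of vertical blanking per field
active_lines = total_lines - blanking_lines           # ~483 "visible" lines

luma_bandwidth = 4.2e6        # Hz, NTSC luminance bandwidth
line_time = 63.556e-6         # s per scan line (1 / ~15734 Hz)
h_blanking = 10.9e-6          # s of horizontal blanking per line
active_line_time = line_time - h_blanking             # ~52.7 us

# TV lines (TVL) are counted per picture *height*, so the cycles-per-line
# figure is divided by the 4:3 aspect ratio; each cycle of bandwidth
# resolves two "lines" (one black plus one white).
aspect = 4 / 3
tvl = 2 * luma_bandwidth * active_line_time / aspect  # ~330 TVL

# The spot-size test mentioned above: lines of resolution is roughly the
# visible raster height divided by the spot diameter.  Assume a 19" 4:3
# tube (~290 mm picture height) and a 0.6 mm spot -- my numbers.
picture_height_mm = 290
spot_mm = 0.6
spot_limited_lines = picture_height_mm / spot_mm      # ~480 lines

print(f"active lines:       {active_lines}")
print(f"horizontal res:     {tvl:.0f} TVL")
print(f"spot-limited lines: {spot_limited_lines:.0f}")
```

So the tube doesn't "have" 480 lines built in; it just needs a spot small enough not to blur 480 signal lines together.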
One more thing: interlaced vs. progressive display. Again, setting aside the chassis, is there anything inherent in the CRT tube itself that prevents progressive display as opposed to interlaced display?
Except for constraints on how quickly the beam can be scanned and controlled, the CRT design has no influence on progressive versus interlaced display formats.
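One way to convince yourself of that: the deflection circuits, and hence the tube, see nearly identical horizontal and vertical frequencies either way. A quick sanity check, assuming the usual NTSC-ish timings (the progressive numbers are the "240p" trick game consoles used, my assumption here):

```python
# Interlaced NTSC vs. progressive "240p" as seen by the deflection yoke.
# Timing values are the common approximate ones.

# Interlaced: 525 lines per frame at 29.97 frames/s (two 262.5-line fields)
h_rate_interlaced = 525 * 29.97       # ~15734 Hz

# Progressive: 262 whole lines per ~60 Hz frame (no half-line offset)
h_rate_progressive = 262 * 60.05      # ~15733 Hz

print(f"interlaced H rate:  {h_rate_interlaced:.0f} Hz")
print(f"progressive H rate: {h_rate_progressive:.0f} Hz")
# The tube can't tell the difference; only the vertical sync timing
# (the half-line offset between fields) distinguishes the two.
```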
It would have to be the ability to focus the beam accurately corner to corner, which becomes more difficult the more the screen deviates from spherical. I suppose that is one reason an oscilloscope screen tends to be long and narrow.
Oscilloscope CRTs are long (for their target area) because deflection sensitivity, which limits bandwidth, is proportional to length.
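The standard electrostatic-deflection relation makes that explicit; the example dimensions below are invented for illustration:

```python
# Electrostatic deflection: spot displacement at the screen is roughly
#   y = (l * L * Vd) / (2 * d * Va)
# where l = deflection-plate length, L = plate-to-screen distance,
# d = plate spacing, Vd = deflection voltage, Va = acceleration voltage.
# Sensitivity (y / Vd) grows with L, so a longer tube needs less plate
# voltage for the same deflection, and shrinks with Va.

def sensitivity_mm_per_volt(plate_len_m, plate_to_screen_m,
                            plate_gap_m, accel_volts):
    """Deflection sensitivity y/Vd in mm per volt (illustrative model)."""
    return (plate_len_m * plate_to_screen_m
            / (2 * plate_gap_m * accel_volts)) * 1000

# Example: 25 mm plates, 4 mm gap, 2 kV acceleration
print(sensitivity_mm_per_volt(0.025, 0.20, 0.004, 2000))  # ~0.31 mm/V
print(sensitivity_mm_per_volt(0.025, 0.35, 0.004, 2000))  # ~0.55 mm/V
```

More millimeters per volt means a smaller plate swing for full-screen deflection, which is easier to drive at high frequency.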
A major limitation of bandwidth in oscilloscope and scan converter CRTs is charging and discharging the deflection plates to relatively high voltages. The best scan converter CRTs, limited to a length suitable for rack-mounted equipment, reached about 5 GHz. During the cold war, the Soviets had a much easier solution: they made scan converter CRTs which were 6 meters long to achieve a bandwidth of 13 GHz.
Also there is the phosphor itself. As the beam strikes the phosphor, there must be a small amount of scatter to the surrounding phosphor, which would decrease the dot sharpness. Photographic film had a certain amount of light scattering within the emulsion that limited resolution. The phosphor has a certain thickness that may cause a similar phenomenon.
There is also the minimum dot size that permits sufficiently bright images. The smaller the beam, the smaller and dimmer the spot, and the longer it will take to refresh the screen if more lines must be scanned.
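A toy model of that tradeoff, with my own assumption that achieving a smaller spot means aperturing away beam current in rough proportion to spot area:

```python
# Toy scaling model for spot size vs. line count, refresh time, and
# beam current.  All constants are illustrative, not measured.

line_time = 63.5e-6      # s per scan line (NTSC-ish rate)
height_mm = 290          # visible raster height

for spot_mm in (1.0, 0.5, 0.25):
    lines = height_mm / spot_mm        # lines needed to fill the raster
    frame_time = lines * line_time     # time to paint them all once
    rel_current = spot_mm ** 2         # current ~ spot area (assumption)
    print(f"spot {spot_mm:.2f} mm: {lines:4.0f} lines, "
          f"{frame_time * 1e3:5.1f} ms/frame, relative current {rel_current:.2f}")
```

Halving the spot doubles the line count and the frame time at a fixed line rate, while (under this assumption) quartering the available beam current.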
There are a bunch of things which affect the spot size. Lower acceleration voltages mean that the electrons have more time to mutually repel each other. (1) Different types of electron lenses suffer from various amounts of aberration. If scan expansion is used, it also expands the spot size. (2) The phosphor target itself diffuses the spot. (3) (A rough way these contributions combine is sketched after the notes below.)
(1) So higher acceleration voltages yield both a smaller and brighter spot.
(2) This explains why the last generation of pre-scan-expansion mesh CRTs (the 50 MHz Tektronix 547) have a smaller spot size than later CRTs. On the other hand, scan expansion meshes yielded higher bandwidth because of higher deflection sensitivity. Some later CRTs replaced scan expansion meshes with quadrupole lenses, but I think they distort the beam a lot because they show little or no increase in sharpness. Or maybe my one oscilloscope like this is just old.
(3) The bright disc around the spot seen at low sweep speeds is from secondary emission; the electrons blasted free from the spot get pulled back by the high PDA (post deflection acceleration) voltage. The ghost seen at low sweep speeds which catches up to and then precedes the spot is produced by (secondary emission from?) the electron beam hitting the scan expansion mesh.
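A rough way to budget these contributions (my addition, not from the Tektronix literature): treat each blur source as approximately Gaussian and combine them root-sum-square, which also shows how one dominant term swamps the rest:

```python
import math

# Rough spot-size budget.  The individual numbers are invented
# placeholders; only the root-sum-square combination is the point.

contributions_mm = {
    "gun/lens aberration": 0.20,
    "space charge":        0.10,
    "scan-expansion mesh": 0.25,
    "phosphor diffusion":  0.15,
}

total = math.sqrt(sum(v ** 2 for v in contributions_mm.values()))
print(f"combined spot ~ {total:.2f} mm")  # ~0.37 mm with these numbers
# Dropping the mesh term alone gives ~0.27 mm, consistent with pre-mesh
# CRTs like the 547 showing a visibly smaller spot.
```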
Second-anode voltage is the most important factor in beam sharpness, but you can't simply crank up the voltage on a poorly made tube (which will probably flash over before you get very high!). Sharp focus is best achieved by simply finding a well-made tube.
If you ever have the pleasure of working on an oscilloscope which supports reduced scan, you can see the effect of the acceleration voltages on the sharpness and brightness. Increasing the cathode voltage (more negative) makes the focus proportionally sharper, incredibly so, but also lowers the deflection sensitivity proportionally.
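Plugging illustrative numbers into that proportionality (sensitivity goes roughly as 1/Va; the values below are assumed):

```python
# Illustrative only: doubling the accelerating voltage halves the
# deflection per volt on the plates, so the amplifier must swing twice
# the voltage for the same full-screen deflection.

full_scale_mm = 80        # vertical axis of a 10x8 cm graticule
sens_at_2kV = 0.5         # mm/V at Va = 2 kV (assumed)

for va in (2000, 4000):
    sens = sens_at_2kV * 2000 / va
    drive = full_scale_mm / sens   # plate swing for full-scale deflection
    print(f"Va = {va} V: {sens:.2f} mm/V, {drive:.0f} V swing full scale")
```

That doubled swing is where the bandwidth goes, which matches the reduced-scan behavior described above.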
I do not know why someone could not, in theory, have made a lower-bandwidth, higher-deflection-sensitivity CRT with a higher cathode voltage and a tiny spot size across a standard 10x8 cm or larger graticule, but I do not know of any. I assume diffusion through the phosphor target would have limited the improvement.
Lowering the PDA lowers the brightness and increases the spot size, of course, but it also changes the deflection sensitivity because it acts as a final lens. If a scan expansion mesh is used, lowering the PDA counterintuitively *lowers* the deflection sensitivity. The Tektronix Circuit Concepts book on CRTs discusses this.
I did a bunch of tests on my 7904 and 7603 last year to see what effects altering the PDA has, so we would have a better idea of what to look for on the TekScopes@yahoogroups.com list when someone suspects a missing PDA or a bad high-voltage multiplier.
I think I read that at one stage HD CRT TVs were developed in Japan with approximately 1200 lines (presumably the 1125-line analog Hi-Vision system).
I still use a 1600x1200x85Hz Trinitron. I prefer the color fidelity, and haven't met a single LCD that can compare (though they're finally starting to get close).
The highest I've seen is 2048 x 1536, which is probably the practical limit of compensating an analog raster-scan system in production. The maze of correction components in one of these displays is astounding: magnetically biased saturable reactors for correcting nonlinearity, switched capacitors for correcting sweep rate and ringing, etc. And at horizontal sweep rates over 100 kHz, the poor yoke is being pumped with ~1 kVA of reactive power, just to push around a nearly massless electron beam!
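For a sense of where that ~1 kVA figure comes from (the yoke inductance and current below are my guesses at plausible orders of magnitude, not measurements of any real monitor):

```python
import math

# Rough reactive power circulating in a horizontal deflection yoke,
# treating the scan current as if it were sinusoidal at the line rate.

L_yoke = 75e-6     # H, horizontal yoke inductance (assumed)
i_rms = 4.0        # A, RMS scan current (assumed)
f_h = 108e3        # Hz, horizontal scan rate

reactance = 2 * math.pi * f_h * L_yoke   # ~51 ohms at this frequency
q_var = i_rms ** 2 * reactance           # reactive power in volt-amperes
print(f"X_L ~ {reactance:.0f} ohm, Q ~ {q_var:.0f} VA")  # ~800 VA
```

The sawtooth waveform makes this only a first-order estimate, but it lands in the right ballpark.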
I have a 21" shadow mask computer monitor which does 2048 x 1536 but I usually ran it at 1600x1200 to match my 19". Its horizontal output circuit keeps failing.