Trinitron monitors were probably pushing the limits of e-beam accuracy in a production context. Consider mine, which does 1600x1200 resolution; for a pixel to be where it ought to be, that's about a 0.1% error, not bad at all.
That's interesting! I'd be curious to know how the 0.1% error figure was reached.
I was just introducing that as a rough assumption. One pixel out of 1600 is about 0.06%.
I later refine that assumption by noting fringes against the internal structures, probably giving a 0.2 to 1% figure. (It's noteworthy that the fringes in and of themselves may not be an error -- simply that the intentional screen resolution doesn't match the number of wires in the aperture grille. Distortion in the fringe pattern would then be the real error, and I think is on the order of several fringes +/- at any given spot. This would give a figure around 0.1% instead.)
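For concreteness, the back-of-the-envelope arithmetic looks like this (the grille wire count and the fringe wander are just my eyeball guesses, not measurements):

    h_pixels = 1600              # horizontal resolution of the mode I'm running

    # Simplest assumption: the beam lands at most one pixel from where it should.
    one_pixel_error = 1 / h_pixels
    print(f"one pixel out of {h_pixels}: {one_pixel_error:.2%}")   # -> 0.06%

    # Fringe-based refinement: if the moire pattern wanders by a couple of fringes
    # out of the ~1500-odd grille wires across the tube (a guess, not a measurement),
    # that's roughly 0.1-0.2%.
    grille_wires = 1500          # assumed wire count across the screen
    fringe_wander = 2            # assumed +/- wander, in fringes, at a given spot
    print(f"fringe wander: {fringe_wander / grille_wires:.2%}")    # -> 0.13%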
Obviously(?), I haven't measured the geometry with real physical instruments, so I don't know the absolute error independently. (What if the aperture grille itself is in error and the deflection is actually perfect?!)
I'm itching to know why this would be considered heroic. A 200,000-step DAC corresponds to roughly 18-bit resolution for the deflection system. I sense there is more to it than meets the eye, and I'm probably too naive to see the important issues. Could you please indicate what kind of complexity one is looking at to get to 10 nm steps over a 2 mm full field?
In short, it's a dynamic range problem, just with signals in different units (distance rather than voltage).
I'm sorry, I didn't get what this statement meant?
Yes, precisely -- positioning alone is very high precision. We can certainly make voltages or currents with 18-bit accuracy and precision, but transducing those signals into other units is another matter.
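To put rough numbers on your question (taking the 2 mm full field and 10 nm steps at face value):

    import math

    # Dynamic range implied by the question: 10 nm steps across a 2 mm field.
    full_field = 2e-3            # metres
    step = 10e-9                 # metres

    steps = full_field / step
    bits = math.log2(steps)
    print(f"{steps:.0f} steps -> {bits:.1f} bits")   # 200000 steps -> 17.6 bits

Making an 18-bit-accurate voltage or current is routine; making the beam position track it with a 10 nm LSB is the hard part.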
I don't know if you have an intuitive feel for distances as signals, so I wanted to make that equivalence clear. It's not like we can simply put down some nanobots with carbon nanoribbon tape measures and have them mark out squares of exact size; even if we did that, the error in each individual measurement would accumulate across a wider area. We could mark it with an interference pattern, say, which will give reasonable periodicity, but the variance between each fringe will be sloppy, and the pattern may be distorted due to optics (is the fringe pattern actually a projection to a cylindrical or spherical surface, and slightly distorted or defocused when projected onto a flat surface?).
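A toy example of what I mean by accumulation, if it helps -- purely illustrative numbers, marking out a scale one step at a time where each step carries its own small random error:

    import random

    # Toy model: lay 1600 marks end to end, each step measured with an independent
    # random error.  The error of the final mark grows roughly as sqrt(N) * sigma,
    # so even decent individual measurements add up across the full width.
    random.seed(0)
    n_marks = 1600
    sigma_step = 0.001           # per-step error, in units of one nominal step

    final_errors = []
    for _ in range(2000):
        position = 0.0
        for _ in range(n_marks):
            position += 1.0 + random.gauss(0.0, sigma_step)
        final_errors.append(position - n_marks)

    rms = (sum(e * e for e in final_errors) / len(final_errors)) ** 0.5
    print(f"RMS error of the last mark: {rms:.3f} steps")   # ~ sqrt(1600)*0.001 = 0.04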
These distortions will dominate the errors in a transducer; we can drive one with a 24-bit DAC as much as we like, but if we're only getting say 14 good bits out of it, it doesn't much matter, right?
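Rough arithmetic, with an error figure I'm just making up for illustration:

    import math

    # A 24-bit DAC driving a deflection that has, say, 0.005% full-scale geometric
    # error (an illustrative figure, not a measurement):
    dac_bits = 24
    geometric_error = 5e-5       # fraction of full scale

    effective_bits = math.log2(1 / geometric_error)
    print(f"{dac_bits}-bit DAC, ~{effective_bits:.1f} usable bits")   # ~14.3 bits
    # Resolution below the geometric error floor buys nothing.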
It could very well be that unavoidable fluctuations in the electron beam apparatus dominate in this range, so that the spot can't be focused well enough, or that it wobbles in a noisy fashion, or that the projected image is distorted in an inconsistent way. Needless to say, such apparatus needs to be extremely well shielded from ambient magnetic and electric fields (multiple layers of mu-metal, probably an interior of machined ceramic or aluminum, normalized (annealed, de-stressed)), as well as from fluctuations in temperature (a few mK fluctuation in a local area of the beam tube, and that side tilts up noticeably). The beam's projection is very sensitive to everything in the beam path, especially the cathode and first grid; near the cathode the electron velocity is still low, so a small influence can make a huge difference in trajectory.
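The temperature sensitivity is easy to sanity-check with a quick expansion estimate (column length and material assumed, just to get an order of magnitude):

    # Differential expansion of one side of an assumed 100 mm aluminum column
    # running a few millikelvin warmer than the other side.
    alpha_al = 23e-6             # 1/K, thermal expansion coefficient of aluminum
    length = 0.1                 # metres, assumed column length
    delta_t = 0.005              # kelvin (a 5 mK local fluctuation)

    growth = alpha_al * length * delta_t
    print(f"one side grows by {growth * 1e9:.0f} nm")   # ~12 nm -- about one 10 nm step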
I don't know nearly enough about electron optics to know what magnitude these (and other) effects have on it, or what the ultimate physical limitations are (say due to quantum uncertainty in the position of the cathode, and the fields around it?) and how close to them we can get, but I'm at least going to guess that they're doing a lot of very careful (heroic, you might say) work to tune out probably hundreds of such errors.
Tim