Speaking of which, I would be more impressed by a full analog solution. Or at least something partial.
You could (almost trivially) mux the LEDs with the video and scan signals directly, but each LED would then be lit for only its single pixel time per frame, leaving the average brightness orders of magnitude below a good high-voltage CRT; you'll be lucky if the image is viewable even under dim lighting.
If, instead, you used an S&H that maintains the illumination of each LED for the duration of a scan line, you could get a passable image. One possible arrangement: sample an entire line, then light it while the next line gets sampled, and so on. (There's a code sketch of this timing after the list below.)
- The LEDs are wired in a matrix (as usual)
- Each matrix row is driven by a high side switch to +V; rows are selected using a full width decoder (or tree of smaller decoders) and a row counter
- Each column gets a current amp
- Each current amp is driven by an analog transfer gate (like a bucket brigade device, the analog equivalent of a D flip-flop)
- The transfer gates are, in turn, fed from the analog input through a full-width mux (the bidirectional analog-switch kind of mux/demux; like the decoder, this can be a tree of smaller devices), addressed by the column counter (clocked at the pixel rate)
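For concreteness, here's the line-sequential pipeline as a little Python behavioral model (my sketch, not a schematic -- N_COLS, stage1/stage2 and so on are made-up names; in hardware it's all counters, decoders and strobes):

    # Sample row N into the per-column caps while row N-1 is lit from
    # the held values (toy dimensions, toy "video" ramp):
    N_COLS, N_ROWS = 8, 4
    video = [[(r * N_COLS + c) / float(N_ROWS * N_COLS) for c in range(N_COLS)]
             for r in range(N_ROWS)]

    stage1 = [0.0] * N_COLS   # first caps: loaded pixel-by-pixel through the demux
    stage2 = [0.0] * N_COLS   # second caps: hold the lit row's levels all line long

    for line in range(N_ROWS + 1):       # one extra period to flush the pipeline
        stage2 = stage1[:]               # end-of-line strobe: all columns at once
        if line > 0:
            print("row", line - 1, "lit:", [round(v, 2) for v in stage2])
        if line < N_ROWS:
            for col in range(N_COLS):    # meanwhile, sample the incoming line
                stage1[col] = video[line][col]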
This sounds thoroughly insane, so let's save cost in an equally insane way.
- Ideally, the current amps are FET or CMOS op-amps with a current-source output (using a BJT or MOSFET in the usual circuit). Cut out the op-amp and use a single 2N7002 with a series source resistor. Or, who even needs a source resistor? Get rid of that too. Use just the transistor. (The low-input-bias requirement will become obvious shortly.)
- The analog transfer gate should behave like so: the output voltage remains constant until the clock switches, at which point it instantly takes on the value at the input. The traditional way to do this in digital (if you look inside the structure of a D flip-flop) is two storage bits (RS flip-flops), one loaded from the input and the other loaded from the first, on opposite clock phases.
One possible analog implementation uses two capacitors, one connected to the input voltage and the other to the output; when the clock changes, the input and output get swapped to opposite capacitors (make-before-break). This requires four analog switches (e.g. sections of a CD4066), which isn't hard or anything, but it's still pretty tedious. We should be able to do better.
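In software terms, the four-switch version behaves like this (a behavioral sketch only; the class and names are mine):

    # Two-capacitor ping-pong transfer gate: one cap tracks the input,
    # the other holds the output; each clock event swaps their roles.
    class AnalogTransferGate:
        def __init__(self):
            self.caps = [0.0, 0.0]
            self.track = 0                     # index of the input-side cap

        def clock(self):                       # swap roles (make-before-break)
            self.track ^= 1

        def step(self, vin):
            self.caps[self.track] = vin        # input-side cap follows the input
            return self.caps[self.track ^ 1]   # output-side cap just holds

    g = AnalogTransferGate()
    for t, vin in enumerate([1.0, 1.5, 2.0, 2.5]):
        if t == 2:
            g.clock()          # output jumps to the last tracked value (1.5)
        print(t, vin, g.step(vin))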
Suppose instead, we allow that the clock event can be a short sampling pulse. If the input is low impedance, we can use a single switch to momentarily "bump" a sampling capacitor to the new voltage. As long as the clock pulse width is a few time constants, it'll settle accurately, and as long as that total pulse width is significantly less than the scan line period, it won't get in the way of anything. Great.
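Putting rough numbers on "a few time constants" (switch resistance and cap value assumed for illustration):

    # Settling of the "bump" sampler vs. strobe width:
    import math
    R_switch = 100.0      # ohms, analog switch on-resistance (assumed)
    C_sample = 1e-9       # 1 nF sampling cap (assumed)
    tau = R_switch * C_sample
    for n in (3, 5, 7):   # pulse width in time constants
        print("%d tau = %.0f ns, residual error %.2f%%"
              % (n, n * tau * 1e9, math.exp(-n) * 100))
    # A VGA line period is ~31.8 us, so even a 7-tau (700 ns) strobe is
    # a tiny fraction of the line -- plenty of headroom.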
Further cheapening option: dual diode. Yeah, seriously. Use a BAV99 or other series dual junction diode (not Schottky -- low leakage will still be worthwhile). Connect the analog input to the cathode end, the sampling capacitor to the middle, and the anode end to GND. To sample, drive the "ground" end of all the sampling caps to -5V (the diode to GND holds the "top" end just below GND, so all the caps get precharged uniformly to about +5V), then drive to +5V (the excess charge is pushed out through the input terminal, which is fine because it's a low impedance), then return to 0V. The capacitors remain charged to the input voltage minus 5V.
Note that the sampling strobe voltage needs no DC offset (it's cap coupled!), so it can also be 0, 5 and 10V, or -10, -5 and 0V, or whatever. Or 5 and 12, or...
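Here's the strobe sequence walked through numerically, with idealized 0.7V drops (example values and names are mine):

    # Walking the -5 / +5 / 0 V strobe through one sample:
    VD = 0.7                    # assumed junction drop
    vin = 2.0                   # example input, from a low-impedance source

    # Phase 1: strobe (bottom plate) at -5 V; the GND-side diode clamps
    # the top plate just below ground, precharging every cap the same.
    bottom, top = -5.0, -VD
    v_cap = top - bottom        # ~ +4.3 V, independent of vin

    # Phase 2: strobe at +5 V; the top plate tries to jump 10 V, and the
    # input-side diode dumps the excess charge into the low-Z input.
    bottom = 5.0
    top = vin + VD              # clamped one drop above the input
    v_cap = top - bottom        # = vin - 4.3 V

    # Phase 3: strobe back to 0 V; the charge has nowhere to go.
    bottom = 0.0
    print("held voltage:", bottom + v_cap, "=~ vin - 5 V, give or take drops")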
- This works for sampling an entire row at once, but it demands a low-impedance source. Instead of a whole column's worth of op-amp buffers, we'll use a capacitor again. That's not a very low impedance, you might think -- true, but the value doesn't matter: the charge-sharing loss is a fixed attenuation, and we can make it up with extra gain at the front end!
So, as each column demux switch closes in turn, the first sampling cap gets charged to the input voltage. At the end of a line, the second sampling cap gets strobed down, up and back, taking on a voltage corresponding to the sampled one. Now the whole column holds the same level (more or less) for the duration of a scan line, and we can keep sampling pixels in the background without interference.
Now, we've introduced a whole lot of shittiness into the system. Low gain samplers. Leakage. Drifty current sinks -- not even current sinks at all, they're just naked transistors! What the hell!
Here's where the magic comes in. We only have to do it once, for the whole system, so we can spend a whole lot more money and space doing this.
The most awful part is the dependency on the gate characteristics of the column "current sink": Vgs(th) and transconductance are poorly defined, both from manufacture and from thermal drift and aging. No way are you going to have a trimpot per column -- besides, where would you put a resistor divider in a low-leakage (charge-transfer sampling) circuit? So instead, here's what we will do.
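To see just how bad a naked 2N7002 is, plug its datasheet Vgs(th) limits into the textbook square-law model (the transconductance constant here is an assumption):

    # Uncalibrated spread of a bare FET "current sink": Id = K/2 * (Vgs - Vth)^2
    # 2N7002 Vgs(th) is specified anywhere from ~1.0 to ~2.5 V.
    K = 0.1                    # A/V^2, assumed transconductance parameter
    Vgs = 3.0                  # one fixed gate drive from the hold cap
    for Vth in (1.0, 1.75, 2.5):
        Id = 0.5 * K * (Vgs - Vth) ** 2
        print("Vth = %.2f V -> Id = %.1f mA" % (Vth, Id * 1000))
    # ~200 mA down to ~12 mA for the same gate voltage: a 16:1 spread.
    # Hence the per-column auto-cal that follows.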
Once all the visible rows have finished scanning (there are always plenty of blank scanlines for retrace), reserve a couple for calibration. This involves some address decoding and special-purpose logic, but nothing crazy I think -- still doable in discrete (MSI) logic. The principle is this: during a retrace scanline, light up only one pixel, and calibrate the voltage offset and gain of that column. Since we're in retrace, the row actually being driven should be invisible -- it could be an (N+1)th row, loaded with regular silicon diodes instead of LEDs.
Each time this calibration line is engaged, set the input to zero, except when lighting the pixel, in which case connect the input to a DC bias. (Use a "calibration address" counter to select the pixel -- incremented each scan, so eventually every column gets calibrated.) When the line strobe fires, measure the current draw of the calibration row: only one column is lit, so exactly that column's current flows. Sample the result, perhaps with an ADC (it can sample at any time during the subsequent scan line -- remember, the row stays charged for an entire scanline), and store the result in RAM.
What gives with the DC bias? Ah, well, remember that we've stored a number in RAM; suppose we hook a DAC to the same address, so that when the pixel is lit, it gets lit at "what we think" the right level is (say, some "gray" intensity). When that propagates through the scan circuitry and the pixel lights, the resulting current draw gets stored in RAM: if we use an op-amp to subtract it from what the current should be (say, 1mA), the value stored is the error voltage.
The secret is this: the RAM is always addressed by the column address counter (asynchronous SRAM would be good for this), and the DAC is always converting its data value. While drawing normal pixels, for each column address, the error corresponding to that column gets added into the input voltage, so that the pixel gets lit at -- hopefully -- the correct intensity. If it's not quite right, the process repeats, converging on the correct value.
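All together, the cal loop behaves something like this sketch (Python as behavioral pseudocode; every name and constant is mine, and the square-law column model is just a stand-in for the real hardware):

    # One-point (offset) auto-cal: per-column error integrated in RAM,
    # applied live via the DAC summed into the video input.
    import random
    random.seed(1)

    N_COLS = 480
    I_TARGET = 1e-3                    # 1 mA "gray" calibration current
    V_GRAY = 1.2                       # DC bias fed in during cal (assumed)
    LOOP_GAIN, R_SENSE = 0.05, 1000.0  # integrator gain, sense scaling (assumed)

    # Stand-ins for the drifty, uncalibrated columns (square-law FETs):
    VTH = [random.uniform(1.0, 2.5) for _ in range(N_COLS)]
    K   = [0.05 * random.uniform(0.7, 1.3) for _ in range(N_COLS)]
    def column_current(col, v):
        return K[col] * max(v - VTH[col], 0.0) ** 2

    error_ram = [0.0] * N_COLS         # per-column correction, in volts

    def cal_scanline(n):
        col = n % N_COLS               # calibration address counter
        v = V_GRAY + error_ram[col]    # DAC adds the stored correction
        i = column_current(col, v)     # strobe the dummy (plain-diode) row
        error_ram[col] += LOOP_GAIN * (I_TARGET - i) * R_SENSE

    def active_pixel(col, v_video):    # normal drawing: RAM+DAC in the path
        return v_video + error_ram[col]

    for n in range(N_COLS * 40):       # ~40 full passes over the columns
        cal_scanline(n)
    worst = max(abs(column_current(c, V_GRAY + error_ram[c]) - I_TARGET)
                for c in range(N_COLS))
    print("worst column error after cal: %.2f uA" % (worst * 1e6))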
The number of blanking lines is limited (about 45 for VGA, I think), but the calibration process can use pretty much all of them. In this way, after a few dozen full frames -- less than a second -- all columns get re-calibrated. This should take care of thermal drift pretty well, even fairly rapid drift. The settling time of the error loop depends on the loop gain, which depends on the transistor gain. This procedure also only corrects voltage offset; however, if every other cal cycle (be it every other line, or every other complete pass) is performed at a higher current (say 100mA "full white") and those results are stored in an additional RAM, then a second DAC produces the "full scale" error value, and an op-amp or variable-gain amp at the input resolves this into a proper two-point (offset and slope) calibration.
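The two-point version amounts to this arithmetic (an illustration with made-up measurements):

    # Two-point (offset + slope) correction from two cal measurements.
    v1, i1 = 1.35, 0.9e-3        # drive (V) and measured current at the "gray" cal
    v2, i2 = 2.60, 92e-3         # drive and measured current at the "white" cal
    i1_t, i2_t = 1e-3, 100e-3    # what those currents should have been

    # Treat the column as locally linear, i = a*v + b, and find the input
    # mapping v' = g*v + o that lands both points on target:
    a = (i2 - i1) / (v2 - v1)
    b = i1 - a * v1
    v1_wanted = (i1_t - b) / a   # drive that would have produced 1 mA
    v2_wanted = (i2_t - b) / a   # drive that would have produced 100 mA
    g = (v2_wanted - v1_wanted) / (v2 - v1)   # slope (the variable-gain amp)
    o = v1_wanted - g * v1                    # offset (the op-amp adder)
    print("apply v' = %.3f*v %+.3f" % (g, o))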
Damn, I kind of really want to build one of these now, but fuck if an array isn't going to be expensive. If I build it myself -- assuming I even keep at it -- a 160x100 array (a mere quarter of old-school VGA's 320x200 mode) takes 16 thousand tricolor LEDs and 480 columns, and at 3/8" pitch it spans a whole 5 x 3 feet. If it takes me two seconds to trim the leads of an LED, I need the better part of ten hours just to trim them all! Let alone building the matrix: that takes over a third of a mile of hookup wire and over three pounds of solder, not to mention some sort of mounting base.
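(Checking that arithmetic:)

    # Back-of-envelope for the 160x100 tricolor array:
    cols_px, rows = 160, 100
    leds = cols_px * rows                           # 16,000 tricolor packages
    columns = cols_px * 3                           # 480 electrical columns (RGB)
    pitch = 0.375                                   # inches
    w_ft, h_ft = cols_px * pitch / 12, rows * pitch / 12   # ~5.0 x 3.1 feet
    trim_h = leds * 2 / 3600.0                      # ~8.9 hours at 2 s per LED
    wire_ft = rows * w_ft + columns * h_ft          # row + column bus runs
    print(leds, columns, "%.1fx%.1f ft" % (w_ft, h_ft),
          "%.1f h trimming" % trim_h,
          "%.0f ft (%.2f mi) of wire" % (wire_ft, wire_ft / 5280))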
The drivers wouldn't be terrible, at least: the rows could be served by seven 74HC154s, a hundred gate driver chips, and a thousand ~2A P-FETs (ten acting in parallel to cover each row). The columns would need thirty 74HC4067s (ten per color, plus one 'HC154 to select banks), 960 capacitors, 480 dual diodes and 480 2N7002s. (In contrast, the auto-cal circuitry could be some digital logic or an FPGA, plus a few op-amps and stuff, on a small board supplying the signals to the matrix.)
...Now where were we? Oh, I've written a novel again... drat...
Tim