Not that trivial: barcodes usually use interleaved digit calculations plus a checksum. You could do that with a matrix, but it will still require calculations, i.e., creating a computer from scratch.
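For concreteness, here's the sort of arithmetic involved -- the UPC-A check-digit rule, sketched in Python (assuming UPC-A; other symbologies differ in detail but not in spirit):

# UPC-A check digit: odd-position digits weighted 3, even-position digits
# weighted 1, and the check digit tops the total up to a multiple of 10.
def upc_a_check_digit(digits11):
    """digits11: the first 11 digits of a UPC-A code, as a list of ints."""
    odd = sum(digits11[0::2])    # positions 1, 3, 5, ... (1-indexed)
    even = sum(digits11[1::2])   # positions 2, 4, 6, ...
    return (10 - (3 * odd + even) % 10) % 10

# e.g. 0-36000-29145-? -> check digit 2
assert upc_a_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]) == 2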
I'm wondering if some genius couldn't come up with circuitry similar to the way TVs used to sync signals using only analog means.
I suspect you may vastly overestimate the complexity of analog television!
The problem at hand is more akin to creating a multisync monitor. Whereas a TV can very safely assume its sweeps will be within a modest band around 15.7kHz and 60Hz, a more general device has to monitor its inputs, detect the frequency (or somehow adjust to it), and set its internal voltages, plus whatever compensation adjustments need to be made (S-correction capacitors, saturable inductors), to display an accurate picture. And indeed, this is what the actual units do: detect the sync frequencies and use lookup tables to determine the correct settings, all in one or a few convenient ICs. Not that analog multisync is necessarily impossible, but man, would it be hard to design, let alone adjust. They had it hard enough with color convergence/landing -- a dozen controls (including adjustable resistors and inductors) that all interacted -- plus the magnets on the CRT neck, and sometimes around the bulb too...

Anyway, the fundamental problem is optical perspective. Depending on the distance and orientation between label and scanner, the pattern will be read at any frequency over a wide range -- maybe 10:1 or so. And since the code doesn't last long enough to lock a PLL to it, it's not easy to track the signal.
In comparison, NTSC colorburst was always at a reliable position during a line, with consistent frequency and phase; often, instead of a full-on oscillator and PLL, they just injected the burst into a free-ringing crystal and let it ring down between lines. (Effectively, the colorburst separator is a filter following a gated (tone-burst) waveform. Gating at a low duty cycle means there's a peak at 3.58MHz, but with quite a wide spectrum: a sinc-like profile built of 15.7kHz sidebands, etc. It takes a very sharp filter indeed to pick out just the very peak of that spectrum. Fortunately, quartz crystals can deliver.)
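For anyone curious what that spectrum looks like, a quick numeric sketch (toy numbers of my own; burst placement and phase details ignored):

# Spectrum of a gated tone burst: ~9 cycles of 3.58MHz repeating at the line
# rate.  The result is a comb of 15.7kHz-spaced lines under a sinc-like
# envelope centered on the subcarrier -- hence the need for a crystal-sharp
# filter to grab only the peak.
import numpy as np

fsc = 3.579545e6            # NTSC color subcarrier
fh = fsc * 2 / 455          # line rate, ~15.734kHz
fs = 16 * fsc               # simulation sample rate
n = int(round(16 / fh * fs))            # sixteen lines' worth of samples
t = np.arange(n) / fs
gate = (t % (1 / fh)) < (9 / fsc)       # burst on for ~9 cycles per line
burst = gate * np.sin(2 * np.pi * fsc * t)

spectrum = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(n, 1 / fs)
strongest = np.sort(freqs[np.argsort(spectrum)[-5:]])
print(strongest)            # lines at fsc and fsc +/- fh, +/- 2fh, spaced ~15.7kHz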
Getting the signal, by the way, isn't too bad -- if we allow lasers (hey, HeNe is a tube, right?), then we can just take a nicely collimated one and sweep it across the label. I don't think there's really much to gain with different codes -- the straight-bar version is about as good as any. A circular pattern is rotationally symmetric, but if we're scanning linearly, what good is that except when the scan passes perfectly across the axis? The straight-bar pattern is tolerant of view-axis (roll) rotation, and modestly tolerant of off-axis rotation (i.e., including a perspective transformation). Perspective is nonlinear, meaning the frequency of the texture varies along any scan line not parallel to the label plane; we could deal with more oblique viewing angles by using longer codes (encoding more timing information?) and larger ratios between mark and space symbols (for pulse codings).
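A quick numeric illustration of the perspective point (my own toy geometry, simple pinhole model):

# Equally spaced bar edges on a label tilted away from the scanner, seen
# through a pinhole projection: the apparent spacing (and hence the
# instantaneous "baud rate") changes along the scan line.
import numpy as np

pitch, tilt, distance = 1.0, np.radians(50), 100.0   # arbitrary units/angle
x = np.arange(20) * pitch               # bar edges along the label surface
depth = distance + x * np.sin(tilt)     # each edge sits a bit farther away
lateral = x * np.cos(tilt)
projected = lateral / depth             # image-plane coordinate (pinhole model)
spacing = np.diff(projected)
print(spacing[0] / spacing[-1])         # near-end bars come out ~1.3x wider than far-end ones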
Anyway, just shining a laser wouldn't quite be enough; we would want a modulated beam -- this gives better discrimination against ambient noise, background light, etc. Same principle as IR remotes, for example: use a carrier tone, filter for just that, then do AGC and demodulation (which can be synchronous, since we control the source too), and do thresholding and decoding on the demodulated signal. That's all normal stuff; radios have been doing it for a very long time indeed -- in fact, NTSC vertical sync is a case of this, where the H-sync pulses widen during vertical retrace, the pulse width being detected with an amp and filter as V-sync.
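In modern terms, the whole chain fits in a few lines (a toy numeric sketch of my own -- arbitrary frequencies, with a moving average standing in for the real filters):

# Modulated-beam detection in miniature: the photodetector sees the bar
# pattern multiplied by our carrier, plus ambient light and noise.
# Multiplying by the same carrier (synchronous detection) and low-pass
# filtering recovers the bars; thresholding slices them back to 1s and 0s.
import numpy as np

fs, fc = 1_000_000, 40_000              # sample rate and carrier, arbitrary
t = np.arange(0, 0.01, 1 / fs)
bars = np.floor(t * 2_000) % 2          # toy bar pattern, 0.5ms per element
carrier = np.sin(2 * np.pi * fc * t)

received = bars * carrier + 0.5 + 0.3 * np.random.randn(t.size)  # + ambient + noise

mixed = received * carrier              # product (synchronous) detector
lpf = np.ones(100) / 100                # crude moving-average low-pass filter
recovered = np.convolve(mixed, lpf, mode="same")
bits = recovered > 0.25                 # threshold ("slicer")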
So we get a sequence of bits, interspersed with random environmental noise or backscatter. Basically we can say it's a serial signal at some baud rate, but we don't know what that rate is. If we can mask out the noise, it might be feasible to run a PLL from it, given a self-clocking code like Manchester (or generally whatever FM or MFM variant we might choose). After some number of scans, we can guess the PLL is probably locked on, and decode -- say, clocking a shift register at its rate, thus latching out a digital word corresponding to the code, plus whatever stray bits at either end get dragged along by the gating. (That we get a code at all is mainly what matters; how many bits we waste in the register is a matter of efficiency rather than utter possibility.)
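Here's the same idea in software, under the friendliest assumptions (clean signal, a transition in every bit cell, scan starting on a cell boundary -- the hardware PLL is precisely what buys you out of those assumptions). Which half-cycle means 1 vs. 0 is an arbitrary convention, as long as both ends agree:

# Poor man's PLL: the shortest spacing between transitions of a Manchester
# stream is half a bit cell, so measure it, then sample the first half of
# each cell.  Convention here: a '1' cell is high-then-low, a '0' cell is
# low-then-high.
import numpy as np

def manchester_encode(bits, samples_per_bit=16):
    halves = {1: (1, 0), 0: (0, 1)}
    out = []
    for b in bits:
        first, second = halves[b]
        out += [first] * (samples_per_bit // 2) + [second] * (samples_per_bit // 2)
    return np.array(out)

def manchester_decode(samples):
    edges = np.flatnonzero(np.diff(samples) != 0) + 1
    half_cell = int(np.diff(edges).min())   # shortest run between edges
    cell = 2 * half_cell                    # recovered bit period ("PLL locked")
    return [int(samples[start + cell // 4])             # sample mid-first-half
            for start in range(0, len(samples) - cell + 1, cell)]

code = [1, 0, 1, 1, 0, 0, 1, 0]
assert manchester_decode(manchester_encode(code)) == code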
Oh, interesting aside: we could do analog levels as well. Doesn't really amount to much -- it's gotta be decoded digitally somehow or other; it's a code, not an image -- but it's interesting, and the idea had practical application, such as LaserDisc. (Which was notoriously sensitive to cleanliness, so, there's that, too.)
The PLL can work both ways: maybe we servo the mirror motor itself, to hold the scanned code at a consistent frequency; that might be easier, or maybe it doesn't matter. (An RC relaxation oscillator can be used as the VCO, about as easily as running a power amp into a DC motor.)
And then there's actually decoding the captured data of however many bits -- take your pick. Probably some rudimentary ALU and state machine, a computer stripped down to the barest hard-wired essentials. No idea how many tubes that might take -- a hundred? Surely under a thousand. A serial machine would be nice (it saves on bus drivers and gate arrays; shift registers are fairly cheap, especially with, say, glow-lamp logic), but might not be as practical as it sounds (using the signal directly would be nice -- it is its own memory -- but you're only scanning the code while you're scanning it, and then it goes away; you still need a register to hold onto the results).
The digital solution, in contrast, might not even bother with the PLL, but just clock at a much higher sampling rate -- take down maybe a thousand bits instead -- then figure out how many adjacent bins of 1s/0s correspond to each code element, and scan and convert the array into the actual code. And obviously, with a proper ([memory-constrained] Turing-complete) CPU, it's just a matter of writing the firmware, then waiting for the conversion; it might take 10k's of CPU cycles, but that's fine -- at 1MHz+ it's still done faster than you can blink. But it's a lot harder to do that when every bit of storage and every gate of logic costs multiple whole tubes!
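Roughly like this (a sketch; a real decoder also has to cope with noise, quiet zones, start/stop patterns and the checksum):

# Brute-force oversampling decode: measure the widths of the black/white runs
# in the scan, then classify each run as narrow or wide relative to the
# narrowest one -- which is all a two-width code (e.g. Code 39 or
# Interleaved 2 of 5) needs before table lookup.
import numpy as np

def runs(samples):
    """(level, length) pairs for an oversampled 0/1 scan."""
    edges = np.flatnonzero(np.diff(samples) != 0) + 1
    bounds = np.concatenate(([0], edges, [len(samples)]))
    return [(int(samples[a]), int(b - a)) for a, b in zip(bounds[:-1], bounds[1:])]

def classify(run_list):
    """Label each run 'narrow' or 'wide' relative to the narrowest run."""
    unit = min(length for _, length in run_list)
    return [(level, 'wide' if length > 1.5 * unit else 'narrow')
            for level, length in run_list]

scan = np.array([1] * 5 + [0] * 5 + [1] * 15 + [0] * 5 + [1] * 5)
print(classify(runs(scan)))
# -> [(1, 'narrow'), (0, 'narrow'), (1, 'wide'), (0, 'narrow'), (1, 'narrow')]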
Tim