Bit of a long shot but it may well be fruitful to ask --
This is exactly why I would like to experiment. I've tried to take some online classes, but even something as basic as a 101 class on Schrodinger's equation at some point simply pulled a step in the derivation out of thin air, rendering the whole of the previous lessons futile. I assumed there must be some trivial math that the teacher didn't explain and that I couldn't see myself. Could be the math was way over my head, but it didn't look like that step was coming from math. It looked like an arbitrary physics decision made to match the theory.
And even so, the current classes all start from the same experimental observations and the same mainstream premises, and some of those premises seem wrong to me. I don't know any physicist who would walk me through understanding why it cannot be like I think it is.
This bothers me a bit as well; engineering is especially rife with assumptions, given without any reasoning or proof. I'm at least clever enough to see through those, or at least to imagine the what and why, if not exactly to write up rigorous proofs of them. Heh, or I was once upon a time; it has been a while since I learned these things, after all.
Physicists at least are more inclined to give proofs, but still have much in their toolkit that's simply taken as given.
I think part of the reason can be given thus: a huge amount of work was done in the early history of modern physics as we know it. People far cleverer than us came up with the tools (or found them in extant mathematics) to do work in the earliest, still very poorly understood, days of quantum mechanics. Who knows how many things they tried, or what thought process or intuitive leap led them to apply the ones that were ultimately successful.
Some illumination might be gained by reading the original sources -- actual papers and books by Schrodinger, Dirac, Pauli, etc.
Or perhaps not, as they're likely quite dense documents, too. Well, one certainly does not gain understanding without putting in some effort.
Alas, one cannot get far without a deep understanding of what mathematical tools were available; and I feel this myself: much of the higher-level or cutting-edge stuff invented in the 19th century feels rather perplexing (but also, it seems, not especially useful in my work, so I have no need to pursue it), let alone anything since then.
So, instead of following a convoluted, and ultimately not very useful, detailed history of the derivation and application of the methods -- I think physicists prefer to simply cut to the chase, and take the useful results as given.
Does that sound like your sort of issue?
I'd be glad to discuss it in detail, to the extent that I remember it anyway (and, having a classical physics education, I'm afraid I don't have the details of exactly that early work). Probably there are others here even better versed in these subjects (I'm pretty sure at least one is, actually, though whether they're at liberty to discuss, for reasons of free time or otherwise, I don't know), so it may prove worthwhile after all.
Oh, and that can be in a separate thread if you prefer.
That's what I'm after. I suspect it should be possible to build a functional equivalent of a Bloch sphere, but with macroscopic objects and no cooling to near 0 K. That's an old pet theory of mine from some years ago, an idea that happened to surface again two days ago by pure serendipity. I'm not trying to simulate that with a digital computer, but to use macroscopic objects in about the same way a photon or an ion is used in the existing (cryogenic) quantum computers.
Hmm, interesting. So, how will that be modeled? Without a given differential equation, we can at least make some assumptions/assertions:
- The coherence time will be more or less equivalent to the time constant of the system. (I take it this is the reason you're particularly interested in high Q.)
- We can represent complex numbers as AC signals (phase/amplitude or I/Q), and vectors as collections thereof.
- What role would coupling (if any) play? We inevitably must couple signals into and out of this system, so it must ultimately be equivalent to a filter network. Whatever it is we're modeling, it must be an LTI system. (Note that in general, Schrodinger's equation is neither of these things; for that, the Hamiltonian must meet certain restrictions.)
- What parameters will the model be equivalent to? (Position, momentum, energy, wave function, etc.? Those are particle-specific of course; for the Bloch sphere, it would seem two general variables would suffice.) Will the model evolve over time or frequency, or something less direct (more abstract)?
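To put the second and last bullets in concrete terms, here's a minimal numeric sketch (my own framing, with arbitrary values, not a worked design): a point on the Bloch sphere is just two complex amplitudes, and each complex amplitude can ride on one AC signal as amplitude plus phase (equivalently I/Q) at some carrier frequency.

[code]
import numpy as np

# Minimal sketch (my own framing, arbitrary values): a Bloch-sphere state
# |psi> = cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1> is two complex amplitudes,
# and each complex amplitude can be carried by one AC signal as amplitude + phase.

fc    = 1.0e6                    # carrier frequency, Hz (arbitrary)
theta = np.pi / 3                # Bloch polar angle (example state)
phi   = np.pi / 4                # Bloch azimuthal angle

a0 = np.cos(theta / 2)                        # complex amplitude of |0>
a1 = np.exp(1j * phi) * np.sin(theta / 2)     # complex amplitude of |1>

t  = np.linspace(0, 5 / fc, 1000)               # a few carrier cycles
s0 = np.real(a0 * np.exp(2j * np.pi * fc * t))  # the two bench signals you'd
s1 = np.real(a1 * np.exp(2j * np.pi * fc * t))  # actually generate/measure

# Recovering the Bloch vector from the two complex amplitudes:
x = 2 * np.real(np.conj(a0) * a1)
y = 2 * np.imag(np.conj(a0) * a1)
z = abs(a0)**2 - abs(a1)**2
print("Bloch vector:", round(x, 3), round(y, 3), round(z, 3),
      "| length:", round(np.sqrt(x*x + y*y + z*z), 3))
[/code]

The open question for your experiment, of course, is which physical couplings make those two amplitudes evolve the way Schrodinger's equation says they should.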
Now, the thing about Q, given everything else, is this: to the extent that you wish to see any particular aspect of this system's behavior, you can always do it at some lower Q factor, if the couplings are adjusted and the time/frequency is rescaled appropriately. For example, two very-high-Q resonators coupled together will have a double-peaked frequency response; if their individual Qs are low (Q*k < 1), the damping dominates and only one peak is observed; but we can just as well adjust parameters to create the same condition, scaled appropriately (increase k, driving the peaks proportionally further apart).
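To put numbers on that splitting condition, here's a toy coupled-mode model I threw together (made-up values, nothing rigorous):

[code]
import numpy as np

# Toy coupled-mode model (my own, made-up values): two identical resonators at f0 with
# quality factor Q, coupled at rate kappa ~ k*w0/2. Driving resonator 1 and observing
# resonator 2, the response splits into two peaks roughly when k*Q > 1.

f0 = 1.0e6                     # resonant frequency, Hz (arbitrary)
Q  = 50.0                      # quality factor (assumed)
k  = 0.05                      # coupling coefficient (assumed); here k*Q = 2.5

w0    = 2 * np.pi * f0
gamma = w0 / (2 * Q)           # amplitude decay rate
kappa = k * w0 / 2             # coupling rate

freqs = np.linspace(0.9 * f0, 1.1 * f0, 2001)
resp = []
for f in freqs:
    w = 2 * np.pi * f
    # steady state of the coupled-mode equations, resonator 1 driven, 2 observed
    A = np.array([[1j * (w - w0) + gamma, -1j * kappa],
                  [-1j * kappa,           1j * (w - w0) + gamma]])
    a1, a2 = np.linalg.solve(A, [1.0, 0.0])
    resp.append(abs(a2))

n_peaks = sum(1 for i in range(1, len(resp) - 1)
              if resp[i] > resp[i - 1] and resp[i] > resp[i + 1])
print(f"k*Q = {k * Q}: {n_peaks} peak(s)")   # try k = 0.01 (k*Q = 0.5): one peak
[/code]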
The special part about quantum computing is not just creating, for example, very high Q (superconducting) resonators with specific coupling factors between them, but doing so while also extracting energy from an auxiliary term, driving the output state to a desirable transformation of the input state. (Or, this is one possible realization; alas, I know very little about the modern techniques used for this.)
Note that, since the computation must be lossless, there is no pure "AND" or "OR" gate, but we can construct gates with such a transfer function plus auxiliary outputs which preserve the information that a lossy gate would otherwise discard. The gates are also reciprocal, so that we can remove energy from an "entropy" port, forcing the system into its desired final state.
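The textbook example of this (classical truth-table view only, just for concreteness) is the Toffoli gate, a reversible AND:

[code]
# The Toffoli gate maps (a, b, c) -> (a, b, c XOR (a AND b)). With the ancilla c = 0,
# the third output is a AND b, and the first two outputs carry along the information
# a plain AND would discard, so the whole mapping stays a bijection (lossless).

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Reversibility check: applying it twice returns the original inputs.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# Used as an AND gate by fixing the ancilla to 0:
print([(a, b, toffoli(a, b, 0)[2]) for a in (0, 1) for b in (0, 1)])
[/code]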
And all this must be done while avoiding perturbations from the outside world; the thing about entanglement is not that it's special, but actually that weakly entangled states are special. What we call "collapse of the wavefunction" or "classical physics" is just the collective effect of myriad particles entangled into relationships so complex that they cannot be untangled, and the statistics of the system reduce to the classical case. It's an information entropy thing. So, that's why it must be impossibly cold, and well shielded, and all that; the more error that seeps into the system, the more its state gets corrupted from the intended pure state, and the less likely it is that we will find the desired result after letting the computation settle.
Which of course is why you say it's... difficult, at least, to scale up. I only take issue with this: I wouldn't go so far as to say impossible. It is certainly challenging. One option for example is to add error correction logic; the system becomes significantly larger (which helps even less with scaling), but the effective redundancy allows internal state to be preserved longer, basically allowing external interference to be bled off as excess heat, in the same way that the excess entropy (beyond the final desired state) is extracted.
In the limiting case, computation -- of any sort, quantum or classical -- is still possible as long as the bit error rate is below 50%. The amount of error correction required goes to infinity as BER goes to 50%, but it is finite below that.
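Just to put a number behind that threshold claim, here's a toy calculation (mine, and purely classical: the simplest possible scheme, an n-bit repetition code with majority voting):

[code]
from math import comb

# Toy calculation (purely classical, simplest possible scheme): n-bit repetition code
# with majority voting. The logical error rate is the probability that more than half
# the copies flip. Below p = 0.5 it can be driven down by adding redundancy; at 0.5,
# no amount of redundancy helps.

def logical_error(p, n):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n // 2 + 1, n + 1))

for p in (0.01, 0.1, 0.3, 0.45, 0.499):
    print(p, [round(logical_error(p, n), 6) for n in (3, 11, 101)])
# Note how slowly the rate falls for p near 0.5 -- the required overhead diverges there.
[/code]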
Nondeterministic computation is something we have done very little practical work with, so far. Modern computers definitely do need to take account of errors -- with typical BER in the ppb range, we can afford to employ very modest error correction methods: parity check, CRC, hash, etc.; ECC RAM, detecting errors and simply repeating calculations; and in demanding cases, duplicating (or even tripling) direct effort (lockstep or redundant voting CPUs). It's all very inefficient, but it's easy to do, and the errors are sufficiently rare that it works out very well on the whole.
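For what it's worth, here are the two simplest of those methods in throwaway form (my own toy illustrations, nothing like a production implementation): a parity bit that detects a single flipped bit, and triple redundancy with majority voting.

[code]
# Toy illustrations only: a parity bit detects (but cannot locate) a single bit flip,
# and triple modular redundancy masks one faulty copy by bitwise majority vote.

def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p                                   # stored alongside the data

def vote(a, b, c):
    return (a & b) | (a & c) | (b & c)         # bitwise majority of three copies

data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = parity(data)
data[3] ^= 1                                   # a single bit flips in storage...
print("error detected:", parity(data) != stored)   # ...and the recheck catches it

print("voted result:", vote(42, 42, 7))        # one corrupted copy (7) is outvoted
[/code]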
Presumably, in the fairly near future, we'll need to face this tradeoff and develop still more complex logic to deal with transistors (or other kinds of logic) that are slower and less reliable than what we have today. With those tools better understood, perhaps it will be relatively easy to apply them to quantum computing, and greatly improve the scalability as a result?
Anyway, I digress. Back to the resonators -- it seems pretty obvious/intuitive that some piecemeal LTI analog computer can't possibly have more computing power than even a classical nonlinear system (such as a digital computer, or any differential equation in general*); the challenge, I take it, is showing how it must fail, while also showing how quantum computing can succeed -- would you agree?
*Differential equations are Turing-complete, of course. As a semi-famous example, when Feynman was working on the Connection Machine late in his career, he drew up a diff eq encapsulating what that computer was best suited for. That's a... very physicist way to look at it, and probably not very useful to computer scientists, but is certainly a technically adequate description.
Filter theory may apply, but I'm not trying to get any particular type of filter (though any resonance unintendedly implies filtering, too). In the beginning it will be only a pair of resonators; I'm planning to add more resonators to the network only if the first two don't prove my pet theory wrong.
Unintended, perhaps, but absolutely and inseparably equivalent! Perhaps the equivalence isn't all that helpful (which is to say, if you aren't real hot on network analysis to begin with), in which case treating it as a secondary goal works just as well.
@TimFox
Also -- he gave a simpler version of my bandgap example. They're the two extremes of the same problem, of course. An isolated atom has some spectral response; a diatomic molecule has split levels; polyatomic, further splits still; and so on, up to the limiting case where the levels are so dense that they seem continuous, except for a conspicuous gap splitting the band in two. There are many practical upshots of this: quantum dots, for example, whose characteristic fluorescence and so on is tunable by particle size.
So, depending on how big of an array you wish to construct, and how you couple them together -- you can demonstrate this yourself.
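In the crudest numeric form (my own sketch, a nearest-neighbour coupling matrix rather than a circuit-accurate model), the effect looks like this:

[code]
import numpy as np

# Crude nearest-neighbour coupling picture (not circuit-accurate): N identical
# resonators at normalized frequency f0, each coupled to its neighbours with
# strength kc. The normal-mode frequencies are eigenvalues of a tridiagonal
# matrix; as N grows, the discrete splittings fill out a band ~4*kc*f0 wide.

f0, kc = 1.0, 0.05     # normalized frequency and coupling (arbitrary values)

for N in (1, 2, 3, 10, 100):
    M = f0 * np.eye(N) + kc * f0 * (np.eye(N, k=1) + np.eye(N, k=-1))
    modes = np.linalg.eigvalsh(M)
    print(f"N = {N:3d}: modes from {modes.min():.3f} to {modes.max():.3f}")
[/code]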
Resonators are not quantum, of course: you'll get continuous amplitudes rather than discrete photons, and bandwidths rather than sharp lines. Which, on that note -- atomic lines are nearly ideal by themselves; they are almost entirely spread out by extrinsic physical processes, for example the Doppler effect in the hot plasma of a glow discharge. Lines may also be split by atomic physics (fine/hyperfine structure), and may thus overlap into apparent continua. Splitting also depends on ambient fields (Stark and Zeeman effects), which may be subject to random processes (e.g. electrical noise in the glow discharge?). So there are lots of conditions that act to apparently broaden atomic transitions, but they're really quite ideal on a per-atom basis.
This is probably most exaggerated with nuclear modes. NMR is extremely sharp, to the extent that the influence of local molecular fields is detectable (contributing consistent ~ppm shifts in resonant frequency, including -- you guessed it -- splitting due to nearby nuclei coupling to each other!). It even works all the way up at gamma-ray frequencies (~EHz?), resolving extremely small (~Hz?) frequency shifts (Mössbauer effect).
Maybe a bit beside the point, but an important and interesting difference.
Wow, I must try that! I already have a DDS and an oscilloscope instead of an SA, but no air capacitor for now, only a sugar-cube-sized one from an old AM radio. Between its moving fins it has a dielectric that looks like plastic foil, which I hope is polypropylene.
Double-tuned resonators are easy enough to experiment with, as is much of any network of the sort, really -- you can take two LCs of equal value and couple them with a small capacitor, for example. Give this a play-around:
https://www.jrmagnetics.com/rf/doubtune/doubccl_c.php
One of those AM radio varicaps should be fine; I'd guess (hope) the Q factor is well over 100. The main downside is probably just that you have only the one. Well, paired with a fixed resonator, you can see how the relative amplitude of the peaks shifts with mismatched Fo's -- one of the interesting behaviors of this system: you're not so much tuning the peaks this way as the amplitude balance between them. The coupling factor determines the splitting, and the geometric average of the Fo's determines the center frequency (I think?).
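If it helps, here's a rough numeric version of that setup (my own guessed component values, not taken from the linked page):

[code]
import numpy as np

# Two parallel LC tanks with loss resistors setting Q, top-coupled by a small
# capacitor Cc (all values are my own guesses). Drive tank 1 with a current
# source, watch |V2|; detuning tank 2 shifts the amplitude balance of the peaks.

L  = 100e-6           # H, both tanks
C1 = 253e-12          # F, tank 1 (f0 ~ 1 MHz)
Cc = 10e-12           # F, coupling capacitor
Q  = 100.0

def v2(C2, freqs):
    R1 = Q * np.sqrt(L / C1)                  # parallel loss resistance, tank 1
    R2 = Q * np.sqrt(L / C2)                  # and tank 2
    out = []
    for f in freqs:
        w = 2 * np.pi * f
        Y = np.array([[1j*w*(C1 + Cc) + 1/(1j*w*L) + 1/R1, -1j*w*Cc],
                      [-1j*w*Cc, 1j*w*(C2 + Cc) + 1/(1j*w*L) + 1/R2]])
        V = np.linalg.solve(Y, [1e-3, 0.0])   # 1 mA drive into node 1
        out.append(abs(V[1]))
    return out

freqs = np.linspace(0.90e6, 1.05e6, 6001)
for detune in (1.00, 1.02, 1.05):             # C2 = C1 * detune
    r = v2(C1 * detune, freqs)
    peaks = [(freqs[i], r[i]) for i in range(1, len(r) - 1)
             if r[i] > r[i - 1] and r[i] > r[i + 1]]
    print(f"C2/C1 = {detune:.2f}:",
          ", ".join(f"{fp/1e3:.0f} kHz / {a:.1f} V" for fp, a in peaks))
[/code]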
Tim