**RoGeorge**:

Can you post a proof that, while resources grow linearly, the size of problems your computing method can solve grows at least exponentially with fast falling error probability? If not, you are missing something.

Not exactly a formal proof, but I can show an example that highlights why computing complexity, in terms of big-O analysis alone, is the wrong question to ask.

Let's play the "guess my secret number" game again, this time with classical hardware only. Somebody picks a "secret" number in the range 0..255, and we try to find it.

One approach will be to start asking one by one about each number:

- Is it 0?

- No.

- Is it 1?

- No.

- Is it 2?

- No.

... And so on until we find the "secret" number. This algorithm will find the secret number in at most 256 steps. In computer science, this algorithm is said to have a complexity of O(n), so at most 256 steps in our example.
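The one-by-one approach above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's production code; the function name and the step counter are mine:

```python
def linear_guess(secret, n=256):
    """Ask "Is it 0?", "Is it 1?", ... until the answer is yes."""
    steps = 0
    for guess in range(n):
        steps += 1              # one question asked
        if guess == secret:     # "Is it <guess>?" -- "Yes."
            return guess, steps
    raise ValueError("secret not in range")

print(linear_guess(75))   # -> (75, 76): found after 76 questions
```

In the worst case (secret = 255) all 256 questions are needed, which is exactly the O(n) behavior described above.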

Another approach, much faster, is the so called "cut in half" method, AKA binary search. Things go like this:

- Is it bigger than 127?

- No.

- Is it bigger than 63?

- Yes.

... And so on until we find the secret number. This algorithm is said to have a complexity of O(log n), so for our example it is guaranteed to find the "secret" number in at most 8 steps.

OK. Pretty boring so far. Now, let's take a step back and see what we were really doing in the last algorithm. In fact, we were asking for the secret number *bit by bit*. Asking if the number was bigger than 127 is equivalent to asking "tell me the most significant bit of the secret number". So, the binary search algorithm can just as well be written like this:

- Tell me the first (most significant) bit!

- 0

- Tell me the second bit!

- 1

... and so on, until we get all the 8 bits of the "secret" number.
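The bit-by-bit view above is the same binary search in disguise, which a short sketch makes explicit (again just an illustration; the names are mine):

```python
def bit_guess(secret, bits=8):
    """Rebuild the secret from "Tell me bit k!" answers, MSB first."""
    value = 0
    for k in range(bits - 1, -1, -1):
        bit = (secret >> k) & 1   # "Tell me bit k!" -- "0" or "1"
        value = (value << 1) | bit
    return value                  # exactly `bits` questions asked

print(bit_guess(75))   # -> 75, reconstructed from 8 one-bit answers
```

Eight one-bit questions, eight one-bit answers: the same O(log n) cost as the "cut in half" dialogue, just phrased directly in terms of bits.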

OK. So what? Well, there is yet another algorithm that is guaranteed to solve our problem in exactly one step, so O(1). This best possible algorithm works like this:

- Tell me the secret number!

- 75.

Problem solved in one step. Now, why don't we just "ask" our classical computers to "tell me the darn answer", and solve any problem in one step?

The answer is trivial, but we were "trained" to disregard it. Any classical computer is a 1-bit *serial* machine, AKA a Turing machine. Essentially, a classical computer executes everything step by step, with a width of 1 bit. Yes, we do some tricks and now we all have 64-bit computers, but note that this 64 is a fixed number, no matter what algorithm we are running. We cannot truly adjust the number of bits processed in parallel to match the solution space of each algorithm. We do the best we can with the 64 we have. 64 bits is better than a 1-bit Turing machine, but essentially it is still a serial machine.

What I am trying to show here is that the O(log n) limitation comes from the topology of current classical computers: they are serial machines, not parallel machines.

Now, even if we had computers where the width of a word could be adjusted at will, according to each algorithm, we still couldn't be as fast as an analog computer, or a quantum one.

In fact, we already have the technology for an adjustable-width parallel digital computer, in the form of FPGAs. So why is the digital computer with a serial architecture the king, and not the parallel computer? Because parallel digital computing doesn't scale well to large numbers of bits, while the serial one scales up pretty well, but only up to a point. From that point on, we realize that our serial architecture cannot cope with some algorithms that tend to have a nasty big-O.
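The serial-vs-parallel point can be made concrete with a toy count of question "rounds". Plain Python only simulates the idea, of course; a real width-8 machine would ask all the bit questions in one hardware step:

```python
def rounds_needed(num_questions, width):
    """Rounds to ask `num_questions` one-bit questions on a machine
    that can ask `width` of them simultaneously (ceiling division)."""
    return -(-num_questions // width)

print(rounds_needed(8, 1))   # width-1 (serial) machine: 8 rounds
print(rounds_needed(8, 8))   # width-8 (parallel) machine: 1 round
```

The serial machine pays the full O(log n) rounds; a machine whose width matches the problem collapses them into one.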

To conclude:

1. For now we have the following technologies:

- one bit serial Turing machine

- a few bits (64) parallel machine, but the architecture is still serial

- other tricks to achieve adjustable width parallelism (yet still digital), e.g. distributed computing, multicore, etc.

- purely parallel digital machines, like a FPGA

- then there is analog computing and quantum computing

2. The problems where the number of steps for a given algorithm escalates too fast with the input size might be mitigated by using parallel algorithms instead of serial algorithms.

3. To mellow down the number of steps even more, instead of digital algorithms with only 2 states (0/1) per bit, we can try analog algorithms, where the number of states "encoded" by each "analog bit" is virtually infinite, so analog algorithms can operate with a much broader range of values per "bit". In practice, it is not possible to get infinite resolution from an analog value, but still, it's better than having only 2 possible values per bit, as in 0/1 digital computing. Note that classical analog computing operates with continuous values in R (the real numbers).

4. Then, there is quantum computing, where each qubit value lies in a continuous range of values in C (the complex numbers), until the qubit is "collapsed" into a 0/1 at the final step of a computation (a continuous C qubit is turned into a digital 0/1 bit by the measurement gate, which is always the *last* gate in any quantum algorithm).

In the end, the speed of solving a computational problem is given by a combination of algorithm type, number type (digital, real or complex) and computer architecture type (serial or parallel). Ultimately, the proper mix of all these possible speedup factors is dictated by the available technologies.