Google claims to have reached quantum supremacy
RoGeorge:
Thank you for finding the paper. :-+ Looks like they did use a quantum computer indeed, their "Sycamore" processor. What I don't get is whether that is a scientific paper or a journal article. Also, I don't get why. Doing research just to be able to say "Quantum Supremacy" seems to me like writing "First!" in the comments section of a YouTube video. ;D Must be something else I am missing. Anyway, why did they take the announcement back?
maginnovision:
Maybe because it was a machine built to solve a problem rather than a machine you could make solve problems. That seemed to be IBM's issue with it.
golden_labels:
People forget history so easily! Do you remember magnetism, and then electricity, when they were very young? Everything was “magnetic” or “electric”, and that was widespread in quackery too. This is the same phenomenon we’re observing now with buzzwords like “quantum” and “AI”.

I believe Google might have created what they claim they did. There is no reason to lie, for the very same reason the US had no reason to fake the moon landing: just like NASA had all the means of sending astronauts to the moon in 1969, quantum computing alone is quite boring news in 2019. Both just had to use their resources very well to achieve some new goal which no one else had reached before. The catch is that Google hasn’t made anything extremely new. They made it “quantum” to bring media attention. At the same time they’ve probably progressed much more in data mining algorithms, but who wants to hear about crunching numbers, pages of maths, and being 1.004% better than foo?

If the general public (and investors and speculators are the general public) doesn’t have the slightest idea about some topic, you can make them attach any meaning to it in their brains (in this case: “modern”, “faster”, “better”) and then use the term to invoke the corresponding feelings. It bears little resemblance to what is actually offered by the given solution, but it sounds good. And all the “buts” are needed, but they shouldn’t be seen as something negative; they’re inherent to the technology. Quantum computing will solve many problems much faster, but no one ever promised it will solve everything faster. There are problems which are linear in their nature, and most of those faced by an average person fall into this category; quantum computing will not make them magically faster. No, you will not get a billion FPS in your favourite game if you run it on a quantum computer. ;)

RoGeorge: Can you post a proof that, while resources grow linearly, the size of problems your computing method can solve grows at least exponentially with fast falling error probability? If not, you are missing something.
RoGeorge:
--- Quote from: maginnovision on September 24, 2019, 08:00:12 am ---Maybe because it was a machine built to solve a problem rather than a machine you could make solve problems. That seemed to be IBM's issue with it.
--- End quote ---

My understanding is that the Sycamore chip is a generic quantum computer, and it can run various other quantum algorithms. For that paper they programmed it with a specific quantum algorithm, one that is particularly difficult to simulate on a classical computer. It is very hard for me to believe that Google would take the time to develop the Sycamore chip just to say "First!" in quantum supremacy.
RoGeorge:
--- Quote from: golden_labels on September 24, 2019, 09:23:40 am ---RoGeorge: Can you post a proof that, while resources grow linearly, the size of problems your computing method can solve grows at least exponentially with fast falling error probability? If not, you are missing something.
--- End quote ---

Not exactly a formal proof, but I can show an example that highlights why computing complexity in terms of big-O analysis is the wrong question to ask.

Let's play "guess my secret number" again, this time with classical hardware only. Somebody picks a "secret" number in the range 0..255, and we try to find it.

One approach is to ask about each number, one by one:
- Is it 0?
- No.
- Is it 1?
- No.
- Is it 2?
- No.
... and so on until we find the "secret" number. This algorithm finds the secret number in at most 256 steps. In computer science, it is said to have a complexity of O(n), so at most 256 steps in our example.

Another approach, much faster, is the so-called "cut in half" method, AKA binary search. Things go like this:
- Is it bigger than 127?
- No.
- Is it bigger than 63?
- Yes.
... and so on until we find the secret number. This algorithm has a complexity of O(log n), so for our example it is guaranteed to find the "secret" number in at most 8 steps. (Both approaches are sketched in the toy code at the end of this post.)

OK, pretty boring so far. Now, let's take a step back and see what we were really doing in the last algorithm. In fact, we were asking for the secret number bit by bit. Asking whether the number is bigger than 127 is equivalent to saying "tell me the most significant bit of the secret number". So the binary search algorithm can just as well be written like this:
- Tell me the first (most significant) bit!
- 0
- Tell me the second bit!
- 1
... and so on, until we get all 8 bits of the "secret" number.

OK, so what? Well, there is yet another algorithm that is guaranteed to solve our problem in exactly one step, so O(1). This best possible algorithm works like this:
- Tell me the secret number! ;D
- 75.
Problem solved in one step.

Now, why don't we just "ask" our classical computers "tell me the darn answer" and solve any problem in one step? The answer is trivial, but we were "trained" to disregard it. Any classical computer is a 1-bit serial machine, AKA a Turing machine. Essentially, a classical computer executes everything step by step, with a width of 1 bit. Yes, we do some tricks and now we all have 64-bit computers, but note that this 64 is a fixed number, no matter what algorithm we are running. We can not truly adjust the number of bits processed in parallel according to the solution space of each algorithm; we are doing the best we can with the 64 bits we have. 64 bits is better than a 1-bit Turing machine, but essentially it is still a serial machine.

What I am trying to show here is that the O(log n) limitation comes from the topology of current classical computers: they are serial machines, not parallel machines. Now, even if we had computers where the width of a word could be adjusted at will, according to each algorithm, we still couldn't be as fast as an analog computer, or a quantum one. In fact, we already have the technology for adjustable-width parallel digital computing in the form of FPGAs, so why is the digital computer with a serial architecture the king, and not the parallel computer? Because parallel digital computing doesn't scale well with a big number of bits, while the serial one scales up pretty well, but only up to a point.
From that point on, we realize that our serial architecture can not cope with some algorithms that tend to have a nasty big-O.

To conclude:

1. For now we have the following technologies:
- the one-bit serial Turing machine
- a few-bits-wide (64) parallel machine, though the architecture is still serial
- other tricks to achieve adjustable-width parallelism (yet still digital), e.g. distributed computing, multicore, etc.
- purely parallel digital machines, like an FPGA
- then there is analog computing and quantum computing

2. The problems where the number of steps for a given algorithm escalates too fast with big numbers might be mitigated by using parallel algorithms instead of serial algorithms.

3. To mellow down the number of steps even more, instead of digital algorithms with only 2 states (0/1) per bit, we can try analog algorithms, where the number of states "encoded" by each "analog bit" is virtually infinite. This is helpful because analog algorithms can operate over a much broader range of values, virtually an infinite number of values per "analog bit". In practice it is not possible to get infinite resolution from an analog value, but it is still better than having only 2 possible values per bit, as in 0/1 digital computing. Note that classical analog computing operates with continuous values in R (the real numbers).

4. Then there is quantum computing, where each qubit holds a continuous range of values in C (the complex numbers), until the qubit is "collapsed" into a 0/1 at the final step of a computation: a continuous C qubit is turned into a digital 0/1 bit by the measurement gate, which is always the last gate in any quantum algorithm. (A toy sketch of this collapse is at the end of this post.)

In the end, the speed of solving a computation problem is given by the combination of algorithm type, number type (digital, real, or complex), and computer architecture type (serial or parallel). Ultimately, the proper combination of all these possible speeding factors is a mix dictated by the technologies available.
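To make the step counts concrete, here is a quick Python sketch (my own toy code, with made-up names like linear_guess and binary_guess, nothing from Google's paper) of the two classical strategies described above:

--- Code: ---
# Linear questioning, O(n): ask "is it 0?", "is it 1?", ... until the answer is yes.
def linear_guess(secret, n=256):
    steps = 0
    for candidate in range(n):
        steps += 1
        if candidate == secret:
            return candidate, steps
    raise ValueError("secret out of range")

# "Cut in half" / binary search, O(log n): halve the remaining range at every question.
def binary_guess(secret, n=256):
    lo, hi = 0, n - 1
    steps = 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if secret > mid:       # "Is it bigger than mid?"
            lo = mid + 1
        else:
            hi = mid
    return lo, steps

print(linear_guess(75))   # (75, 76): up to 256 questions in the worst case
print(binary_guess(75))   # (75, 8): always 8 questions for the range 0..255
--- End code ---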
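And a tiny illustration (again my own toy code) of the point that binary search over 0..255 is the same as asking for the secret number one bit at a time, most significant bit first:

--- Code: ---
def guess_bit_by_bit(secret, bits=8):
    # "Tell me bit 7!", "Tell me bit 6!", ... rebuilds the number in exactly `bits` questions.
    guess = 0
    for k in reversed(range(bits)):   # bit 7 down to bit 0
        if secret & (1 << k):         # each question costs one yes/no answer
            guess |= 1 << k
    return guess, bits

print(guess_bit_by_bit(75))  # (75, 8): eight answers, one per bit
--- End code ---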
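For point 4, here is a toy state-vector sketch (plain numpy, my own simplification, not how Sycamore is actually programmed) of a single qubit as a pair of complex amplitudes that the measurement step collapses into a classical 0 or 1:

--- Code: ---
import numpy as np

rng = np.random.default_rng()

def hadamard(state):
    # Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    return H @ state

def measure(state):
    # The "last gate": sample 0 or 1 with probability |amplitude|^2.
    probabilities = np.abs(state) ** 2
    return rng.choice([0, 1], p=probabilities)

qubit = np.array([1, 0], dtype=complex)       # start in |0>
qubit = hadamard(qubit)                       # amplitudes are now complex values in C
print([measure(qubit) for _ in range(10)])    # each measurement yields a plain 0/1 bit
--- End code ---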