I have no idea who put it there, but I can assure you the actual paper contains no such nonsense.
I would have thought whoever writes the Google tech blogs is more competent than bottom-tier science journalists. But in this case I think it is more reasonable to assume malice, as the post is authored by the Google Quantum AI Lead, and it makes more sense as hype-boosting buzzword bullshit than as an honest misunderstanding that was not caught during editing.
There are compelling arguments to believe in the many-worlds interpretation.
No sign of a Heisenberg cut has been observed so far, even as experiments involving entanglement of larger and larger molecules are performed, which makes objective-collapse theories hard to consider seriously.
Bohmian theories are nice, but require awkward adjustments to reconcile them with relativity. But more importantly, they are philosophically uneconomical, requiring many entities that are unobservable even in principle [0].
That leaves either many-worlds or quantum logic/quantum Bayesian interpretations as serious contenders [1]. These interpretations aren't crank fringe nonsense. They are almost inevitable outcomes of seriously considering the implications of the theory.
I will say that personally, I find many-worlds to focus excessively on the Schrödinger-picture pure state formulation of quantum mechanics. (At least to the level that I understood it — I expect there is literature on the connection with algebraic formulations, but I haven't taken the time to understand it.) So I would lean towards quantum logic–type interpretations myself.
The point of this comment was to say that many-worlds (or "multiverses", though I dislike the term) isn't nonsense. But it also isn't exactly the kind of sci-fi thing non-physicists might picture. Given how easy it is to misinterpret the term, however, I must agree with you that a self-aware science communicator would think twice about whether the term should be included, and that there may be not-so-scrupulous intentions at play here.
Quick edit: I realise the comment I've written is very technical. I'm happy to try to answer any questions. I should preface it by stating that I'm not a professional in the field, but I studied quantum information theory at a Master's level, and always found the philosophical questions of interest.
---
[0] Many people seem to believe that many-worlds also postulates the existence of unobservable parallel universes, but this isn't true. We observe the interaction of these universes every time we observe quantum interference.
While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function. They don't discretely separate from one another, either — it depends on your choice of basis (see the toy example after these notes). That choice is where decoherence comes in.
[1] And of course, there is the Copenhagen "interpretation" — preferred among physicists who would rather not think about philosophy. (A respectable choice.)
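To make the basis-dependence point in [0] concrete, here is a standard textbook toy example (my own addition, using the usual |±⟩ = (|0⟩ ± |1⟩)/√2 notation, nothing specific to Willow): the very same two-qubit state splits into different pairs of "worlds" depending on which basis you choose to write it in.

```latex
% The same singlet state written in two different bases: the "worlds" you
% read off (0/1 vs. +/-) depend entirely on the choice of basis.
\[
  \lvert \Psi^{-} \rangle
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert 0 \rangle\lvert 1 \rangle - \lvert 1 \rangle\lvert 0 \rangle\bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert - \rangle\lvert + \rangle - \lvert + \rangle\lvert - \rangle\bigr),
  \qquad
  \lvert \pm \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 0 \rangle \pm \lvert 1 \rangle\bigr).
\]
% Decoherence (interaction with an environment) is what singles out one basis
% as the stable, classical-looking one in practice.
```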
I think the key point that makes the quoted statement sciencey gibberish is that the Many Worlds Interpretation is just that - an interpretation. There is no way to prove or disprove it (except if you proved that the world is not actually quantum mechanical, in which case MWI might not be a valid interpretation of the new theory). Saying "this is more evidence for MWI" is thus true of any quantum mechanical experiment, but anything that is evidence for MWI is also exactly as much evidence for Pilot Waves (well, assuming it is possible to reconcile with quantum field theory), the Copenhagen Interpretation, QBism, and so on.
As a side note, there is still a huge gap between the largest system we've ever observed in a superposition and the smallest system we've ever observed to behave only classically. So there is still a lot of room for objective collapse theories, even though that space has shrunk by some orders of magnitude since it was first proposed. Of course, objective collapse has other, much bigger, problems, such as being incompatible with Bell's inequalities.
Edit: I'd also note some things about MWI. First, there are many versions of it, some historical, some current. Some versions, at least older ones, absolutely did involve explicit branching. And the ones that don't still have a big problem explaining why, out of the many ways to choose the basis vectors for a measurement, we always end up with the same classical measurables in every experiment we perform on the world at large, especially given that we know we can measure quantum systems in any other basis if we want to. It also ultimately doesn't answer the question of why we need the Born rule at all: it still postulates that an observer only has access to one possible value of the wave function and not to all at once. And of course, the problem of defining probabilities in a world where everything happens with probability 1 is another philosophically thorny issue, especially when you need the probabilities to match the amplitudes of the wave function.
So the MWI is nice, and it did spawn a very useful and measurable observation, decoherence. But it's far from a single, satisfying, complete, self-consistent account of the world.
This would be true if we were talking about something like String Theory, or Loop Quantum Gravity.
But it is not true for MWI: MWI was designed from the ground up as an interpretation of the mathematics and experimental results of quantum mechanics.
It is designed specifically to match all of the predictions of quantum mechanics, and to not make any new ones. Other interpretations are also designed in the same way.
So, if the people creating these interpretations succeeded in their goals when making them, then they will never be experimentally verifiable.
I think the point about it being unscientific is completely fair, as far as a press release aiming to appear scientific is concerned.
However, I also think there is a tendency among well-educated people in physics to dismiss philosophical questions out of hand. It's fair enough when the point is "let's focus on the physics as it's hard enough", but questions of interpretation have merit in their own right.
MWI or Parallel Worlds is an interpretation of QM; it is one of the 15-20 major interpretations of QM. Nothing at all wrong with MWI. Sean Carroll speaks kindly towards MWI and I have tended to agree with his views over the years. I don't see any wild claims being made that would warrant a major reaction, but I would agree Willow's results are so impressive that one should at minimum consider them evidence in favor of MWI. I don't see how this doesn't count as evidence for MWI.
Thank you for this clarification -- for me it addresses a good part of the crank/fringe/sci-fi aspect
> While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function.
That's right, I agree that Multiple Worlds isn't any less correct/falsifiable than quantum mechanics as a whole.
I've never heard about quantum logic before. The "Bayesian" part makes sense because of how it treats the statistics, but the logic? Is that what quantum computer scientists do with their quantum circuits, or is it an actual interpretation?
"Many-world interpretation" is just a religion, it has nothing to do with physics. Pilot Wave is an example of a physical theory, Copenhagen is an administrative agreement.
I'm pretty sure pilot wave is the same kind of unfalsifiable interpretation of the experimental results that MWI is. Also I think people are making too big a deal out of the comment in the article. I took it as kind of tongue-in-cheek. An expert would know MWI is unfalsifiable and inconsequential.
I'm sure they meant it refutes pilot-wave theory, though it seems that's not precisely true if you consider a non-local hidden variable to explain instantaneous interaction.
Quantum computation being done in multiple universes is the explanation given by David Deutsch, the father of Quantum Computing. He invented the idea of a quantum computer to test the idea of parallel universes.
If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.
Also, your comment does not have any useful information. You assumed hype as the reason why they mentioned parallel computing. It's just a bias you have in looking at the world. Hype does help explain a lot of things, so it can be tempting to use it as a placeholder for anything that you don't accept based on your current set of beliefs.
I disagree that it is "the best explanation we have". It's a nice theory, but like all theories in quantum foundations / interpretations of quantum mechanics, it is (at least currently) unfalsifiable.
I didn't "assume" hype, I hypothesized it based on the evidence before me: there is nothing in Google's paper that deals with interpretations of quantum mechanics. This only appears in the blog post, with no evidence given. And there is nothing Google is doing with its quantum chip that would discriminate between interpretations of QM, so it is simply false that "It lends credence to ... parallel universes" over another interpretation.
From what I understand, David Deutsch invented the idea of the quantum computer as a way to test Parallel Universes. And later people went on and built the quantum computer. Are you saying that the implementation of a quantum computer does not require any kind of assumption about computations being run in parallel universes?
It's just not how it works. All this type of quantum computer can do is test some of the more dubious objective collapse theories. Those are wrong anyway, so all theories that are still in the running agree.
> If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.
I can handle it, sure, and the idea of the multiverse is attractive to me from a philosophical standpoint.
But we have no evidence that there are any other universes out there, while we do have plenty of evidence that our own exists. Just because one of something exists, it doesn't automatically follow that there are others.
I believe their point was that, if you accept the reality of _this_ universe being created from nothing, why wouldn't you also accept the notion of _other_ universes similarly existing too.
I can get on board with that: that there may be other, distinct universes, but I do not understand how this would lead to the suggestion they would be necessarily linked together with quantum effects.
Disagree with that. The fact that we reasonably accept a well-proven theory (i.e. the observed universe exists) that has some unexplained parts (we don't currently have a reasonable explanation for where that universe comes from) doesn't mean that we should therefore accept any unproven theory, especially an unfalsifiable one.
Presumably the 'nonsense' is the supposed link between the chip and MW theory.
Let me add a recommendation for David Wallace's book The Emergent Multiverse - a highly persuasive account of 'quantum theory according to the Everett Interpretation'. Aside from the technical chapters, much of it is comprehensible to non-physicists. It seems that adherents to MW do 'not know how to refute an incredulous stare'. (From a quotation)
Everett interpretation simply asserts that quantum wavefunctions are real and there's no such thing as "wavefunction collapse". It's the simplest interpretation.
People call it "many worlds" because we can interact only with a tiny fraction of the wavefunction at a time, i.e. other "branches" which are practically out of reach might be considered "parallel universes".
But it would be more correct to say that it's just one universe which is much more complex than what it looks like to our eyes. Quantum computers are able to tap into this complexity. They make a more complete use of the universe we are in.
This might turn into a debate of defining "simplest", but I think the ensemble/statistical interpretation is really the most minimal in terms of fancy ideas or concepts like "wavefunction collapse" or "multiverses". It doesn't need a wavefunction collapse nor does it need multiverses.
A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[85]
Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[86]
In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[87] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[88]
A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[89]
A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[90] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[90]
i think if these polls were anonymous, copenhagen would lose share. there's a reason why MWI is disproportionately popular among people who basically have no professional worries because they are already uber-distinguished.
No, but as non-experts in a given field, the best information we have to go on is the consensus among scientists who are experts in the field.
Certainly this isn't a perfect metric, and consensus-smashing evidence sometimes comes to light, but unless and until that happens, we should assume that the people who study this sort of thing as their life's work are probably more correct than we are.
I think the idea here is that the choice of which hypothesis to verify is based on the risk assessment of the scientist, whose goal is to optimize for successful results, and hence better theories are more likely to surface. In this way one does not need to form a consensus around the theory, but instead form a consensus on what constitutes a successful result.
Ideally this would be true, but funding agencies are already preloaded with implicit assumptions about what constitutes scientific progress.
I assume this is a rhetorical question, since you are perfectly capable of doing a search for "the scientific method" on your own.
MWI has not led to any verifiably-correct predictions, has it? At least none that aren't also predicted by other interpretations, which have other, better properties.
Okay. I have a hypothesis that the rain is controlled by a god called Ringo. If you pray to Ringo and he listens to your prayer it will rain in next 24 hours. If he doesn't listen it won't rain. You can also test this experimentally by praying and observing the outcomes.
i don’t really view “shut up and calculate” or very restrained copenhagenism as a real view at all.
i think if you were to ask people to make a real metaphysical speculation, majority might be partial to everett - especially if they felt confident the results were anonymous
I believe the vast majority of researchers in quantum computing* spend almost no time on metaphysical speculation,
*Well, those on the "practical side" that thinks about algorithms and engineering quantum systems like the Google Quantum AI team and others. Not the computer science theorists knee-deep in quantum computational complexity proofs nor physics theorists working on foundations of quantum mechanics. But these last two categories are outnumbered by the "practical" side.
not metaphysically equivalent. also, i’m not so certain it will always be untestable. i would have thought the same thing about hidden variables but i underestimated the cleverness of experimentalists
I think "experimentally equivalent" is what GP meant, and as of today, it holds true. Google's results are predicted by other interpretations just as well as by Everett. Maybe someday there will be a clever experiment to distinguish the models but just "we have a good QC" is not that.
You don't even have to get to the point where you're reading a post off Scott Aaronson's blog[1] at all; his headline says "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel."
In the same way people believe P != NP, most quantum computing people believe BQP != NP, and NP-complete problems will still take exponential time on quantum computers. But if we had access to arbitrary parallel universes then presumably that shouldn't be an issue.
The success on the random (quantum) circuit problem is really a validation of Feynman's idea, not Deutsch's: classical computers need on the order of 2^n complex amplitudes to simulate n qubits, so we will need quantum computers to efficiently simulate quantum phenomena.
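As a rough back-of-the-envelope sketch of that scaling (my own illustration, assuming a dense state vector with one 16-byte complex amplitude per basis state; real simulators can often do better by exploiting structure):

```python
# Memory needed to store a full n-qubit state vector classically,
# assuming one complex128 (16 bytes) amplitude per basis state.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:2d} qubits -> {amplitudes:>20,} amplitudes ~= {gib:,.1f} GiB")
```

The numbers blow up fast, which was Feynman's original point about simulating quantum systems classically.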
Does access to arbitrary parallel universes imply that they divide up the computation and the correct answer is distributed to all of the universes, or will there be sucker universes in such a collection which always receive wrong answers?
Good question! The whole magic of quantum computation versus parallel computation is that the "universe" amplitudes interfere with each other so that wrong answers cancel each other out. So I suppose the wrong "universes" still exist somewhere. But it's a whole lot less confusing if you view QC as taking place in one universe which is fundamentally probabilistic.
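A minimal sketch of what "cancel out" means here (a toy single-qubit example with numpy, of my own devising, nothing to do with Willow specifically): apply a Hadamard gate twice to |0>. In between, both "paths" carry equal amplitude; after the second gate the contributions to |1> interfere destructively and only |0> survives.

```python
import numpy as np

# Hadamard gate and the |0> state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

after_one = H @ ket0       # [0.707, 0.707]: both outcomes equally weighted
after_two = H @ after_one  # [1.0, 0.0]: the |1> amplitudes cancelled out

print("after one H:", np.round(after_one, 3))
print("after two H:", np.round(after_two, 3))
print("measurement probabilities:", np.round(np.abs(after_two) ** 2, 3))
```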
I don't understand the jump from: classical algorithm takes time A -> quantum algorithm takes time B -> (A - B) must be borrowed from a parallel universe.
Maybe A wasn't the most efficient algorithm for this universe to begin with?
Right, and that's part of the argument against quantum computing being a proof (or disproof) of the many-worlds interpretation. Sure, "(A-B) was borrowed from parallel universes" is a possible explanation for why quantum computing can be so fast, but it's by far not the only possible explanation.
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
That's in line with a religious belief. One camp believes one thing, another believes something else, others refuse to participate and say "shut up and calculate". Nothing wrong with religious beliefs of course, it's just important to know that that's what it is.
The Schrödinger equation inherently contains a multiverse. The disagreement is about whether the wave function described by the equation collapses to a single universe upon measurement (i.e. whether the equation stops holding upon measurement), or whether the different branches continue to exist (i.e. the equation continues to hold at all times), each with a different measurement outcome. Regardless, between measurements the different branches exist in parallel. It’s what allows quantum computation to be a thing.
> The Schrödinger equation inherently contains a multiverse.
A simple counterexample is superdeterminism, in which the different measurement outcomes are an illusion and instead there is always a single pre-determined measurement outcome. Note that this does not violate Bell's inequality for hidden variable theories of quantum mechanics, as Bell's inequality only applies to hidden variables uncorrelated to the choice of measurement: in superdeterminism, both are predetermined so perfectly correlated.
When taking the entire universe as a quantum system governed by the Schrödinger equation, ψ is the universal wavefunction, and its state vector can be decomposed into pointer states that represent the "branches" of MW.
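Schematically, the decomposition being described looks like the following (my own notation, a standard textbook form rather than a quote from the thread): each term pairs a system outcome with a matching apparatus record and environment state, and each such term is what MW calls a branch.

```latex
% Schematic decomposition of the universal wave function into pointer states;
% each term is one "branch". Decoherence makes the environment states nearly
% orthogonal, so the branches stop interfering for all practical purposes.
\[
  \lvert \Psi \rangle
  \;=\; \sum_i c_i \,
  \lvert s_i \rangle_{\mathrm{sys}} \otimes
  \lvert a_i \rangle_{\mathrm{app}} \otimes
  \lvert \varepsilon_i \rangle_{\mathrm{env}},
  \qquad
  \langle \varepsilon_i \vert \varepsilon_j \rangle \approx \delta_{ij}\,.
\]
```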
None of that hocus-pocus is needed if you just give up locality; a hard deterministic explanation like de Broglie–Bohm gives all the same correct measurements and conclusions as the Copenhagen interpretation, without multiverses and "wave function collapses".
The Copenhagen interpretation is just "easier" (like: oops, all our calculations about the universe don't seem to fit, let's invent "dark matter") when the correct explanation makes any real-world calculation practically impossible (thus ending most further study in physics), as any atom depends on every other atom at any time.
De Broglie–Bohm doesn't remove anything from the wave function, and thus all the pointer states therein contained are still present. The theory basically claims that one of them is special and really exists, whereas the others only mathematically exist, but philosophically it's not clear what the difference would be. The de Broglie–Bohm ontology is bigger than MW rather than smaller.
I suspect the real issue is that Big Tech investors and executives (including Sundar Pichai) are utterly hopped up on sci-fi, and this sort of stuff convinces them to dedicate resources to quantum computing.
>Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?
Doesn't this also mean that other universes have civilizations that could potentially borrow capacity from our universe, and if so, what would that look like?
It's a perfectly legit interpretation of what's happening, and many physicists share the same opinion. Of course the big caveat is that you need to interfere those worlds so that they cancel out, which necessarily imposes a lower algorithmic bound that prevents you from doing an infinite amount of computation in an instant.
This is a viable interpretation of quantum mechanics, but currently there is no way to scientifically falsify or confirm any particular interpretation. The boundary between philosophy and science is fuzzy at times, but this question is solidly on the side of philosophy.
That being said, I think the two most commonly preferred interpretations of quantum mechanics among physicists are 'Many Worlds' and 'I try not to think about it too hard.'
Well, if you study quantum physics and the folks who founded it, like Max Planck, they believed in "a conscious and intelligent non-visible living energy force ... the matrix mind of all matter".
I don't know much about multiverse, but we need something external to explain the magic we uncover.
Energy and quantum mechanics are really cool but dense to get into. Like Planck, I suspect there's a link between consciousness and matter. I also think our energy doesn't cease to exist when our human carcass expires.
"Just marketing" in science journalism and publications is basically at the root of the anti-intellectualism movement right now (other than the standard hyper-fundamentalist Christians that need to convince people that science in general is all fraud), everyone sees all these wild claims and conclusions made by "science journalists" with zero scientific training and literal university PR departments that are trivially disproved in the layman's mind simply by the fact that they don't happen, and they lose faith not in the journalists who have no idea what they are writing about, but in science itself
I used to love Popular Science magazine in middle school, but by high school I had noticed how many of its claims were hyperbole and outright nonsense. I can't fathom how or why, but most people blame the scientists for it.
You've mentioned this in another comment. I have to point out, even if this is his opinion, and he has been influential in the field, it does not mean that this specific idea of his has been influential.
Correct. The laws of quantum mechanics (used for building quantum computers, among other things) make very few assumptions about the nature of the universe, and support multiple interpretations, many-worlds being only one of them.
Quantum mechanics is a tool to calculate observable values, and this tool works very successfully without needing to make strong assumptions about the nature of the universe.
> It performed a computation in under five minutes that would take one of today's fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it's 10,000,000,000,000,000,000,000,000 years.
If it's not, what would be your explanation for this significant improvement then?
Quantum computing can perform certain calculations much faster than classical computing in the same way classical computing can perform certain calculations much faster than an abacus
I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.
Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?