The Trouble with Many Worlds (rongarret.info)
93 points by lisper on July 21, 2019 | 108 comments



The many-worlds interpretation is a good way to reason about quantum mechanics (certainly better than giving some special status to the experimenter), but it's meaningless to ask whether those other worlds really exist or not, not just because it's un-falsifiable, but also because if one defines "existence" to include alternate worlds, then every possible state of the world exists. If everything exists, then it's no longer a useful property to talk about.

One can replace many-worlds with an idea similar to relativity: you can only make statements about the universe from the perspective of a particle within that universe. Things entangled with that particle are "real", and one can make statements about them; things not entangled with that particle are not.

When dealing with composite observers like ourselves, it's possible for a particle to be entangled with only part of the observer. Since entanglement propagates quickly, this is only a temporary inconvenience, but there are certainly some metaphysical questions to answer about that interim state.


>it's meaningless to ask whether those other worlds really exist or not

Dunno - if you think of something like the two-slit experiment, whether the worlds where the particle goes through slit one or slit two exist seems to have experimental consequences, since together they produce an interference pattern.

You can also ask about more complicated situations and whether the other worlds exist may not be meaningless if they have an effect on event probabilities in ours, if only a small one.


> if you think of something like the two-slit experiment, whether the worlds where the particle goes through slit one or slit two exist seems to have experimental consequences, since together they produce an interference pattern

Interference is not due to interaction between "worlds". You don't have "one world" where the particle goes through one slit and "another world" where it goes through the other. (If you think that's the case, how does "our world", where the interference can be observed, fit into that description?)

You can perfectly explain interference with a single world where the particle doesn't go through any particular slit. It doesn't have to, because we're not trying to determine which slit it goes through (and if we do, the interference disappears).


In the many worlds interpretation, as soon as a world has "split"/"the wave of differentiation has hit"/"particles have been entangled"/"whatever terminology you want to use" from the observer, it has zero impact on any future results, not just a small impact. Until that split happens, we're just talking about our own world. It's never possible to observe any evidence for more than one world, that follows from the way we define what a "world" is.

Many-worlds does not try to explain why particles have a wave function, it only tries to explain the source of the apparent randomness in the way that wave function collapses, whilst removing the need for experimenters to be in some way "special" and immune from quantum effects.


Is that really true? In principle, if many worlds is the case then you could have a super machine enact a powerful unitary transformation which transforms the state of the observer such that there is interference between the two copies of the observer.


If "many-worlds" is unfalsifiable, doesn't that mean "not-many-worlds" is unfalsifiable?


"not-<anything>" is un-falsifiable. You can't prove that something is true with an experiment (which is what your double negative is suggesting).


I think you're still saying that X and not-X aren't symmetric. But it seems to me that you can always define not-X as Y, and X as not-Y.


Why is it better than Pilot Wave Theory?

It doesn’t even rule out non-locality.


I didn't comment on its superiority to any other interpretation, in fact I only pointed out a problem with it?

Pilot wave theory seems good in that it contains more falsifiable components than many other interpretations. I have no idea whether it's true or not though. It feels like trying to "avoid" the weirdness in quantum mechanics is a human bias though.


Answer me this: what is, really, the difference between bias and knowledge?


Disliking weirdness is a bias: Something you feel and act upon. Knowledge has nothing to do with what you feel.


Something that never made sense to me is why people think of Quantum Mechanics (the non-relativistic classical one) as fundamental. While some of its principles certainly are, clearly QFT (quantum field theory) is fundamental. From the perspective of quantum field theory there really is no observer, just different fields that couple to each other. Something like a measurement can in principle be described by a complicated interaction of fields (think of shining light at a double slit). The big mystery, I would rather say, is why the path-integral formulation of QFT (of which the principle of least action is the classical limit) agrees so well with reality.


There are a couple of reasons (you alluded to them when you said some of the principles of QM are indeed as fundamental as QFT):

- My favorite one is that in terms of computational complexity, QM and QFT are equivalent. A computational machine based on the rules of QM can simulate QFT efficiently.

- How does QFT describe the following situation: you have a particle created at location x that then propagates and passes through a double slit setup where you might selectively close one of the slits at any time (i.e. you measure the particle's location). Any challenges of interpretation present in QM are still present in the QFT formalism (but mathematically, both do predict the evolution of the system correctly).

- While my QFT experience is limited, I have the impression that the tools it has for dealing with mixed states are less developed than the ones in QM. Given that, due to the previous reasons, QFT does not give much new insight into the topic of measurements, it is reasonable to stick to the simpler (but equivalent in our parameter regime) theory of QM and its more sophisticated mixed-state toolkit.


- Regarding your first point, I'm pretty sure this is untrue without further qualifications (which I understand that you don't mention because this is a forum and not a scientific setting). In any case any such mapping would most likely be NP hard (assuming your quantum computer has q-bits that can only interact according to some graph, I would expect this to reduce to a graph isomorphism problem).

- You would deal with that by writing down a path integral (see for example Feynman's PhD thesis) (btw. particle creation is not possible in non-relativistic QM). The catch is that "closing the slit" is of course tricky to model in a path integral, but the situations "slit closed" and "slit open" are rather straightforward. In any case even without any computation what is clear from the path integral perspective is that nothing mysterious is going on: You are supposed to sum over all possible histories of the particle passing through the slit and hitting the screen, weighted by e^iS (a toy numerical version of this sum is sketched at the end of this comment). If there is some temporal variability in the position of the slit, this just results in a huge complication in the integral to be carried out (if you model closing the slit by a time-dependent potential, say). The path integral is fundamentally a very good way of thinking about this, because it generalises to much more complicated settings (gauge theory etc.), whereas there really is nothing fundamental about measuring; you just happen to drastically, and in a temporally complicated way, change the background your quantum fields are propagating in.

- On the contrary, basically QFT is the only way to deal with mixed states in a principled way. For scattering you typically start out with the assumption that things are in pure states at t=-infty and t=+infty, but in between the whole point of introducing quantum fields is to keep track of how things are spatially (plus gauge degrees of freedom etc.) "mixed". To be more specific QM is just a D=1,0 QFT, with 1 time dimension and (zero-dimensional) points as spatial dimensions. A mixed state rho is nothing more than a general quantum field on these discrete points.
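To illustrate the sum-over-histories point above: a minimal sketch, assuming a monochromatic source and restricting the histories to straight segments source -> slit -> screen, so each path contributes a phase exp(i*k*L) with L the path length. All names and parameters here are made up for illustration.

    import numpy as np

    # Toy sum over histories for the double slit: each straight path
    # source -> slit -> screen contributes a phase exp(i*k*L), where L is
    # the path length. Closing a slit just removes its paths from the sum.
    k = 2 * np.pi / 0.1        # wavenumber for wavelength 0.1 (arbitrary units)
    L_src, L_scr = 5.0, 5.0    # source-to-slits and slits-to-screen distances
    slits = [0.5, -0.5]        # slit positions (separation 1.0)

    def intensity(y, open_slits):
        # Add up the amplitudes of every open path ending at screen point y.
        amp = sum(np.exp(1j * k * (np.hypot(L_src, s) + np.hypot(L_scr, y - s)))
                  for s in open_slits)
        return abs(amp) ** 2

    screen = np.linspace(-3, 3, 13)
    print([round(intensity(y, slits), 2) for y in screen])      # fringes
    print([round(intensity(y, slits[:1]), 2) for y in screen])  # one slit: flat

Closing a slit is modelled here by simply dropping its paths from the sum; the single-slit intensity is flat while the two-slit intensity oscillates.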


On point 1: I did not get your argument about NP hardness and equivalence to graph isomorphism. On the contrary, algorithms for efficiently simulating non-trivial QFTs on a non-relativistic quantum computer exist: https://arxiv.org/abs/1111.3633

Point 2: If "there is nothing fundamental about measuring" in the QFT case, I do not see what is so special about the QM case (you just take a partial trace over the "environment"). I (admittedly with humility as I am not as well versed in QFT) really do not see how your answer is any different from this (and I do not see how the path integral gives you anything new for this particular problem, albeit being a beautiful formalism).

Point 3: QM has developed a lot of tools to deal with Markovian and non-Markovian non-unitary dynamics (the whole zoo of master equations available in it). Of course QFT can deal with density matrices if QM already can do that, but the sophistication of the toolkit used for that purpose in QM seems yet unsurpassed to me. And to your last point about creation of particles: Second quantization is already available in non-relativistic QM, so there is nothing weird about an a^dagger*b Hamiltonian in QM (I use it all the time for cavity-qubit interactions) - so, yes, you cannot deal with the creation of arbitrary particles in QM, but you can still easily work with some restricted modes of a field, without involving QFT.


Regarding point 1: I should say that I'm not an expert in Quantum Computing, but reading the referenced paper leaves me unconvinced that such an algorithm exists in general. In the paper they show that they can simulate phi^4 efficiently and with arbitrarily small discretisation error. My point regarding the graph isomorphism in this case is as follows: In order to carry out the discretisation, they employ a d-dimensional lattice and additionally introduce a discrete number of Q-bits per lattice site. Then they need to be able to evolve the state according to some time-dependent Hamiltonian (with interactions adiabatically switched on and off). All proposed realistic quantum computers only allow for a limited set of primitive operations, dictated by the physical geometry of the implementation (1d lattice of atoms in a trap etc.). My point was that more likely than not you will always be able to come up with theories for which a mapping respecting these physical constraints is hard (you won't easily be able to simulate a 3d lattice with multiple q-bits per lattice site on a 2-d lattice of q-bits). Also the article has to restrict itself to the case of massive particles and, as you know, lattice simulation of fermions is also a problem.

Point 2 & 3: You are right of course, ultimately this is a question of what techniques are useful. My comment was mostly aimed at the situation where people start to discuss the philosophy of QM. There I find that QFT clarifies the situation more than any philosophical elaboration on "measurement" and things like that do.


I find this article rather hand-wavy. I mean, we have a very good understanding of the measurement process in quantum mechanics through experiments performed e.g. by Haroche's and Siddiqi's groups, which demonstrate that quantum measurements are continuous, deterministic processes. Irreversibility and the transition of a quantum state to a "classical" state can be explained very well using entanglement and decoherence, which originate from the coupling of the measured system with a very large, external quantum system. With this approach, no magical wave-function collapse or breaking of time reversal symmetry is required to explain the observed collapse of quantum oscillations after measurement.

The interpretations of quantum mechanics that rely on wave-function collapse need to provide a plausible mechanism for how the irreversibility within the measurement process comes about, as a collapsed wave function is no longer time-reversible (i.e. if we would flip the sign of the Hamiltonian we would not be able to go back to a previous state as the wave function was irreversibly changed during the measurement) and I haven't seen any physically plausible attempt at this so far. So personally I still favor the many-worlds hypothesis, if only because it doesn't require invoking a so-far unknown process that destroys reversibility during quantum evolution.


What do you mean by measurements being deterministic? In Haroche's cavity-QED experiments the outcome of each individual measurement is fundamentally random and cannot be predicted. Quantum mechanics (regardless of which interpretation one chooses) only gives a probability distribution for the outcome of many measurements.


Quantum measurements are not discontinuous, instantaneous processes but rather determined by the continuous evolution of the joint Hamiltonian comprising the system being measured and the measurement system. You can perform partial measurements of a quantum state and even exert feedback on the quantum system to keep it at a given state (so-called quantum feedback). The outcome of a given measurement is only random because we are part of the measurement system and become entangled with it, which in combination with decoherence - due to the near-infinite number of degrees of freedom in the measurement system - leads to our experience of "quantum jumps". All of this can be explained with the Schrödinger equation alone, without the need to invoke the principle of wave-function collapse.

I don't like theories that require such a wave-function collapse mechanism in order to get rid of all the extra "worlds", because they would require a physically plausible mechanism that governs this collapse and the breaking of time-reversal symmetry that it creates. It is also hard to think about a scaling mechanism that would govern the wave-function collapse: If we imagine that we will one day be able to build large quantum computers with many millions or even billions of quantum bits, we will be able to couple e.g. a single qubit to the computer and perform a pseudo-random (but entirely deterministic) range of operations on the other qubits. Should we assume that at some point the wavefunction of the single qubit that is coupled to the many other qubits will collapse? In that case we should not be able to reverse the qubit to its initial state by reversing all operations performed on the other qubits. At which scale should this happen then?

If we could eventually scale up the quantum computer to comprise a near-infinite number of qubits, in such a way that it's able to run a simulation of a toy universe with a conscious observer in it, would that be enough to induce wave-function collapse (even if we can still reverse the deterministic gate sequence at any time)? In such a case, for each qubit state there would also be a version of the observer, each seeing a different "collapsed" wave-function of the qubit. However, we could still reverse the gate sequence of the computer at some point and bring it back to its original state, uncollapsing the wavefunction of the qubit in the process. Now, we would of course also reverse the state of the observer(s) to their/its initial one.

The question is, can we restore the state of the single qubit to an uncollapsed quantum state while keeping the state of the observers untouched? And if we could do it, what would it mean for the observer(s); will we collapse their wave function in the process? Given infinitely precise control over our quantum computer we should actually be able to do it, since the evolution of the quantum system is still deterministic. The more interesting question is whether this reversal of the single-qubit state will automatically lead to a collapse of the two observer(s) back to a single one as well (here I'm not sure, but this should be calculable).
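For concreteness, a minimal numerical sketch of the reversal argument (my own toy setup, not the experiments above): one system qubit is coupled to a small "environment" register by a random unitary, its reduced state decoheres, and applying the inverse unitary restores everything, because Schrödinger evolution is reversible.

    import numpy as np

    # One "system" qubit coupled to a 3-qubit "environment" by a random
    # global unitary U. Decoherence shows up as the system's reduced density
    # matrix losing its off-diagonal terms; applying U^dagger undoes it exactly.
    rng = np.random.default_rng(0)
    n_env = 3
    dim = 2 ** (1 + n_env)

    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U, _ = np.linalg.qr(M)                    # QR of a random matrix gives a unitary

    plus = np.array([1.0, 1.0]) / np.sqrt(2)  # system starts in |+>
    env0 = np.zeros(2 ** n_env); env0[0] = 1.0
    psi0 = np.kron(plus, env0)                # environment starts in |000>

    def reduced_system(psi):
        # Partial trace over the environment.
        m = psi.reshape(2, 2 ** n_env)
        return m @ m.conj().T

    psi1 = U @ psi0
    print(np.round(reduced_system(psi1), 3))  # coherences generically suppressed
    psi2 = U.conj().T @ psi1                  # run the whole evolution backwards
    print(np.allclose(psi2, psi0))            # True: the qubit is "uncollapsed"

The conscious observer in the thought experiment is, on this view, just a very much larger version of the environment register.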

In any case the conscious observer in our quantum computer would have no way (as far as I can think of) to determine whether it lives in a many-worlds quantum universe or one where wave functions collapse. We, on the other hand, would know that it's a many-worlds quantum universe, because we just execute a reversible, deterministic gate sequence on the computer.

Sorry for rambling, my point is that it's a very metaphysical and (mostly) irrelevant question for most physicists, and it's unlikely that we will really come up with a way of deciding this question, as we are part of the system that we are trying to investigate. Answering the questions for a quantum world that we simulate in a powerful computer might be possible, but it won't tell us much about our own universe. I personally just prefer the many-worlds hypothesis since it seems more elegant and simple and does not require coming up with a new mechanism for destroying time-reversal symmetry in quantum mechanics.


> All of this can be explained with the Schrödinger equation alone, without the need to invoke the principle of wave-function collapse.

But you need to invoke the principle of "we are part of the measurement system and become entangled with it, which in combination with decoherence leads to our experience of "quantum jumps"" which is not that much easier to understand and you still have to introduce the Born rule somehow if you want to predict probabilities.

The questions in your comment may be good questions, but the MWI doesn't really answer them either.


> All of this can be explained with the Schrödinger equation alone

That depends on what you mean by "all of this." If you include the Born probabilities in "all of this" then no, "all of this" cannot be explained by the SE alone. That's the whole point of the article.


As someone who's read a lot of work on the philosophy of quantum mechanics, I want to say that this piece is VERY well written. (And I wasn't the person who posted it to HN.)


Thank you! (I'm the author and the poster.)


My apologies if I get this wrong. Is this stating that the many possibilities are all coexisting and that our brain is acting as a filter to provide a view of possibilities that we define as ourselves?


That depends on what you mean by "this". If by "this" you mean MWI (the Multiple Worlds Interpretation of quantum mechanics) then:

> Is this stating that the many possibilities are all coexisting

Yes.

> and that our brain is acting as a filter to provide a view of possibilities that we define as ourselves?

No, your brain is not "acting as a filter". There are multiple "your brains" all "coexisting" in multiple universes (which MWI-ers call "the multiverse"), none of which can ever communicate with each other.


Your objections seem to me to arise straightforwardly from a disconnect over the definition of "you". Taking DW's position, and your axiom of unique self, I can resolve the issue by saying something like:

Knowing that you can never be certain which branch you will end up in, bet on the outcome that maximizes the likelihood that you will end up in a branch you favor. Your bet, of course, will follow the form of the Born rule.


Please explain the disconnect. In MW, there are many future "you" (or "I"), all of which will exist. The fact that they do not share information with each other doesn't change the fact that they are all you (or I). There isn't a single branch you "end up" in, you are in all of them.


Your choice of the future here is arbitrary - we could just as easily say that there are many past and present "you" in the multiverse. This assumes a definition closer to DD/DW's than the author's.

The author seems to assert that his experience of self is incongruous with this definition of your identity. As a historical fact, "you've" always either chosen chocolate or vanilla, not both. The other branches aren't in fact you, any longer.


What is the meaning of the phrase "which branch you will end up in" given that MWI implies that all branches exist?


Armchair physicist here, and yet:

> The probabilistic predictions of quantum theory are conventionally obtained from a special probabilistic axiom. But that is unnecessary because all the practical consequences of such predictions follow from the remaining, non-probabilistic, axioms of quantum theory, together with the non-probabilistic part of classical decision theory.

Well this seems crazy! We can drop the Born rule axiom, because you (a game theorist) will make the same decisions whether the universe is deterministic or probabilistic?

The difference is crucial, right? A PRNG requires hidden state: exfiltrate the state and predict all future results. But there is no room in the wavefunction + Schrödinger equation for such state: you either augment it (Bohmian mechanics) or accept the essential probabilistic nature.
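A concrete version of the "exfiltrate the state" point, using Python's standard Mersenne Twister purely as an illustration of the principle:

    import random

    # A PRNG is deterministic given its hidden state: copy the state and
    # you can predict every future output exactly.
    rng = random.Random(12345)
    stolen = rng.getstate()             # exfiltrate the hidden state

    predictor = random.Random()
    predictor.setstate(stolen)

    future = [rng.random() for _ in range(5)]
    predicted = [predictor.random() for _ in range(5)]
    print(future == predicted)          # True: nothing random is left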

Probabilistic predictions cannot be obtained from deterministic axioms, just like PRNGs cannot produce true randomness.

How does Deutsch resolve this?


Deutsch is a many-worlder so he would say there is nothing to resolve because there is no randomness. Everything in the time-evolution of the multiverse is deterministic.


Could I ask a question? As you are clearly someone with a deep knowledge of QM.

Why is Bohmian mechanics so frequently dismissed? It's such a straightforward premise - there is an actually physical guiding wave and everything else falls out "normally". Instead pages are written about MWI and hypothetical agents optimizing their overall decision in universes they'll never see. And undefinable concepts of trillions of universes splitting simultaneously. And I'm not trying to be snarky at all, as I feel what you wrote above is one of the best presentations of the topic I've seen.

To be perfectly frank, QM is starting to scare me about physics. To a layman, which I am, Bohmian mechanics is so simple and straightforward it's almost "obviously right". The macroscopic analogy of Faraday waves [1] is almost a nail in the coffin (to a layman). Allow for non-locality of guiding waves, just like ripples on a lake, and everything else is deterministic and matches all scientific observations. Events have probability for the same reason the weather has probability: because we can't measure with the sensitivity required to account for the complexity of the system and the difference in initial conditions.

And yet, I've never seen any write-up against Bohmian mechanics. I've never seen anyone with deep knowledge on the subject discuss why Bohmian isn't the best interpretation. It's just dismissed as an "also-ran". What scares me is that 20 years from now something will emerge making it clear that Bohmian is the "correct" interpretation. And then the question in my mind will be, "what took it so long to even be seriously considered?". It would be proof to me that physics is being shifted from investigating and following up on the best explanation for data to instead becoming "lawyers" trying to find data to support their favorite pre-decided argument (i.e. their favorite QM interpretation).

Now all of that said, there is a reason why laymen don't make meaningful contributions to a field. There are deeper complexities that make their intuitions wrong. But why are these flaws in Bohmian mechanics never discussed?

Would you mind taking even a few sentences to write up "what's wrong with Bohmian" that would make the infinitely more complex MWI a more likely candidate? I'm lost.

[1] - http://www.tcm.phy.cam.ac.uk/~mdt26/tti_talks/deBB_10/bush_t...


The best place to find the answer to this question that I know of is David Z. Albert's excellent book, "Quantum Mechanics and Experience." But the short answer to your question is that Bohmian mechanics has two problems:

1. In order to account for the outcome of Bell-type experiments on entangled particles it has to assign a temporal ordering to space-like separated events. The technical term for this is that you have to choose a "preferred foliation of space-time". There has to be a preferred reference frame. But you can never actually know what the preferred reference frame actually is.

2. Yes, particles "have positions", but you can never actually know what those positions are (which is why I put "have positions" in scare quotes). This is where all the quantum randomness hides in Bohmian mechanics. It's all "pre-computed" in the infinite precision of a particle's position, but that position is necessarily hidden from observation. I call this an IPU, an Invisible Pink Unicorn. It's exactly the same thing as universe-weights in MWI -- a set of numbers that are part of the theory but rendered immune from observation not by practical limitations on technology, but by the theory itself.

This is the fundamental problem with all attempts to make quantum mechanics look deterministic. The simple fact of the matter is that it's not deterministic, so any attempt to make it look deterministic that makes the same predictions as QM has to hide the randomness somewhere. Bohm hides it in particle positions, and MWI hides it in universe weights. But it doesn't matter what you call the place in the theory where you've hidden the randomness. What matters is that there is a place in the theory where you've hidden the randomness, where it must forever remain hidden from the prying eyes of experiment. So the claims that both Bohm and MWI make of being deterministic are misleading at best.


>but that position is necessarily hidden from observation. I call this an IPU, an Invisible Pink Unicorn.

But why expect that all state of the universe be open to observation? This seems counter-intuitive to me. It seems far more reasonable that there necessarily are facts about an implementation that no supervening system can determine from within that system. For example, there are facts about a physical computer that no software running on that computer could deduce. So the fact that a QM theory posits state that is in principle off-limits to observation doesn't seem like a reductio, but the expected case.


That's a good point, but remember, this is about rhetoric, not physics. The question is not whether hidden state exists (it clearly does) but what kind of story you want to tell about it. If you find it enlightening to think about hidden state as position, and you don't mind accepting all of the difficulties that entails (like having to choose a preferred foliation), then by all means go for it. But that is very different from saying that this story is actually true. The only reason to prefer Bohm over a similar story that ascribes the randomness to a literal invisible pink unicorn making decisions about experimental outcomes is aesthetics.


What EPR and Bell's arguments showed is that if you have definite results of experiments at the space-time locations where/when we think the results happened, then there has to be something non-local going on and a foliation is the simplest way to orchestrate that. So either give up a certain kind of definiteness (MW) or introduce a foliation (BM). A foliation is somewhat unpleasant, but there are ways to tease them out of the existing structures of relativity + wave function [0].

As for determinism, that is not the main point for many proponents of BM. Rather, it was about having a clear theory. Let's start at the beginning. We decide to have a theory about particles. What does that mean? Well, there is some stuff with positions and those positions change in time. And that's what BM gives. It explains, immediately, why a wave function is on a configuration space of particles. Having this leads to a variety of important notions. For example, having a position leads to clarity on identical particles, namely, use a space without labels on the particles and the wave functions just work out[1] (disclaimer: I am an author on that one and did part of my PhD thesis on that [4]; my thesis also derives spin as well as the Dirac equation from a Bohmian perspective).

Another example is QFT and divergences. From a Bohmian perspective, QFT is best thought of as about wave functions over a configuration space consisting of different disjoint sectors involving different number of particles. To do this, there is a random jump process created (not deterministic!). Thinking about wave functions that work with that, one is led to wave functions where the probability moves from one sector to another appropriately based on this jumping. And that solves, at least in some simple cases, the UV divergence of QFT.[2]

The randomness of BM is actually quite interesting. It is all about the Quantum Equilibrium Hypothesis, something which is justified in similar ways to thermodynamics kinds of arguments. In fact, it makes it clear that what is interesting is not why we can't know some things, but why we can know stuff at all. [3]

Also, in case you have not seen it, you might want to take a look at a version of MW by some of the Bohmians behind the papers.[4] As with most things, there is a lot of clarity in their perspective.

Finally, as for why BM is not more popular, well, let's just say it is rather hard to stay in academia as a Bohmian. I barely tried, largely because academia is unpleasant for a variety of other reasons, but it simply is very hard to get hired when working on unfashionable material. Grants and all that. This is on top of the problem of getting people to change some fundamentally long held beliefs. This goes hand in hand with the fact that standard QM is the thermodynamics version of BM and so QM agrees with BM empirically to the extent that QM makes predictions. That is to say, BM provides the rigorous foundation for the collapse rules.

[0]: Can Bohmian Mechanics be Made Relativistic? http://arxiv.org/pdf/1307.1714
[1]: Fermionic Wavefunctions on Unordered Configuration Space. http://arxiv.org/pdf/1403.3705
[2]: Bohmian Trajectories for Hamiltonians with Interior–Boundary Conditions. https://arxiv.org/pdf/1809.10235.pdf
[3]: Quantum Equilibrium and the Origin of Absolute Uncertainty. http://arxiv.org/pdf/quant-ph/0308039
[4]: Many Worlds and Schrodinger's First Quantum Theory. http://arxiv.org/pdf/0903.22111
[5]: Connections with Bohmian Mechanics. http://jostylr.com/thesis.pdf


> We decide to have a theory about particles.

I would say that's exactly where Bohm runs off the rails. The fact of the matter is that quantum systems are not particles. They are waves [1]. They can sometimes bunch up into very small spaces and behave to a very good approximation as if they were particles, but they aren't.

If you insist that your theory talk about particles, then Bohm is a not-entirely-unreasonable place to end up. But that's kind of like saying that if you insist that your theory tell you how many angels can dance on the head of a pin that 42 is a not-entirely-unreasonable answer.

Thanks for the references, those look interesting.

P.S. The reference 4 link is broken. I'm guessing you meant https://arxiv.org/abs/0903.2211

---

[1] https://arxiv.org/abs/1204.4616


Yes, that article is what I meant.

I read the article you cited. While enjoyable, it does not make its case for me. It very much feels like presuming the wrong ideas of what a particle theory is.

---

First, double slit. Points on the screen, wave pattern built up. The natural conclusion is a wave guiding a particle. That's what BM gives. BM does not deny the existence of the wave function. Rather, it gives it a reason for being. The article talks about viewing the wave function collapse as a balloon hitting a needle and popping. But it doesn't really explain why a world full of waves should have anything point-like. It certainly doesn't come from the dynamics of the Schrodinger equation. It just is a statement that the wave should localize when it encounters something already localized. Now, this being the wave function, it is not multiple waves on real space, but rather waves on R^3N space where N is the number of particles, which includes those of the detecting screen. But what particles, right? An electron does not have a wave function. There is only one universal wave function. This is not N waves rolling around in 3-space. It is one wave in 3N-space and its relation directly to our experience in 3-space is rather obscure.

This does not mean you can't have waves. But I think your angel analogy applies to saying "everything is a wave" and MW is not an unreasonable place to end up in.

The article also says that particles are not logically consistent with the 2-slit experiment. That is simply false as BM demonstrates. That theory has been mathematically proven to exist and agree with the standard QM predictions. It works. The double slit is explained as particle AND wave, two separate entities.

---

Another piece of the article was about a confined wave that instantly expands. It seems to get at the heart of the misconception. There is no notion in BM of having to confine waves to make them particle like. Rather, BM allows waves to expand as they do. They can be their own thing, doing whatever they like. The particle aspect is handled by the particles which can do what the wave tells them to do. In the Dirac version of BM, the velocity can never be greater than the speed of light for the particle, basically by construction. To the extent that a narrowly confined wave function leads to problems, this would say, those would not be the relevant wave functions. One can always replace such a sharp thing with a close enough in L2 approximation to not have that kind of sharp behavior. This gets into the domain of the Hamiltonian which can have some pretty important aspects to the evolution.

---

The vacuum and movement. This is just me spitballing, but in the ground state, according to BM, particles will not move. They just sit there. So it is quite possible for the vacuum to look empty (nothing moving), but there is plenty of stuff out there in terms of particles. Then when someone moves, they see the particles moving. Not sure if this is reasonable or not, but it is my thoughts on how one can have a vacuum with "no particle" and then have particles present with a relative motion.

---

The final part I will comment on is that of QFT. This would presumably be the strongest of the arguments. But here, the field of operators is fundamentally different than the wave function. As presented in the article, we have a field of operators based on EM. They operate on wave functions over Fock space, which is the union of R^3N spaces (removing collisions and replacing that with boundary conditions). Is the claim that the operator field's expectation values are the reality of our experience? I am guessing the claim is that if I want to verify my chair's placement then I am supposed to compute the expectation values of those operators at the space-time point and out pops a chair?

The Bohmian version of this is that we use the wave function, evolving according to a Hamiltonian which may contain an operator valued-field, to guide the particles, including controlling the creation and annihilation of the particles. We can then do an analysis of the theory and come up with predictions, generally in the form of operators as observables. These are deduced, not postulated. A chair is where it is because the chair's particles are where they are. It is a conceptually simple process to map my experience to what the theory is talking about. Having a simple map from my experience to some state of something in the theory is a really nice feature. It is not crucial, but verifiability is greatly helped by this.

---

A very important part of BM is that operators as observables is not postulated, but deduced. This has major advantages in that it is practically trivial to write down a Bohmian theory on a manifold. The standard stuff has problems, such as what the momentum operator becomes. In BM, you write down the theory and then one can analyze to see what would emerge, if anything, to take the place of that observable.

An application of this is creating a quantum theory on shape space, namely, the space of relative configurations that Julian Barbour likes to work with. It is a very natural space to consider and Bohmian mechanics can accommodate it easily: https://arxiv.org/pdf/1808.06844.pdf


> The natural conclusion is a wave guiding a particle.

Indeed. But the natural conclusion can be wrong. Case in point: hold an object in your hand and let it go. It falls. The natural conclusion is that there was a force pulling it down. But this is not actually true. (I gather I don't have to explain this to you. AFAICT, you know physics better than I do.)

In fact, if you think about it, "points on the screen" is the only reason we have to believe in particles, which is to say, in spatially-localized quanta. But this is not probative. It only shows that the spatial localization is of the same order as the size of an atom. But we already know that atoms aren't particles, so this is manifestly not slam-dunk evidence that whatever is tickling those atoms is a particle, nor that an atom's constituent parts are particles.

> So it is quite possible for the vacuum to look empty (nothing moving), but there is plenty of stuff out there in terms of particles.

That doesn't seem reasonable to me. Particles exert forces on each other via gravity and electromagnetism. I don't see how you're going to get a stable static vacuum configuration out of that without special pleading.

> Is the claim that the operator field's expectation values are the reality of our experience? I am guessing the claim is that if I want to verify my chair's placement then I am supposed to compute the expectation values of those operators at the space-time point and out pops a chair?

That's a bit of a caricature, but yes, if you want a completely accurate answer, that is what you have to do. Just as if you want a completely accurate answer about what happens when you drop an apple you have to solve Einstein's field equations. F=Gm1m2/r^2 is a damned good approximation, but it's deeply wrong about the physics.

You might find this interesting:

http://blog.rongarret.info/2018/05/a-quantum-mechanics-puzzl...

http://blog.rongarret.info/2018/05/a-quantum-mechanics-puzzl...

http://blog.rongarret.info/2018/05/a-quantum-mechanics-puzzl...


> "points on the screen" is the only reason we have to believe in particles

There are also the cloud chamber paths. But more generally, I would say most people's experiences correspond closer to stuff having well-defined localization which is more in the ballpark of particles than waves. Waves spread.

Also, this is not a matter of what is true, but rather what is plausible. A natural conclusion which works would be a good candidate to continue to pursue. My opposition to the paper cited was simply that it wants to argue that everything is a wave, something which seems to be an assumption not supported by evidence. That reality can be described that way is one thing, and a fine thing if it leads to interesting notions, but to say it must be a certain way is a completely different kind of claim.

> That doesn't seem reasonable to me. Particles exert forces on each other via gravity and electromagentism. I don't see how you're going to get a stable static vacuum configuration out of that without special pleading.

Particles in BM do not exert forces on each other. The wave function moves the particles about directly by specifying the velocity, not the acceleration. The forces are all in the wave function evolution (gravity is a bit of a mystery, but what else is new). Keep in mind there is a single wave function that represents the universe. It has a complicated dynamics which is where the forces are at work.

Maybe the short paper Are All Particles Identical [0] might help. It describes a version of BM in which all particles are identical and electron, quark, etc., are different states of a single particle type with the mass being incorporated into the wave function itself. Particles in that theory really are just points with nothing else intrinsic about them.

So the typical vacuum state might be just a small part of the story with a non-interacting sector. I don't know, but it certainly does not seem to rule out particles to me as being impossible.

> Just as if you want a completely accurate answer about what happens when you drop an apple you have to solve Einstein's field equations.

Yes, but if you present me with the solved system, I understand immediately what it is describing: a path through space-time of the apple. In a solved version it is easy to see the correspondence.

This is not true of quantum wave stuff. It is true of BM. Relativity might mess with our intuition and be difficult to compute, but it is easy to understand how the elements correspond to our experience. That's the crucial difference.

Now, there is no reason to believe that the fundamental theory has to have that property. It might not. But it is extremely important then to be very clear about how the elements of the theory, the stuff whose state it cares about specifying, do correspond to our experience.

[0]: https://arxiv.org/pdf/quant-ph/0405039.pdf


> There are also the cloud chamber paths.

Those amount to the same thing.

> Particles in BM do not exert forces on each other.

Right. That's part of the problem IMHO. Particles in BM don't really do anything except get pushed around by the wave function. So they don't really correspond to what most people intuitively think of when they think of particles: electrons and protons (and neutrons), which combine not only a spatial location but also mass and electric charge into a single unified package. BM particles only have the spatial location part.

> it is easy to understand how the elements correspond to our experience

I think it's not so hard to see how QM corresponds to our experience if you look at it the right way. Not quite as easy as relativity, but not nearly as hard as it's commonly made out to be.

Note that our experience is at odds with "reality" long before you get to QM. Even in a purely classical model of an atom, it's mostly empty space, and what we perceive as "solid" objects are really just electrons in outer shells trying to push each other out of the way.


I'm not qualified to answer your question, but I'm guessing it'll boil down to "Bohmian mechanical interpretation is more mathematically complicated than the others. Therefore, even though it is the least counter-intuitive of the possibilities, Occam's Razor still requires us to discount it because of the complexity."


Thanks for the writeup, I enjoyed it very much. How does Deutsch connect this view to experimental results? Is the idea that all results are realized, or is my choice of experiment itself predetermined?


> How does Deutsch connect this view to experimental results?

MWI makes exactly the same predictions as regular QM, so it's connected to experimental results in the exact same way that regular QM is. The only difference is rhetoric. The canonical interpretation (Copenhagen) says, "The outcome is random, we don't know why, and we can't know why." MWI says, "The outcome is not random, all possible outcomes actually happen in point of physical fact. It appears random because when you make a quantum observation you split into multiple copies of yourself, and every one of those copies has the subjective sensation of seeing a random outcome even though the actual outcomes were deterministic." So...

> Is the idea that all results are realized,

Yes.

> or is my choice of experiment itself predetermined?

No. This is yet another interpretation of QM called superdeterminism.

https://en.wikipedia.org/wiki/Superdeterminism


My layman’s understanding is that all results are realized many times. The values that we usually interpret as probabilities are really the proportion of universes in which the event occurs; a 95% probability of an experimental outcome means that if we sampled 100 worlds that are successors of the one in which the experiment occurred, 95 of them would have produced that outcome and 5 wouldn’t.

We perceive this as randomness due to the weak anthropic principle: from our perspective, there is only one universe and things actually occurred in only one way, because there’s no interaction between worlds after the split; at every opportunity, we effectively draw one random result from the distribution, because all of the other worlds don’t contain us, they contain copies of us.


Yes, that is almost exactly right. The only issue is that what distinguishes one universe from another is not actually well-defined, so you can't actually "count universes". You have to base the calculation of what proportion of universes experience one outcome versus another on something else. That something else turns out to be "branch weight", which, it turns out, is where the quantum randomness hides in MWI because branch weights can't be measured, not even in principle.


Both you and the other reply refer to counting universes/branches, which is obviously a term of art that I’m not understanding. Where I talk about sampling successor universes, are you just emphasizing that we’re drawing from a continuous probability distribution instead of a discrete one, or is there something else?


> a term of art that I’m not understanding

Actually, it's a hand-wavy term that is not well-defined, because "universe" and "branch" and "world" are not well-defined.

> sampling successor universes

The problem with this is that in order to sample something you have to specify the procedure by which you are going to do the sampling because your results are going to depend on that procedure. So even leaving aside the fact that the thing that you're trying to sample is not even well-defined, there are other issues.

Sampling finite sets is straightforward, but for infinite sets it gets tricky [1], and for infinite sets without a total ordering it gets very tricky. Consider, for example, generating a random complex number. There are at least two plausible ways to do this:

1. Generate two random reals x and y, and combine them to produce the random complex number x + iy.

2. Generate two random reals, r and theta, and combine them to produce the random complex number r * (cos(theta) + i sin(theta)).

Depending on which of these methods you choose, you'll get different-looking distributions. Neither one is "correct".
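A quick sketch of how the two recipes differ, taking (as one arbitrary choice among many, which is exactly the point) "random real" to mean uniform on [0, 1) for x, y, and r, and uniform on [0, 2*pi) for theta:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Method 1: uniform x and y on the unit square.
    z1 = rng.random(n) + 1j * rng.random(n)

    # Method 2: uniform r and theta, combined in polar form.
    z2 = rng.random(n) * np.exp(2j * np.pi * rng.random(n))

    # Same ingredients ("two random reals"), visibly different results:
    # method 2 piles points up near the origin, method 1 does not.
    print(np.mean(np.abs(z1) < 0.5))    # ~0.196 (quarter-disk of radius 0.5)
    print(np.mean(np.abs(z2) < 0.5))    # ~0.5   (r itself is uniform)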

If you're going to talk about sampling worlds you have all these difficulties in spades because worlds are rays in Hilbert spaces, i.e. vector spaces with an infinite number of dimensions. There are an infinite number of ways to generate random distributions over a Hilbert space. The trick is to make an argument for why one particular way is better than all the others.

The obvious argument is to present a way of sampling that reproduces the Born rule and argue that it's better because it reproduces the experimental results. But in order to not be begging the question, this sampling procedure can't have the Born rule hidden within it. That is what Wallace and Deutsch claim to have done, but which I (and many others) dispute.

[1] https://math.stackexchange.com/questions/997173/can-you-pick...


Thanks; I had assumed that drawing a sample was a fundamental operation on a well-defined probability distribution, and that existing QM tools were sufficient to define that distribution, if not distinguish between the various interpretations.


That means branch counting matters, which means there has to be some way for the universes to communicate the results of branch counting with each other. Otherwise there's no mechanism for probability to arise.

If you try to fall back to the position that the probabilities are somehow inherent in the interaction between all the universes, then you may as well dispense with the universes and just say the probabilities are defined by a hidden non-local (i.e. not Bell-violating) interaction in this universe.

Because otherwise you still have to explain how a split (or differential propagation or whatever you want to call it) in one location generates an entire universe instantaneously - i.e. faster than c - while preserving state that is beyond the event horizon of the current location.

There's a kind of sleight of hand about the argument. You can just about imagine multiple copies of a single particle existing along a foliated timeline. But when MWI people say "What splits is the wavefunction of the universe" they're just handwaving.

The wavefunction of the universe doesn't exist as a testable abstraction, and even if it did, are we supposed to believe that it's the only physical entity discovered so far that can ignore relativistic limits? And even if that were true, it implies that the wavefunction of the universe is somehow perfectly deterministic - which is unproven at best.

Those are all very strong claims to be making. In fact (IMO) there's a strong whiff of handwavy nonsense about the idea.

But I have the luxury of not doing this for a living, and I'm very happy to be proved wrong by people who do.

Observer independence - and observer relevance - seems to be a separate problem.


> But I have the luxury of not doing this for a living, and I'm very happy to be proved wrong by people who do.

I'm in a similar (but likely even less knowledgeable) position; I'm having trouble following your logic, presumably because it's relying on terms and results I'm unfamiliar with.

> That means branch counting matters, which means there has to be some way for the universes to communicate the results of branch counting with each other. Otherwise there's no mechanism for probability to arise.

I don't know what branch counting is, but I don't see how the various universes need to communicate with each other. Observers in most of the universes will agree with us about the probabilities, as that's what the history of those universes has shown. In a small but nonzero number of them, however, everything will have happened the way that classical physics predicts and they will have never come up with quantum mechanical theory in the first place.

> Because otherwise you still have to explain how a split (or differential propagation or whatever you want to call it) in one location generates an entire universe instantaneously - i.e. faster than c - while preserving state that is beyond the event horizon of the current location.

I don't see a problem conceptually with envisioning each split like a stress fracture that propagates at the speed of light. As information can't travel faster than that (per relativity), there's no need to envision the two new worlds separating from each other faster than that anyway.


There is no way to prove that a supposedly random source of information is not effectively a PRNG. Similarly, the Born rule must apply unless and until we learn that there is some deterministic process that QM reduces to, and even then, unless we know how the PRNG was 'seeded', we must progress _as if_ the process was in fact random.

The deep point here is, arguably, randomness is in the eye of the observer.


I'm not sure what you mean by "effectively" a PRNG, but the big objection here is the EPR paradox. A true RNG can be stateless, but a PRNG necessarily requires state. So where is that state?

It's not in the wavefunction, so we must introduce new axioms: non-local variables, or super-determinism, etc. Yuck!

The MWI program is eliminating the objectionable "part two" of QM: probabilistic measurements and wavefunction collapse. But it's no progress if the replacements are even worse!


"non-local variables, or super-determinism, etc. Yuck!"

All of these things are less philosophically objectionable to me than the idea that all outcomes (in some sense) "occur" or that the true substance of the universe is an inaccessible, linearly evolving, quantum mechanical wave function.

I mean seriously - the latter is an enormous enlargement and modification of our ontology. Non-locality and super-determinism are actually much less objectionable. Indeed, before quantum mechanics, we were essentially _certain_ that determinism would win the day.

And we only need to jettison one of them, of which non-locality seems the most plausible. In that view, spacetime emerges from some underlying non-local dynamics. This isn't particularly hard to imagine (see a Smolin paper called "Nonlocal beables" or something like that).


Thanks for calling out how the canonical interpretations of QM do not actually save us from many of the problems that are objected to in other interpretations. Sometimes I feel like everyone just repeats what they heard as a student. Smolin's work looks interesting, thanks for the recommendation.


I knew the PhD would come in handy eventually.


> There is no way to prove that a supposedly random source of information is not effectively a PRNG.

Maybe no practical way. But a finite-state PRNG will be periodic while a true RNG won’t. And the length of output over which no repetition has been observed gives you a lower bound on the size of its state.
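To make that concrete with a deliberately weak toy generator (a hypothetical 8-bit LCG, purely for illustration): once you observe a repeat, log2 of the period lower-bounds the state size.

    # A toy 8-bit linear congruential generator (full period 256).
    def lcg8(x):
        return (5 * x + 3) % 256

    seen, x = {}, 1
    for step in range(1000):
        if x in seen:
            print("period:", step - seen[x])   # 256 => at least 8 bits of state
            break
        seen[x] = step
        x = lcg8(x)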


What would be the period of a 4096-bit pseudo-random number generator iterating at a time scale commensurate with the Planck length?

I haven't done the math, but the answer is, a long, long, long time.
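Back-of-envelope (assuming, hypothetically, a full-period generator and one iteration per Planck time, ~5.4e-44 s):

    from math import log10

    # 4096 bits of state allow up to 2^4096 outputs before the cycle repeats.
    planck_time = 5.39e-44                  # seconds per iteration
    log10_period = 4096 * log10(2)          # ~1233
    log10_seconds = log10_period + log10(planck_time)
    print(round(log10_period))              # period ~ 10^1233 iterations
    print(round(log10_seconds))             # ~ 10^1190 seconds

For comparison, the age of the universe is roughly 10^17.6 seconds.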


It can be periodic but unbounded.


Anyone else read Neal Stephenson’s Anathem? It drew heavily from Deutsch, so it’s not totally surprising, but this argument seems pretty close to “Did you read Anathem? Because Anathem.”


This has got me thinking about MW and the speed of light .... a quantum event occurs here and the universes split, that split propagates out at the speed of light ... at roughly the 'same' time a quantum event happens at Alpha Centauri ... the universe splits there and that change propagates out at the speed of light .... 2 years later, half way in between, the splitting universes meet - what does that mean?


"Splitting" does not propagate, or happen with any locality - it's the wave function of the entire universe doing the splitting, so there's nowhere for it to propagate!


It makes sense to talk about the split propagating. A subsystem which is spatially located at a distance from the splitting event will not immediately become involved in the superposition, but may do so once information about the event has traveled.


When we talk about MWI as the asker is trying to more fully understand, we are explicitly not talking about subsystems. There's just the one big evolving state (of the universe).


That's true, but that global state of the universe can be written in terms of its reduced states on spatial subsystems. From this perspective we can talk meaningfully about propagating superpositions. For example, take a toy example in one dimension with three spatial subsystems A, B and C representing disjoint intervals arranged in order. Immediately after a splitting event the combined wavefunction over all subsystems might be:

(psi_1 + psi_2)_A x phi_B x rho_C

(ignoring normalisation for convenience) then after a certain amount of time it becomes

(psi_1 x phi_1 + psi_2 x phi_2)_AB x rho_C

and then eventually

(psi_1 x phi_1 x rho_1 + psi_2 x phi_2 x rho_2)_ABC .

Now, all of that is certainly happening within the realm of some joint superstate, but it still makes sense to talk about how fast the split propagates, surely?
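A numerical version of this toy example, shrinking each interval to a single qubit (A, B, C) and using CNOT gates as a hypothetical stand-in for whatever local interaction carries the information outward:

    import numpy as np

    def cnot(control, target, n):
        # Build an n-qubit CNOT as a dense 2^n x 2^n matrix (fine for n = 3).
        dim = 2 ** n
        U = np.zeros((dim, dim))
        for i in range(dim):
            bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[control]:
                bits[target] ^= 1
            U[int("".join(map(str, bits)), 2), i] = 1.0
        return U

    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])
    psi = np.kron(np.kron(plus, zero), zero)   # (psi_1 + psi_2)_A x phi_B x rho_C

    psi = cnot(0, 1, 3) @ psi                  # the split now spans A and B
    psi = cnot(1, 2, 3) @ psi                  # ...and finally reaches C
    print(np.round(psi, 3))                    # (|000> + |111>)/sqrt(2)

Between the two steps the reduced state on C is still untouched, which is the sense in which the split has not yet reached it.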


All depends on the context. Does it make sense to talk about propagating splitting in this context? Surely no - for three reasons:

1. You're describing a time evolution _of the subsystems_, which isn't really a thing in the MWI.

A many-worlder would say, instead, that the Universe has split a bunch more in the interim. He would point to the time evolution of the state of the Universe, and perhaps then have a discussion about how the inseparability of particular subsystems has propagated over time. Put differently, the many-worlder might say the correlations of these particular relative states with one another propagated over time.

What you've done, Everett would call characterizing branches of the universal state in a space-like locality.

2. Split != superposition. Frequently, splitting in MWI is identified with decoherence, so in that sense there is a self-consistent way to describe local splitting - but then you'd really mean, when you referred to the splitting of "an object" or "a system", that Universal splitting had occurred in such a way as to cause the object to exist in some particular multiple new branches.

3. None of this line of discussion helps the parent gain an understanding of how MWI is importantly different from (and the same as) other interpretations of QM. It's far too shallow to amount to any real expert insight and yet too technical to amount to any real layperson insight.

What can a discussion on propagating splitting illuminate here? It seems to me that it is a less than useful idea for the parent and readers like him/her, and many-worlds is more clearly understood without it.


> You're describing a time evolution _of the subsystems_

In my head I'm thinking about the time evolution of the global state, but examining the reduced state over certain subsystems at specific points in time.

> Split != superposition. Frequently, splitting in MWI is identified with decoherence

Decoherence is a superposition effect, is it not? Entanglement with the environment, i.e. a superposition of system-environment states.

> then you'd really mean, when you referred to the splitting of "an object" or "a system", that Universal splitting had occurred in such a way as to cause the object to exist in some particular multiple new branches

Yes, this is what I mean.

> What can a discussion on propagating splitting illuminate here?

Tbh I think it's unlikely that the parent is still following but I'm continuing for the selfish purpose of trying to better understand your point. That said, I believe that considering my toy example of a global quantum state in one dimension would illuminate their question about superpositions propagating from here and alpha centauri and meeting in the middle.


> Decoherence is a superposition effect, is it not? Entanglement with the environment, i.e. a superposition of system-environment states.

The point I'm trying to make is that "splitting", while sometimes identified with decoherence, isn't superposition (or any other well defined traditional QM phenomenon). It's a term peculiar to MWI and it importantly has no clear canonical technical definition. It generally refers to something just considered abstractly: the branching of a single _universe_ into multiple. If you use "split" and "entanglement" or "superposition" or any other QM term interchangeably, you are bound to invite misunderstanding.

> ...illuminate their question about superpositions propagating...

Agreed... if that was their question. But their question didn't reference superposition at all, it was about a split propagating:

> ...a quantum event occurs here and the universes split, that split propagates out at the speed of light...

Which is why I responded as I did. It is understandably confusing to wonder what it means for propagating split universes to meet years later, if you start talking about splits in this way. Propagating superposed particles? Much easier to make sense of.


Thank you for bearing with me for so long. I think I understand the point of contention, i.e. that "split" is a slightly nebulous term which depends not only on superposition forming but somehow on there being a negligible likelihood of future interference between branches. In this context I agree it doesn't make sense to speak of a split being spatially localised.


It all adds up to normal.


The idea of branch-counting seems untenable when we consider irrational probabilities (which they are, of course, overwhelmingly likely to be), though I think it would be begging the question to say, on that basis alone, that branch-counting is the wrong way to look at it. I think the example used to motivate branch indifference would be more persuasive if it used irrational probabilities.


> it would be begging the question to say, on that basis alone, that branch-counting is the wrong way to look at it.

Exactly right.

> I think the example used to motivate branch indifference would be more persuasive if it used irrational probabilities.

What actually matters is not the probabilities but the branch weights/amplitudes (because you have to square those to get the probabilities). And those are irrational if the probabilities are 2/3 and 1/3.
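To spell the arithmetic out (my example, not the parent's): a branch pair with Born probabilities 2/3 and 1/3 has amplitudes sqrt(2/3) = sqrt(6)/3 and sqrt(1/3) = sqrt(3)/3, both irrational even though the probabilities themselves are rational.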


I take your point about the amplitudes not being rational, but in the examples of branch counting that I have seen, including this particular example, the counting is being done from the probabilities, not the 'raw' amplitudes.


We do not have any experimental reason to believe that "irrational probabilities" have any physical meaning. There is no method to measure them even indirectly, so a theory that operates only with natural numbers is quite plausible.


That's an interesting point. I did wonder, after posting the above comment, whether frequentists, at least, are ipso facto committed to all probabilities being rational, on account of their insistence that probabilities are only meaningful in the context of repeated trials. (FWIW, I lean towards the frequentist viewpoint, insofar as I have a coherent position on the issue.)

But then, what are we to make of physical formulae in which complex, transcendental and other non-rational numbers appear? Are they 'just' tractable approximate models of physical reality? I guess the Maxwell-Boltzmann distribution, at least, could be fairly characterized as such.


As far as I can understand, all our measurements would be the same if all equations using real numbers were replaced by equations with finite precision (containing several hundred digits). The only experiment that could be different is a quantum computer, because a quantum Fourier transform on n qubits requires at least n binary digits of precision on the probabilities too. This is why some people think quantum computers won't work.
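To put a number on the precision claim (a rough sketch of mine, not the commenter's): the smallest controlled-phase rotation in an n-qubit QFT circuit is 2*pi/2^n, so resolving it from zero requires on the order of n binary digits of phase precision.

    import math

    # Smallest phase rotation appearing in an n-qubit quantum Fourier transform.
    for n in (10, 50, 300):
        print(n, 2 * math.pi / 2 ** n)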


I did not like this article. I think the author has the mistaken idea that many worlds asserts the multiverse undertakes a real "splitting" activity when one observes the outcome of an experiment.

It does not. Rather all the different branches have always already existed all the time, and you merely observe what branch “you” are on.

When you see the outcome of an experiment, it doesn’t mean “you branched” as if the branches wouldn’t have existed if you hadn’t done the experiment.

It only means your brain got knowledge about which subset of all possible universes you happen to be in.

The probabilities in quantum outcomes, like all probabilities, are about subjective degrees of belief, as in they are about the state of a mind and not objective attributes of nature.


> all the different branches have always already existed all the time

How many branches are there?

The answer is: there's no way to know because what constitutes "a branch" is not well-defined. So saying that "all the different branches have always already existed all the time" is meaningless because the phrase "all the different branches" is meaningless.

> The probabilities in quantum outcomes, like all probabilities, are about subjective degrees of belief

No, they aren't. When you listen to a geiger counter, the number of clicks it produces is an objective fact, completely independent of any sentient being's beliefs. It is also random.


There would have to be an uncountably infinite number, and the surface of branches would be smooth, likely differentiable, because many experiments can have outcome spaces of uncountably infinite cardinality.

> “No, they aren't. When you listen to a geiger counter, the number of clicks it produces is an objective fact, completely independent of any sentient being's beliefs. It is also random.”

You are simply incorrect about this. Did the Geiger counter click, or did you hallucinate, or did the counter misfire, or did your friend replace it with a joke Geiger counter that always clicks, or did a cosmic ray hit the circuitry at just the right moment, etc.?

The probabilities around these things are about your subjective state of knowledge, always.


> Did the Geiger counter click or did you hallucinate or did the counter misfire, etc.

Doesn't matter. What matters is that objective observers listening to the counter independently will all agree on how many clicks it made in a given period of time. It is this agreement that needs explaining. One possible explanation is that the counter did, in point of physical fact, click that many times. In fact, that seems like a very plausible explanation for the agreement to most people. That plausibility doesn't necessarily make it correct, but it does mean that you cannot dismiss it as "simply incorrect." If I hallucinated the clicks, why then do I agree with all the other observers, including inanimate ones like electronic counters? Are we all experiencing the same hallucination? If the counter "misfired" then why do geiger counters give the appearance of producing random clicks in response to radiation?


Huh? If the counter is broken, you’ll all agree on the wrong number, because of your mistaken brain state (believing the counter to be functioning correctly). You may use a lot of italics, but the statement doesn’t add up to anything.


> If the counter is broken, you’ll all agree on the wrong number

In what way will this number be "wrong"?


I love the post. As time passed, I've come to think of our Universe differently.

When I was out of school and Physics was cool: The Universe existed, created me, I observe, things change.

When I am older and all the things are being unravelled: I existed, I begin to observe, The Universe around me changes.

I am not on drugs, it's how I approach our world today.


What has the human brain to do with anything? This whole thing is philosophical at best. Physics is real whether we observe it or not. I think we settled on that one. Using subjective experience in trying to explain physical systems is bonkers.


On the contrary, decades of experiments violating Bell's inequality have shown that the statement "Physics is real whether we observe it or not" can only be true if we abandon locality. Non-local interpretations of QM, such as Bohmian mechanics, are, however, incompatible with the Standard Model, which is by far the most accurately tested theory in physics.
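The violation itself is just arithmetic once you accept the measured correlations; here's a minimal sketch (mine) using the singlet-state correlation E(a,b) = -cos(a-b):

    import math

    def E(a, b):
        return -math.cos(a - b)           # singlet correlation

    a, ap = 0.0, math.pi / 2              # Alice's two settings
    b, bp = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
    S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
    print(S)  # ~2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2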


There is no incompatibility between orthodox quantum mechanics and Bohmian mechanics; the latter makes exactly the same predictions as the former. As does any other interpretation of quantum mechanics. That's why they're called interpretations: they're ways of ascribing an ontology to, or making sense of, the mathematical models that are known to work so well to make experimental predictions. They're not supposed to be competing theories.

There is nothing inconsistent or inherently problematic about abandoning locality; quantum mechanics has some intrinsically non-local components anyway. It speaks in favor of Bohmian mechanics that it explicitly describes this non-locality and isolates it.


The point is that non-relativistic quantum mechanics and therefore also Bohmian mechanics are wrong. They cannot even describe hydrogen atoms correctly! Solving this issue required developing relativistic quantum field theories. So far nobody has managed to create a convincing relativistic version of Bohmian mechanics.


Realism can be salvaged in a fully deterministic theory.

Note however that empirical science in a deterministic world is a strange concept.

Edit: However in my opinion empirical science in a nonlocal or nondeterministic world is equally strange, if you really think about it.


Using decision theory to explain quantum mechanics seems like a huge mistake for a very simple reason: it confuses "ought" with "is". Decision theory makes statements about what one should do if one wants a certain outcome. As the author points out, what statements are we making about what we want? We are free to decide that dividing ourselves and dividing an outcome are not equivalent. But really the whole idea should look silly long before we come up with such a specific counterexample: the laws of physics have no dependency on the wants of human beings; in fact the wants of human beings are fully dependent on the workings of physics (humans being physical systems), so an explanation of physics in terms of statements about wants should appear absurdly circular.

There does not need to be an agent optimizing outcomes in order for the predictions of quantum mechanics to be correct. There don't need to be agents at all, since the laws of physics worked just fine in the billions of years before there were people. This seems a bit like dressing up the problematic Copenhagen notion of a privileged "observer" in different clothes.

I have never understood what is left to be explained in many worlds, and maybe someone with deeper understanding can explain it: what is the problem with simply saying that the squared amplitude gives the fraction of the wavefunction that has evolved from the initial state into the final state? What need is there to bring probability into the physics? If we are accepting the initial wavefunction state as a premise x, and we associate each configuration in the final state y with an experience we might have, then isn't asking the "probability" of experiencing y given x, in a Bayesian sort of way, naturally, emergently the same thing as asking the fraction of [the wavefunction that evolved from all the configurations associated with x] which is associated with y? What is missing and what is the dispute?

Last, what is this talk of "splitting"? I thought it was true that the wavefunction is "incompressible", in that if some measurable becomes more confined, there is always some other measurable that becomes correspondingly less confined? That is to say, if there is some axis in state space which splits (distinguishes) universes by narrowing possibilities, there must always be some other axis that merges (confuses) universes by widening possibilities? That is to say, if ever you know more about what universe you are in, you must know less (along some orthogonal axis) about what universe you came from? I.e. for every "branch" there is a "merge".


It appears that a similar issue crops up in the interpretation of classical mechanics.

That is, suppose we have our laws of classical mechanics. Hence, we have a nice way of propagating states along time. Now we want to understand thermodynamics and the arrow of time. Maths tells us that, if we slice-and-dice microstates and macrostates in a certain way, then thermodynamics is correct. "In a certain way" means "any non-crazy way with respect to Liouville measure", and the entire thing only holds "for sufficiently chaotic/ergodic systems".

We observe (empirically / subjectively) that thermodynamics holds, and we have an arrow of time. In order to explain this observation, we need an additional axiom:

(A) The universe looks like it has a Liouville-random but fantastically low-entropy initial state in the far past. Liouville-random means something like "absolutely continuous wrt Liouville measure".

We could split that into

(A1) Thermodynamics makes sense. The universe cares about Liouville measure.

(A2) With respect to Liouville measure, we have a low-entropy state at some point in time (hence we can define "past (noun): in the time-direction of the designated point in time with low entropy").

In the overwhelming majority of such universes (classical trajectories), observers see things that are compatible with both thermodynamics and an arrow of time (and there is a possibility for conscious observers to evolve).

Mathematically, there are many ways of measuring phase-space volume in a way that is invariant under time-evolution. Assuming ergodicity/chaoticity/mixing, there is only a single such way that has a bounded density function with respect to "ordinary n-dim volume" (SRB-measure), and all these converge to Liouville. But it would have been mathematically conceivable to take the same evolution equations, use a different measure that is supported on weird subsets of low fractal dimension, and end up with different thermodynamics. Hence, we need an extra axiom to separate observed reality from mathematical possibility, and single out Liouville / ordinary volume.

Nobody takes issue with accepting this axiom. So why do people have issues with accepting the Born rule as an axiom?

It is clear that we must add axioms to the evolution equations to explain that we care about squared-amplitude and that we have an arrow of time. Sure, you can try to prove theorems showing that alternatives to the Born rule are crazy-weird, and make the axiom weaker ("if we care about anything at all, and the thing we care about is not batshit insane, then the Born rule holds"). For example, MWI tries to replace the Copenhagen rule "if humans do experiments, use squared-amplitude" with the axiom "the universe cares about squared-amplitudes in a non-crazy way" plus the pseudo-theorem "human-scale experiments can be approximated by the Born rule". Certainly more elegant, because the universe doesn't need to care about squishy emergent approximate concepts like "humans" anymore. But it is the same axiom in the end, modulo some plausible not-quite-proven maths for the pseudo-theorem.

Not-quite-proven even in the classical case: Formally proving unique ergodicity / chaoticity / fast mixing is far beyond us for almost all non-trivial systems. Just like e.g. complexity theory (we don't even have a formal proof of P!=NP, so the name of the game is "mathematical plausibility" that is spot-checked by tiny little theorems).


> why do people have issues with accepting the Born rule as an axiom

Because it offends people's intuitions about what must be happening behind the scenes to make it true. Thermodynamics is mathematically complicated, but intuitively very straightforward: you've got a bunch of billiard balls bouncing around. People can visualize that. QM is fundamentally different. For starters, the wave function operates in configuration space rather than physical space, and is not always separable (which is what produces entangled states). Furthermore, in classical mechanics, the limitation on knowing the complete state of a system is merely technological. If we had accurate enough measuring equipment the state of any system could be known in principle. In QM this is no longer true. QM makes the true state of a system inaccessible even in principle.

BTW, with respect to the arrow of time, you might enjoy this:

http://blog.rongarret.info/2014/10/parallel-universes-and-ar...


Yeah, QM offends my aesthetic sensibilities as well. I cannot but shake my head in disgust at creation.

But it does not get better by attempting to derive the Born rule. Of all things to get offended by, why do people single out the Born rule? For example, non-locality (e.g. Aharonov-Bohm) is imo much easier to understand and already disqualifies reality from "believable theory-building". I genuinely don't get it.

Re your link: I don't understand why the linked post talks about QM at all. The arrow of time is a perfectly classical phenomenon; phrasing it in terms of QM only helps us understand the formalism of QM if we already understood the classical arrow of time. Or am I misunderstanding something?


> it does not get better by attempting to derive the Born rule.

Well, it would get better if you actually could derive the Born rule without begging the question. But you can't, so it doesn't.

> Of all things to get offended by, why do people single out the Born rule?

I guess you'd have to ask someone who was actually offended by it.

> am I misunderstanding something?

Yes, I think so. The arrow of time is not a purely classical phenomenon. Perhaps it could be, i.e. if we were living in a purely classical universe there might still be an arrow of time, but this is a moot point because we are not living in a classical universe, so classical thermodynamics is not what creates the arrow of time in our universe. It's more fundamental than that. It is impossible to extract a classical universe from quantum dynamics without establishing an arrow of time in the process.


Ah, ok, with "purely classical phenomenon" I meant "the classical limit h->0 also exhibits an arrow of time; observing the arrow of time and thermodynamics is not enough to distinguish between a classical and quantum universe". Therefore we gain no additional understanding of the phenomenon by looking at QM; at most, we gain understanding of the mathematical framework of QM by seeing how its formalism describes the classical arrow of time.

The arrow of time is a phenomenon that is robust under changes of underlying theory; we may as well discuss the simplest models that exhibit it, and these are classical. Just like you would explain chaos with simple models first, e.g. horseshoe, Lorenz, 3-body, before going into highly specific 1000-dimensional systems.

edit: "Deriving the Born rule". An insightful "derivation" would be a pseudo-theorem "if you want QM to limit to classical mechanics + Liouville Axiom, then you must adopt the Born rule". But afaik this has been mostly done, so there is nothing to explain anymore?


> the classical limit h->0 also exhibits an arrow of time

That is certainly true empirically. But it is much more difficult to explain why this happens in classical terms without begging the question -- you can't just invoke the second law of thermodynamics here. You have to derive the second law from Newtonian mechanics. That is an unsolved problem.


No, deriving the second law from Newton+Liouville is easy (in the form of "mathematically plausible pseudo-theorem", that's late 19th century physics). If you don't adopt Liouville measure as axiom, then it is impossible by (more modern) counter-example: Thermodynamics looks different if you pre-suppose that god cares about weird measures; and it is unavoidable if you pre-suppose that god cares about Liouville measure.


OK... but this seems to me simply like trading one assumption for another. Is there any reason why adopting the Liouville measure as axiom is any less arbitrary than just adopting the second law directly?


Of course it is trading one assumption for another.

It basically comes down to "absent other specifications, one state is as probable as another". As good Bayesians, we should reject this sentence: We always, always need a prior; and if the prior is too crazy (e.g. has zero probability mass on the correct hypothesis), then no amount of observation will ever help us.

Since we deal with a continuous system, this is also a nonsensical sentence on purely mathematical grounds: It could be "absent other specifications, one state has as large a probability density as any other, measured relative to Liouville". An easy calculation shows that this state of affairs (thermodynamic equilibrium) is preserved by the flow. Furthermore, "if any state has as large a probability density as any other, measured relative to a small (absolutely continuous) distortion of Liouville, and you wait long enough, then it all evens out to Liouville", i.e. small distortions decay (mixing).

Large distortions do not need to decay. For example, you could single out one specific crazy periodic trajectory (there are many crazy periodic trajectories, but the set of crazy trajectories has small ordinary phase-space-volume), and our large distortion is "I am somewhere on this specific periodic trajectory". If we start on the periodic trajectory, then we stay on it; hence, this state of affairs is preserved as well.

In order to separate the two, we need an axiom, like "Liouville is a good prior". The equations of motion don't tell us that the first corresponds better to reality than the second! This is an empirical observation.

Now start with Liouville, but prescribe that our initial data are in a specific low-entropy macro-state. This means that we cut out some non-crazy region of phase space and say "start anywhere here, but count all starting states as equally probable". Then the second law holds (this is the pseudo-theorem).

If we initially started with our crazy measure (concentrated on a single periodic trajectory), then the second law fails / is vacuous. If we were permitted to cut out a crazy region of phase space, then it would also fail (crazy region: take a sane region R0 at time T0 and a sane region R1 at time T1, and let our crazy region consist of all points in R0 that will end up in R1 after time T1-T0. All these points will end up in R1, so no second law for you).

This general program is as strong as it gets, and all the "non-crazy" caveats are imo much less arbitrary than just adopting the second law directly. Of course this opens a giant can of worms: What exactly does non-crazy measure mean? What is a non-crazy way of slicing-and-dicing phase space into macro-states? For which systems of physical thermodynamic interest can we formally prove chaoticity / fast mixing?

Ok, I actually know the answer to the last question: Almost none. But I also know the answer to "for which such systems of interest does anyone seriously doubt that we have chaoticity / mixing": Almost none.
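If it helps, the "small distortions decay" story is easy to watch numerically. A toy sketch (mine; an Arnold-style cat map stands in for realistic dynamics): start an ensemble in a low-entropy corner of the torus, coarse-grain into cells, and the coarse-grained entropy climbs to its equilibrium value log(#cells) and stays there.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pts, n_bins = 100_000, 16            # 16 x 16 coarse-graining grid
    pts = rng.random((n_pts, 2)) * 0.05    # low-entropy start: a tiny square

    def cat_map(p):
        # Area-preserving hyperbolic map on the unit torus (mixing).
        x, y = p[:, 0], p[:, 1]
        return np.stack([(x + y) % 1.0, (x + 2 * y) % 1.0], axis=1)

    def coarse_entropy(p):
        h, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=n_bins,
                                 range=[[0, 1], [0, 1]])
        q = h.ravel() / len(p)
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    for t in range(8):
        print(t, round(coarse_entropy(pts), 3))
        pts = cat_map(pts)
    print("equilibrium:", round(np.log(n_bins ** 2), 3))   # log(256) ~ 5.545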


> Of course it is trading one assumption for another.

Why "of course"? If would only be "of course" if it were impossible to derive the second law from Newton's laws. AFAIK that hasn't been proven impossible.

But I think you may have misinterpreted my objection. It's not that I object to substituting one axiom for another in general. That can often represent progress. For example: before Einstein, it was observed that inertial mass and gravitational mass were very close to each other, lending considerable weight to the reasonableness of assuming that they were in fact the same. It turns out that you don't have to assume this. There's another set of axioms that allow you to prove this, and these axioms are "better" because they provide a much larger scope of explanatory power for the same axiomatic price.

By way of contrast, Turing machines and the lambda calculus are "the same" in some deep sense that makes it silly to argue about which one is "the right model" of computation.

It's not clear to me whether adopting Liouville really represents progress a la relativity, or whether it's arguing potato-potahto a la Turing vs Church.

BTW, it just occurred to me that there is a fundamental difference between the thermodynamic and quantum arrows of time: the thermodynamic arrow of time is reversible in a non-isolated system. The quantum arrow is not.


My rule of thumb with regard to objections to Many Worlds is that if the article discusses consciousness, or brains, in any way, it’s not about Many Worlds, but about consciousness, or brains, and I will stop reading it at once.


Your comment enticed me to actually read the first few paragraphs of the article, skim the rest, and ctrl+f for "conscious" and "brain". In my humble opinion, your rule has raised a false positive here. The post is extremely well written and sourced, weaving a story straight from highly technical research papers. I'd suggest judging this article by itself, not a string-matching filter.

Besides, isn't discussion of brains and consciousness perfectly relevant in the context of QM, given the requirement for rigorous definitions of observation, observers, etc.?


Sure. Relevant insofar as pointing out that brains and consciousness aren't required for a quantum measurement to take place.


Following your suggestion, I’ve read it now in its entirety and I’m considering adding “subjective experience” to my axiom, but that might be too aggressive.


It is the nature of subjective experience that is the source of all the trouble with interpreting QM. If you ignore subjective experience, there is no problem at all: the universe is quantum, and that's all there is to it.

But most people want to know why, if the universe is quantum, does it present such a convincing illusion of being classical.


I suspect that a lot of physics on the edges of our understanding is as much about consciousness as it is about reality.


You shouldn't need to invoke consciousness to discuss physics. It's ultimately an attempt to describe reproducible observation using mathematical objects and concepts that are sufficiently commonsense that they can be communicated to fellow physicists and agreed upon.

You can still ask whether there are methods other than this one that convey understanding about reality, but that's not in the purview of physics.

With respect to Many-Worlds, it is better understood as a formal problem. It stands in opposition to a formulation of quantum mechanics that cannot be used to model the universe as a system, because what constitutes a measurement was not described by that formulation even though measurements affect the system.


The whole "waves of differentiation" idea is quite an interesting one and certainly seems to me to be an almost physical description of philosophical concepts like free will and determinism. Perhaps these waves themselves interfering are what gives rise to what we consider consciousness, since what we experience subjectively is changing at every moment. Maybe the only reason we are able to "think" at all and remain "coherent" and "self-subjective" in our own brains is that these waves create images of the past on all their neighboring matter that have a continuous time-evolution. There certainly (to my mind at least) seems to be an element of randomness to thought that feels quicker and less coherent than could be easily explained even by a large-ish number of neurons with a comparable but larger number of interconnections.

I dunno, it seems a little overly-philosophical but surely there are meaningful connections to be made between physical reality and the transience of subjective experience.



