You had a wave function for the particle/detector system, you performed a measurement, and then you no longer had the wave function that you previously had. I think that’s what von Neumann would call “measurement causing the collapse of the wave function”.
What Von Neumann -- or anyone else -- would or would not call it doesn't change the truth of the matter.
Von Neumann also believed that consciousness was required to collapse the wave function. This is a non-falsifiable and hence non-scientific theory (which is the problem with all collapse theories). Von Neumann was a really bright guy, and he very nearly got the right answer, just as Lorentz very nearly got to relativity before Einstein. But just as Lorentz refused to let go of the idea of Galilean invariance, Von Neumann refused to let go of the idea that classical reality is a primary phenomenon. It isn't. Classical physics is an emergent, not a primary phenomenon. There are no particles, only quantized fields. (To be fair, Von Neumann did not live long enough to see Bell's theorem or the Aspect experiment, so he didn't quite have all the information he needed. But he was bright enough that he probably could have come up with Bell's theorem on his own if he'd decided to go that direction.)
I really don't understand what you are trying to say. Do you dispute the name or do you dispute whether it happens?
Starting from the quantum state of the particle/detector system, which is a superposition of both possible outcomes, what happens, according to you, as a result of the measurement? Is there a definite result?
What do you mean when you say that "if the detector is working properly then these amplitudes are zero and the interference term vanishes"? Is going from non-zero amplitude to zero amplitude a physical phenomenon? Why does it happen?
I have not tried the video, but I've read the paper. As I pointed out, your description of a measurement in section 4.1 is a textbook description of wave function collapse. I'm trying to understand how you think your explanation is different. Or maybe I'm missing your point completely.
I don't see the relevance. My question about the description of the measurement process in section 4.1 (which is supposed to be more fundamental: "Let us begin with the simple two-slit experiment.") stands.
Section 3 presents an argument for why it cannot be the case that a measurement causes a physical change in the system being measured (because it would result in FTL communication). If you don't see how that's relevant then you are beyond my ability to help.
> supposed to be more fundamental
You are reading way too much meaning into that introductory sentence. The two-slit experiment is often presented as fundamental because it is easy to describe, and because historically it was done very early in the development of quantum mechanics, but it is not fundamental. Indeed it is this false assumption that 2-slit is fundamental that has led to the horrible pedagogical mess that QM suffers from to this day. It is simply false that no one understands QM, or even that QM is particularly hard to understand. It's just that the usual pedagogy of QM is based on a false assumption.
I don't want to get lost in terminology (what is more fundamental, what do we call a physical change...). I'm trying to understand the consequences of the approach you propose.
In 4.1 you look at a couple of similar examples that are quite simple, a particle/detector system in the first half ("measurement") and a particle/particle system in the second half ("entanglement").
The "measurement" part starts like von Neumann's measurement scheme, which is a two step process. In the first step, one assumes that the "target" and the "probe" are composed into a single quantum system (there is entanglement like in the particle/particle system considered later, I agree with that). This system is in principle in a superposition of states, the density matrix does contain interaction terms.
In the second step, one applies a non-unitary operator to actually take the measurement at the classical/macroscopic level, and the interference terms disappear. Instead of a pure quantum state we end up with a diagonal density matrix representing a mixed state (the system is either in one state or the other; we don't know which one, but we know the corresponding probabilities).
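To fix some notation (mine, not the paper's), and assuming equal amplitudes for simplicity: write |u>, |d> for the target states and |+>, |-> for the corresponding probe states. After the first step the combined state is (|u,+> + |d,->)/sqrt(2), whose density matrix is

    1/2 ( |u,+><u,+| + |d,-><d,-| + |u,+><d,-| + |d,-><u,+| ),

where the last two terms are the interference terms. Von Neumann's second step replaces this by the diagonal matrix 1/2 ( |u,+><u,+| + |d,-><d,-| ), i.e. a 50/50 classical mixture.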
I still don't understand how the second step works in your approach and why the interference terms vanish.
I understand even less how you conclude that "the entanglement destroys the interference in exactly the same way (according to the mathematics) that measurement does." The entangled pair looks identical to the pre-measurement target/probe pair, but interference is still present at that point (and vanishes because of the measurement, whether or not we call this the collapse of the wave function).
By the way, in the video there is a slide saying that the resulting wave function is |ΨU|^2 + |ΨL|^2, but that's not a wave function.
You are talking about the math, and I'm talking about what happens physically. As an analogy, this is like talking about whether F=G * m1 * m2 / r^2 is an accurate mathematical model of gravity in the weak limit (it is) vs talking about whether earth is actually pulling you down towards its surface (it isn't).
But since you are obviously well versed in the math, I suggest you read the Cerf and Adami paper that my presentation is based on: https://arxiv.org/abs/quant-ph/9605002 Also, Zurek's work on decoherence. Those will probably answer your questions better than I can. I'm just a popularizer, not a physicist.
I was trying to find out what does happen physically, according to you. So far, you've only explained what does not happen ("measurement does not cause the collapse of the wave function",
"When you measure a particle, nothing happens to that particle as a result of the measurement.").
Reading your paper I thought something happened, somewhere in the transition from page 6 to page 7, because I understood that you were getting a well-defined value out of the measurement. The paper you linked now makes clear that in this model nothing happens at all, ever. Anything less than the wave function for the whole universe is just an illusion, there is no classical world.
It's not an accident that your description looked like the standard measurement scheme from von Neumann, as they are just reproducing it. They seem to think that when von Neumann described the measurement process as an interaction of two quantum systems he didn't notice that this interaction was beyond the classical concept of correlations, and this is why he needed a second stage to perform the measurement and collapse the state vector.
In the pre-measurement step, the 'target' (system to be measured, with states |u> and |d>) and the 'probe' (quantum detector, with states |+> and |->) are combined. This combined system will be a superposition of the states |u,+> and |d,->. The density matrix includes the diagonal terms |u,+><u,+| and |d,-><d,-| and also the off-diagonal (interference) terms |u,+><d,-| and |d,-><u,+|.
Von Neumann introduces a projection step which keeps only the diagonal terms. The resulting density matrix no longer represents a pure state but a mixed state (for a closed system, a diagonal density matrix with more than one non-zero entry cannot describe a superposition; it has to represent a classical mixture of the corresponding pure states). After the measurement, following von Neumann, we have a statistical ensemble of the possible outcomes, and choosing one of them doesn't pose any conceptual problem.
Cerf and Adami avoid the second step by taking the entangled system, ignoring the state of the target and looking only at the probe. Mathematically, one obtains a reduced density matrix by tracing out one subsystem and the result is a diagonal density matrix which contains only |+><+| and |-><-| without any interference terms. However, this is an improper mixed state and cannot be interpreted as before (because it's not the description of a full system but the result of looking at a subsystem).
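For concreteness, here is a quick numpy sketch of that partial trace (entirely my own toy construction, not code from either paper): it builds the entangled target/probe state, traces out the target, and shows that the probe's reduced density matrix is diagonal but has purity 1/2, i.e. it is an improper mixture.

    import numpy as np

    # Basis ordering for the 4-dimensional target+probe space: |u,+>, |u,->, |d,+>, |d,->
    u_plus  = np.array([1.0, 0.0, 0.0, 0.0])
    d_minus = np.array([0.0, 0.0, 0.0, 1.0])

    # Post-interaction entangled state (|u,+> + |d,->)/sqrt(2)
    psi = (u_plus + d_minus) / np.sqrt(2)
    rho = np.outer(psi, psi)                 # full density matrix, still a pure state

    # Trace out the target to obtain the probe's reduced density matrix
    rho4 = rho.reshape(2, 2, 2, 2)           # axes: target, probe, target', probe'
    rho_probe = np.einsum('ijik->jk', rho4)  # partial trace over the target

    print(np.round(rho_probe, 3))                        # diag(0.5, 0.5): no interference terms
    print(np.round(np.trace(rho_probe @ rho_probe), 3))  # purity 0.5 < 1: an improper mixture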
Their reduced density matrix does not represent a defined (but unknown) state because the subsystem remains entangled with the rest of the system (target+probe). They say that the full system continues to evolve in a unitary way without any kind of collapse, but then how is any measurement done at all? Their description of the detector is an (improper) mixture of |+> and |->, at what point do we get one or the other as a result?
Note that Cerf and Adami consider their model completely unrelated to Zurek's. Decoherence and einselection are interesting: they provide a physical mechanism to explain how the interference terms become (close to) zero in the "correct" basis in a (practically) irreversible way, and they have some experimental support. But they don't completely solve the measurement problem: how does the system settle on one particular outcome?
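To illustrate the decoherence point numerically, here is a toy numpy sketch of my own (a caricature, not Zurek's actual models): coupling a system qubit in an equal superposition to more and more environment qubits drives the off-diagonal element of its reduced density matrix toward zero, while the diagonal probabilities stay at 1/2. The suppression of interference is there, but nothing in it selects one outcome.

    import numpy as np

    rng = np.random.default_rng(0)

    def coherence_after_coupling(n_env):
        # System starts in (|u> + |d>)/sqrt(2); each environment qubit starts in |0>.
        # If the system is in |d>, environment qubit k is rotated by a random angle
        # theta_k (a crude stand-in for a system-environment interaction).
        E_u = np.array([1.0])   # environment branch correlated with |u>
        E_d = np.array([1.0])   # environment branch correlated with |d>
        for _ in range(n_env):
            theta = rng.uniform(0.5, 2.0)
            E_u = np.kron(E_u, np.array([1.0, 0.0]))
            E_d = np.kron(E_d, np.array([np.cos(theta / 2), np.sin(theta / 2)]))
        # Full state: (|u>|E_u> + |d>|E_d>)/sqrt(2).  The off-diagonal element of the
        # system's reduced density matrix is <E_d|E_u>/2; the diagonal stays (1/2, 1/2).
        return 0.5 * abs(np.dot(E_d, E_u))

    for n in [0, 1, 2, 5, 10, 20]:
        print(n, coherence_after_coupling(n))   # decays (roughly exponentially) toward 0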
Before I respond I would like to say that this interaction is tremendously helpful for me. You obviously know what you're talking about, and I appreciate you taking the time to engage on this. If you don't mind, I'd like to learn more about who you are IRL. If you want to do that off-line my contact info is in my profile.
But to answer your question:
> at what point do we get one or the other as a result?
That is like asking "at what point does a blastocyst become a fully fledged human being?" There is no sharp dividing line between the quantum and the classical. There is no "point" at which the one transitions sharply to the other. There is a range beyond which the classical approximation is good enough for all practical purposes, but the physical world never "actually becomes" classical, just as gravity never "actually becomes" Newtonian.
> Cerf and Adami consider their model completely unrelated to Zurek's
I'm not sure that "completely unrelated" is a fair characterization. I think they would regard it as an "improvement". (But they are both still alive. We could ask them.)
It may or may not answer any of your questions, but I would be very interested in your feedback either way. (And keep in mind that this was written for a lay audience.)
On the relation to decoherence, maybe "completely unrelated" is too strong, but they say their model is "distinct from" and "not in the class of" environment-induced decoherence models. I think it's debatable whether it is an "improvement" or not, but I don't think they get any closer to "solving" the measurement problem than Zurek does.
I find it interesting that London and Bauer proposed a similar model as early as 1939 (ref. [12] in the "Quantum mechanics of measurement" pre-print):
[9. Statistics of a System Composed of Two Subsystems]
"The state of a closed system, perhaps the entire universe, is completely determined for all time if it is known at a given instant. According to Schroedinger's equation, a pure case represented by a psi function remains always a pure case. One does not immediately see any occasion for the introduction of probabilities, and our statistics definitions might appear in the theory as a foreign structure
"We will see that that is not the case. It is true that the state of a closed system, once given pure, always remains pure. But let us study what happens when one puts into contact two systems, both originally in pure states, and afterwards separates them. [...]
"While the combined system I + II, which we suppose isolated from the rest of the world, is and remains in a pure state, we see that during the interaction systems I and II individually transform themselves from pure cases into mixtures.
"This is a rather strange result. In classical mechanics we are not astonished by the fact that a maximal knowledge of a composite system implies a maximal knowledge of all its parts. We see that this equivalence, which might have been considered trivial, does not take place in quantum mechanics. [...]
"The fact that the description we obtain for each of the two individual systems does not have the caracter of a pure case warns us that we are renouncing part of the knowledge contained in Psi(x,y) when we calculate probabilities for each of the two individuals separately. [...] This loss of knowledge expresses itself by the appearance of probabilities, now understood in the ordinary sense of the word, as expression of the fact that our knowledge about the combined system is not maximal."
[10. Reversible and Irreversible Evolution]
"I. Reversible or "causal" transformations. These take place when the system is isolated. [...]
"II. Irreversible transformations, which one might also call "acausal." These take place only when the system in question (I) makes physical contact with another system (II). The total system, comprising the two systems (I + II), again in this case undergoes a reversible transformation so long as the combined system I + II is isolated. But if we fix our attention on system I, this system will undergo an irreversible transformation. If it was in a pure state before the contact, it will ordinarily be transformed into a mixture. [...]
"We shall see specifically that measurement processes bring about an irreversible transformation of the state of the measured object [...] The transition from P to P' clearly cannot be represented by a unitary transformation. It is associated with an increase of the entropy from 0 to -k Sum_n |psi_n|^2 ln |psi_n|^2, which cannot come about by a unitary transformation."
[11. Measurement and Observation. The Act of Objectification]
"According to the preceding section, [the wave function after the measurement] represents a state of the combined system that has for each separate system, object and apparatus, the character of a mixture. [...] But of course quantum mechanics does not allow us to predict which value will actually be found in the measurement. The interaction with the apparatus does not put the object into a new pure state. Alone, it does not confer on the object a new wave function. [...]
"So far we have only coupled one apparatus with one object. But a coupling, even with a measuring device, is not yet a measurement. A measurement is achieved only when the position of the pointer has been observed. It is precisely this increase of knowledge, aquired by observation, that gives the observer the right to choose among the different components of the mixture predicted by the theory, to reject those which are not observed, and to attribute henceforth to the object a new wave function, that of the pure case which he has found."
They go on to discuss how "the observer establishes his own framework of objectivity and acquires a new piece of information about the object in question" while there is no change for us if we look at the "object + apparatus + observer" system from outside. The combined system will be in a pure state and we will have correlated mixtures for each subsystem.
London and Bauer say that the role played by the consciousness of the observer is essential for the transition from the mixture to the pure case. Cerf and Adami claim to do without the intervention of consciousness, but it's not clear how they explain the transition from the mixture to something else. They say things like the following, which doesn't look different from London and Bauer to me:
"The observer notices that the cat is either dead or alive, and thus the observer’s own state becomes classically correlated with that of the cat, although, in reality, the entire system (including the atom, the γ, the cat, and the observer) is in a pure entangled state."
From a very superficial look at the blog posts you linked to, I think I agree with some things (e.g. that most of these ideas have been around for quite some time) and I disagree with others (e.g. that there is no measurement problem), but I guess most of the discussion is metaphysical, so it cannot be "wrong".
I think dodging the measurement problem is not solving it. The core of the problem is here:
"But, while QM predicts that you will be classically correlated, it does NOT (and cannot) predict what the outcome of your measurements will actually be."
QM does not predict that the measurement will have an outcome. The transition from that probability distribution to one definite result is the measurement problem.
I am not an expert and I am not sure to what extent a solution to the measurement problem is possible, or even needed, but I think it would mean that the current QM theory is incorrect or at least incomplete. In the first case (e.g. the "objective collapse" theories) the Schroedinger equation may be a very good approximation, and it may be practically impossible to get an experimental/observational verification. In the second case (e.g. the "non-local hidden variables" theories) the predictions of the Schroedinger equation may be exact, so a verification could be impossible even in principle.
Weinberg gives more details in section 3.7 of his "Lectures on Quantum Mechanics" (Interpretations of Quantum Mechanics), which ends as follows:
"My own conclusion is that today there is no interpretation of quantum mechanics that does not have serious flaws. This view is not universally shared. Indeed, many physicists are satisfied with their own interpretation of quantum mechanics. But different physicists are satisfied with different interpretations. In my view, we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is only a good approximation."
The last chapter in Commins' "Quantum Mechanics: An Experimentalist’s Approach" (The Quantum Measurement Problem) gives a nice description of the problem and the proposed solutions which fall into three categories:
1.- there is no problem. [Decoherence doesn't completely avoid the problem; it makes the off-diagonal elements of the density matrix zero but the issue of going from a mixture to a specific outcome remains.]
2.- the interpretation of the rules must be changed but this can be done in ways that are empirically indistinguishable from the standard theory. [De Broglie-Bohm pilot-wave theory postulates "real" particles compatible with QM predictions, but currently only works well in the nonrelativistic setting.]
3.- deterministic unitary evolution is only an approximation. [Adding non-linear and stochastic terms produces "spontaneous localization"; by tuning the parameters one can explain both the microscopic "unitary" and the macroscopic "non-unitary" behaviours.]
"In conclusion, although many interesting suggestions have been made for overcoming the quantum measurement problem, it remains unsolved. We can only hope that some future experimental observation may guide us toward a solution."
[By the way, I sent you an email but I don't know if you got it.]
Weinberg either does not understand the multiverse theory or he intentionally misrepresents it. It is not the case that the universe splits when a measurement is made. That description begs the question because it doesn't specify what counts as a measurement. It's putting lipstick on the Copenhagen pig. A much better (though still not very good) description of the multiverse can be found in David Deutsch's book "The beginning of infinity" chapter 11. Universes do not "come into being" when measurements are made. The entire multiverse always exists. This is the pithy summary:
"Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse."
Indeed, time itself is an emergent phenomenon:
"[T]ime is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks – or of any objects usable as clocks) into the same history."
Here is Weinberg's fundamental problem:
"[T]he vista of all these parallel histories is deeply unsettling, and like many other physicists I would prefer a single history."
Well, I'm sorry Steven, but you can't have it. That's just not how the world is, and wishing it were so sounds as naive and petulant as an undergrad wishing Galilean relativity were true, or Einstein wishing that particles really do have definite positions and velocities at all times. Yes, it would be nice if all these things were true. But they aren't.
> Weinberg either does not understand the multiverse theory or he intentionally misrepresents it. It is not the case that the universe splits when a measurement is made.
I don’t know if your point is that the universe splits more often than that or never. Note that Weinberg is referring to the usual MWI formulations; I don’t know the precise definition of the “multiverse theory” you mention. And actually he says that “the fission of history would not only occur when someone measures a spin. In the realist approach the history of the world is endlessly splitting; it does so every time a macroscopic body becomes tied in with a choice of quantum states.”
“when an experiment is observed to have a particular result, all the other possible results also occur and are observed simultaneously by other instances of the same observer who exist in physical reality – whose multiplicity the various Everettian theories refer to by terms such as ‘multiverse', ‘many universes', ‘many histories' or even ‘many minds'.”
The “plain English” description in the book you cite lacks any rigour and can be confusing if you know a bit of QM, because it postulates that the universe already splits (but in what basis?) even in cases where single-universe QM works fine (the wave function can describe a pure quantum state which is a superposition). These “soft splits” can be undone and allow for interference (corresponding to the unitary evolution of the Schroedinger equation).
But he says that “interference can happen only in objects that are unentangled with the rest of the world” and “once the object is entangled with the rest of the world [...] the histories are merely split further”. These “hard splits” are the branches in the MWI, and they correspond perfectly to Weinberg’s description: “[the world splits] every time a macroscopic body becomes tied in with a choice of quantum states.”
In Deutsch’s multiverse the number of universes doesn’t grow because there is always an uncountably infinite number of them, but I’m not sure this makes the theory much better.
Unfortunately I don’t have time to continue this interesting discussion in the near future. At least you have to concede that in an infinite number of alternative universes you found my arguments convincing. That’s good enough for me.
> it does so every time a macroscopic body becomes tied in with a choice of quantum states
But again this just begs the question. How big does a body have to be before it counts as "macroscopic"?
> The “plain English” description in the book you cite lacks any rigour and can be confusing
Yes, that's why I said that it wasn't a very good description (despite being better than Weinberg's).
> These “hard splits” ... correspond perfectly to Weinberg’s description
No, they don't. "A macroscopic body" and "the rest of the world" are not synonyms.
> I’m not sure this makes the theory much better.
I agree with you. That's why I prefer the "zero-universe" approach, and consider our universe and the wave function to be in different ontological categories.
> Unfortunately I don’t have time to continue this interesting discussion in the near future.
:-(
> At least you have to concede that in an infinite number of alternative universes you found my arguments convincing.