Can retrocausality solve the puzzle of action at a distance? (aeon.co)
113 points by abhishekjha on March 8, 2018 | 85 comments


What confuses me about these experiments is that, per special relativity, from the point of view of the photon the moments of its emission and detection are the same. It's traveling at the speed of light, so no time passes between those events. So talking about its state at this or that time is not talking about its intrinsic state, only our external view of it from our frame of reference. But in terms of a description of the photon in its frame of reference, that's meaningless.

EDIT: Or is this what the Paris Zigzag does - treat the state of the photon and its twin as a single state, regardless of our time-based view of it, incorporating the moments of emission, detection, and detection of the entangled twin within the same description?


> per special relativity, from the point of view of the photon the moments of its emission and detection are the same. It's traveling at the speed of light, so no time passes between those events

This is not correct. The correct statement is that, in SR, the photon's worldline is a null worldline. But the concept of "proper time" is not well defined on a null worldline, so you can't say that "no time passes". In fact, in any inertial frame, the time of the detection event will be later than the time of the emission event, so time certainly does pass between the two events in the only sense in which "time between events" can be consistently defined in SR.
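
To make that concrete, here's a minimal sketch in standard SR notation (units with c = 1; nothing beyond the usual Minkowski interval):

    % Minkowski interval between nearby events:
    ds^2 = -dt^2 + dx^2 + dy^2 + dz^2
    % Proper time along a timelike worldline:
    d\tau = \sqrt{-ds^2}
    % For a light ray, ds^2 = 0 everywhere along the worldline, so d\tau = 0
    % identically; it cannot distinguish points on the worldline, which is why
    % proper time fails as a parameter there.
    % Coordinate time in any inertial frame still advances:
    \Delta t = t_{detection} - t_{emission} > 0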

Also, the photon's state does not have to be constant along a null worldline.

> is this what the Paris Zigzag does - treat the state of the photon and its twin as a single state

No. The "Paris Zigzag" just says that causality can flow "backwards" along null worldlines, from what we call the "detection" event to what we call the "emission" event, instead of just the other way around. All of what I said above still applies.


There is no reference frame where the photon is stationary: this is directly axiomatic, because we take the speed of light to be equal and nonzero in all reference frames.


And yet the photon travels at the speed of light. You're probably quite right, but how should we think about time with respect to the state of a photon?


In relativity, the meaning of the term "reference frame" is "coordinate system." The fundamental geometry of the situation gives us a bucket of possible coordinate systems. From this bucket, we can choose a coordinate system where any given object appears stationary, unless the object is moving at the speed of light, in which case there's no corresponding coordinate system.

The coordinate systems are the fundamental physics part, and the idea of matching coordinate systems with objects comes out as just something we happen to be able to do for some objects.


You had it right the first time. A photon has no time. The Universe has 0 size in the reference frame of a photon. In a photon's reference frame, photons are absorbed the instant they are emitted, discretely; they do not travel in a continuous manner. It doesn't really make sense to think about the reference frame of a photon.


> A photon has no time. The Universe has 0 size in the reference frame of a photon.

Both of these statements are incorrect. The correct statement is that the concept of "proper time" does not apply to a photon, and that "the reference frame of a photon" doesn't make sense, so neither does "the size of the Universe in the reference frame of a photon". See below.

> It doesn't really make sense to think about the reference frame of a photon.

This is correct, but it contradicts your previous statement, quoted above.


Not a physicist so please bear with me:

Isn't this just an argument about definitions? It makes no sense to say that the universe has 0 size from a photon's reference frame, and it also makes no sense to say the photon has a reference frame. But intuitively, the experience of space "shrinking" in correlation with higher velocity for the viewer in some reference frame suggests that as the reference frame approaches the speed of light, the length of space approaches zero. And while it makes no sense to speak of something with mass traveling at the speed of light, the intuition is that from that imaginary perspective space has no size.


> Isn't this just an argument about definitions?

No, it isn't.

> the experience of space "shrinking" with correlation to higher velocity

This is not a good description. Length contraction is not a real process of "space shrinking"; it's a matter of perspective, similar to the way objects can have different apparent sizes in ordinary geometry depending on which direction you look at them from. An object moving with a high velocity relative to you is "rotated" in spacetime relative to you such that it appears shorter; but the object itself is unaffected.

Notice that I said "high velocity relative to you"; this is another way of seeing why "space shrinking" is not a good description. Velocity is relative: "high velocity" relative to you is not the same as "high velocity" in an absolute sense (there is no such thing). So saying that "space shrinks at high velocity" doesn't make sense.
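
If numbers help, here's a tiny sketch (plain Python, just the standard Lorentz factor and length-contraction formulas; the 1 m rod and the sample speeds are made up for illustration) showing that only the length measured in the other frame changes; the object's rest length never does:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def lorentz_gamma(v):
        """Lorentz factor for relative speed v (must be < c)."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    def contracted_length(rest_length, v):
        """Length of an object measured in a frame where it moves at speed v."""
        return rest_length / lorentz_gamma(v)

    # A 1-metre rod, as measured by observers at various relative speeds.
    for fraction in (0.1, 0.5, 0.9, 0.99, 0.999):
        v = fraction * C
        print(f"v = {fraction:.3f} c -> measured length = {contracted_length(1.0, v):.4f} m")

    # The rest length is always 1 m. The measured length approaches 0 as v
    # approaches c, but the limit v = c itself is excluded (gamma diverges),
    # which is the formal version of "you can't ride a photon".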


I understand that space "shrinking" is a matter of perspective (which is why I originally placed it in double quotes) and that velocity is relative - my question was meant more as a thought experiment: take the subjective experience of what things seem like as relative velocity increases, then extrapolate it to what they might seem like as relative velocity approaches the speed of light, i.e. what would experience be like were it possible to ride on a photon. Since riding on a photon is not possible, perhaps this question is meaningless - it's just that since there is a pattern to how space is experienced at increasing relative velocities, the tendency is to extrapolate this to try and imagine how space/time would be experienced if it were possible.

That being said, I understand that a "universe of length zero" or "no time" is not relevant nor practical in the context of doing physics.


> what would experience be like were it possible to ride on a photon

But that's not what "space shrinking" represents. The object or person that is moving at close to the speed of light relative to you does not see their own space shrink. Their "space" appears to you to "shrink" (I put the words in scare-quotes because of all the issues I already brought up). So even if "space shrinking" is a reasonable interpretation of something, that something is not what you would even want to extrapolate to "the experience of riding on a photon" (which is meaningless anyway, as I said in my other post just now), because it doesn't describe the experience of the observer moving at high speed relative to you, it describes your own experience, and to yourself, you're at rest.


> Since riding on a photon is not possible, perhaps this question is meaningless

Yes.


Read up on Lorenz geometry.


Why should this be downvoted? It exactly says how to think about the question the parent asked.


It's not a helpful one-line reply; you could at least have offered up a link or reference to your suggested place to study.

It's also wrong in that you use "Lorenz" (as in Ludvig, the Danish one) instead of "Lorentz" (as in Hendrik, the Dutch one). Don't feel too bad; lots of respectable authors have made a similar typo ("Lorentz gauge" instead of "Lorenz gauge"), but it makes your one-liner even less useful as a piece of advice.

https://en.wikipedia.org/wiki/Ludvig_Lorenz?oldformat=true

https://en.wikipedia.org/wiki/Hendrik_Lorentz?oldformat=true


Thanks


From what I can read, entanglement extends to other particles like electrons, which have mass and do take time to travel. So, your theory doesn't explain entanglement for electrons.


> from the point of view of the photon the moment of it's emission and detection are the same. It's traveling at the speed of light...

From the perspective of the photon (really kind of silly since the photon can't have any mass, and therefore no observer), there is no time or distance traveled. It takes 0 time to travel 0 distance.


I've been wondering this lately. Does the photon actually exist in the space between where it is emitted and where it is absorbed? In other words, if nothing interacts with it along the way, is it really there?


The answer is a curious no.

More precisely, the double slit experiment demonstrates that the possibility that it existed in one place at a particular point in time interacts with the possibility that it existed elsewhere, and the interaction affects where it can be possibly found.


That's metaphysics. Physically, there's no way to answer the question. See also: https://en.wikipedia.org/wiki/Virtual_particle


That's really the riddle of the Schroedinger equation for any quantum entity.


> From the perspective of the photon (really kind of silly since the photon can't have any mass, and therefore no observer), there is no time or distance traveled. It takes 0 time to travel 0 distance.

This is not correct. It is not possible to have an inertial frame in which the photon is at rest. It is possible to construct coordinate charts in which the photon's worldline has constant values for all but one coordinate; but that coordinate is not a "time" coordinate, and there is no well-defined notion of "time" or "distance" along the photon's worldline in such coordinates.


I favor this quantum interpretation a bit for very similar reasons to that. Relativity already tore away our ideas about time being a single steady march forward in an independent fourth dimension, which was a big step. Similarly, I don't consider this "retrocausality"; I consider it a sign that time works differently than we thought yet again, and in fact all qubits and their interactions are probably actually moving forward in time, in what is arguably the most real sense of "time". However, in that view, time is shaped very, very funny relative to what we humans would consider "time", because that means that a photon in the cosmic background radiation emitted that many billions of years ago that hits one of our instruments is still a single atomic event.

I consider this view supported by relativity where, as you mention, technically a photon is an atomic event from its own point of view. There isn't any before or after in its own frame of reference.

What we call time would then be a secondary phenomenon on top of this much more complicated (from our point of view) time organization of the universe. (However, I reject the idea that that makes "our" time an illusion, or some BS like that. Our human conception of time is perfectly real. It just may not be fundamental. But even if we discover that "real time" is something like what I'm hypothesizing here, human time will still be as real as ever.)

What is not clear to me is whether or not this could be set up in a reasonable manner. It's on my long list of things I'd love to have time to try my hand at. It's feasible; you would just need to start with maybe a 5 or 6 qubit system that interacts in the counterintuitive "retrocausal" manner, but for which there is a point of view in which it all happens forward in this "real time" metric. You don't have to simulate an entire universe to show this is a feasible idea; if you can establish it at that scale, it is merely a matter of extreme complication to assume the universe could work that way.

There's also an interesting implication for the "deistic" point of view. If this is how quantum mechanics "really works" then it becomes relatively easy to imagine that the Great Simulator is simulating the universe, but actually has little to no power to influence it, because from his point of view during the computation the entire universe is a tangled mess of everything essentially happening at once (from our point of view). The only thing the Great Simulator might be able to do is examine the final state of the universe after it has finished computing, at which point the entire history of the universe would be open to him, but to affect the state of the universe would be wildly more computationally expensive than the simulation itself, because any attempt to insert or modify even a single qubit's worth of change would essentially require the entire thing to be recalculated again as the changes propagate in (from our point of view) all the way forward and backwards in time again. Depending on the nature of "true time", though, it might be possible, depending on exactly how it ends up working.


I've been casually considering: what would the universe look like from a photon's point of view, considering that its entire lifespan is - from its own perspective - instant? The photon views itself as a point, with the totality of everything viewed as a radius from that path.

And great insight on God having a very different perspective of time. That's pretty clear in most theologies, but how it's different is never seriously considered. The only change to your point I'd make is: whatever the actual nature of time, a "tweak" (miracle?) may be made and propagate in a much simpler fashion than our perspective perceives it (e.g., we see "spooky action at a distance", but God sees it as just changing the state of a singleton).


In the current theory, photons do not have a frame of reference.

"The spooky action at distance" is a media term. In quantum physics it is indeed a "single state".

I fail to see why Odin, or any other god you could have meant, would "see" the Universe's events any differently from physics. Physics by definition is seeking the full description of the Universe. What we argue about here is how to interpret that, but that is mostly needed for people who are too lazy to grok the actual equations as they are.


"Physics by definition is seeking the full description of the Universe."

From the "inside". We can not have an exterior perspective, and the Church-Turing thesis suggests that we will never know what the "outside" looks like because there's any number of possible models and literally zero ability to discriminate them from within the system. The only way to find out would be for someone from the outside to tell us, and even then you can get into some interesting philosophical knots asking how you'd evaluate such a claim, or even be sure it was from the outside.

I HAVE HIJACKED JERF'S MESSAGE TO TELL YOU FROM THE OUTSIDE HOW YOUR UNIVERSE WORKS. YOUR UNIVERSE IS A QUANTUM SIMULATION RUNNING ON WHAT TO ME IS ANALOGOUS TO A SMALL BREAD PUDDING WHICH HAS JUST STARTED BAKING, WHICH I INTEND TO CONSUME SOON FROM MY POINT OF VIEW. YOU NEED NOT WORRY, IT'S MANY TRILLIONS OF YEARS IN YOUR PERSONAL VIEW OF THE FUTURE.


I agree with your view on this. We cannot compute the computation that we are contained within. As for the "internal" vs "external" perspective of this computation, I have found a music record a good metaphor for explaining it. The entire album is contained statically on the disc all at once, as viewed from the outside. However, from the internal point of view of the record needle, it exists as a flow of time over the record.


You're failing to differentiate a program from the programmer.

You're equating Odin (or whatever) to the operating system, which while superior to other programs is, in essence, just another program.

I'm equating Odin (or whatever) to the programmer, who knows far more about the essence & foundations of the system, and even has a debugger capable of stopping the entire system, editing values, and resuming the system as though absolutely nothing happened (save for occurrence of a miracle).


"spooky action at distance" is a relativistic term, coined by Einstein.


We are well past Einstein at this moment.


These are beautiful ideas.


tl;dr (and mathlessly) Colloquially, you'd hold something other than the photon stationary and consider the light's path in that stationary observer's frame, and then it doesn't matter whether proper time is rigorously defined.

For a handful of others in this thread:

While it's strictly true that proper time is hard to define usefully for a classical massless particle, you can still parameterize a null curve any way you want, depending on what it is you're trying to do.

In flat spacetime you're rarely trying to eke out a transformation much more complex than a Lorentz transform (the matrix for which involves \gamma, which is undefined for v=c), and so you're stuck with ds = 0. On the other hand, your objects on lightlike curves are probably light itself (classical or otherwise), so you'd generally look to something like an affine parameterization with k^b \nabla_b k^a = 0 and k_a k^a = 0, where k^a(x) are the components of the photon's momentum, or (aiming for E=pc) parameterize on the wave vector k (where v=c implies k^2 = \omega^2 in units with c=1, where \omega is the wave angular frequency).

Other sensible choices of affine parameter are available, and typically give you freedom to pick out a momentum vector k^\mu = dx^\mu/dA, where A is the affine parameter; usually you'd do this so as to match the momentum-energy at the emitter or absorber (or the redshift along the null curve).
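
For anyone who wants the flat-spacetime special case written out, here's a minimal sketch (my notation, not the parent's) of an affinely parameterized null geodesic:

    % Flat spacetime, null geodesic, affine parameter \lambda:
    x^\mu(\lambda) = x^\mu_0 + k^\mu \lambda, \qquad k^\mu = dx^\mu / d\lambda = \text{const}
    % Null condition (Minkowski metric, signature -+++):
    \eta_{\mu\nu} k^\mu k^\nu = 0  \;\Rightarrow\;  (k^0)^2 = |\vec{k}|^2
    % The geodesic equation k^\nu \partial_\nu k^\mu = 0 holds trivially, since k^\mu is constant.
    % Any rescaling \lambda -> a\lambda + b is again affine; the scale is usually
    % fixed by matching k^\mu to the photon's energy-momentum at the emitter or absorber.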


This is a really weird, but really neat concept. Where hidden variable theories assert that the state of a particle is fixed at any given point in time (but that quantum descriptions don't fully capture this state), retrocausality seems to say that the state of a particle is under-determined at any given point in time, and when the undetermined fragment of state is needed to determine the outcome of some future experiment, the result of the experiment adds information to the past in a consistent manner. But this has the side-effect of correlating all experiments that depend on this state!

I think this also clarifies why retrocausality doesn't allow information to be transmitted back in time. Every observation on the under-determined state in the past by definition doesn't rely on the undetermined fragment, and hence is consistent with whatever choice occurs in the future. There's some flavor of monotonicity here, but it's pretty bizarre!


It seems to be similar to Murray Gell-Mann's decoherent histories approach (aka consistent histories), where the state of a system can be described as an additive sum over all possible histories.


I don't understand. Isn't action at a distance this: you take a red and a green marble and randomly put them in two bags. You take one bag with you to Pluto. Then you open the bag and see the red one; now you instantly know which marble is in the bag that stayed on Earth. This doesn't break any law, so what's the problem?


The problem is that Bell's theorem (which is supported by all of our experimental data so far, afaik) states that "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

Basically, hidden variables models can't explain the results of some of our experiments and no-one's found a way to make the two match up.

Reference: https://en.wikipedia.org/wiki/Bell's_theorem
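
If you want to see the mismatch concretely, here's a small sketch (plain Python; the correlation E(a, b) = -cos(a - b) is the textbook quantum prediction for a spin singlet, and the angles are the standard CHSH choices) comparing the quantum CHSH value against the bound any local hidden variable model must respect:

    import math

    def E(a, b):
        """Quantum correlation for spin measurements at angles a and b on a singlet pair."""
        return -math.cos(a - b)

    # Standard CHSH measurement angles (radians).
    a, a_prime = 0.0, math.pi / 2
    b, b_prime = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

    print(f"Quantum CHSH value |S| = {abs(S):.3f}")  # ~2.828, i.e. 2*sqrt(2)
    print("Local hidden variable bound: |S| <= 2")

    # Any local hidden variable model (with statistically independent settings)
    # gives |S| <= 2; the quantum prediction exceeds it, and experiments agree
    # with the quantum value.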


> "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

It's more accurately phrased as, "No physical theory of local hidden variables with observations that are statistically independent can ever reproduce all of the predictions of quantum mechanics."

This is actually a new loophole that 't Hooft is exploring in his cellular automata interpretation of quantum mechanics, i.e. it has local hidden variables, but because all matter shares a common history from the Big Bang, no observations we can possibly make can be statistically independent of each other.


It also changes things from "local hidden variable" to "universal hidden variable," it's just a different implementation of universal hidden variables than the Many Worlds Interpretation.


> no observations we can possibly make can be statistically independent of each other.

Yep. When I'm reading these articles, "Everything Connects" is a pretty useful heuristic for flowing with apparent discrepancies in theories...although of course nailing down concrete theories and finding precise explanations for phenomena is important.


Do you have any resources where I can read more about this aimed towards a motivated layman with some working knowledge of QM terminology?



The free e-book linked from that article is probably the most up to date description:

http://www.springer.com/us/book/9783319412849


Great point, more people need to know about this "small" detail.


That resolution of the problem is generically called "superdeterminism". The OP mentions that multiple times...

"Thus, it is conceivable that freedom of choice has been restricted since the beginning of the universe in the Big Bang, with every future measurement predetermined by correlations established at the Big Bang"

https://en.wikipedia.org/wiki/Superdeterminism


Usually you don't need to go that far back. The fact that the photon emitter always emits pairs with opposite spin is good enough.


The phrasing here reminds me of Gödel's Incompleteness Theorem...probably not coincidental.

> "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

GIT paraphrased to match: "No axiomatic system can produce all truths within that axiomatic system."


In the hidden variables case (yours), it was predetermined which marble was in the bag before you opened it. In the retrocausal case, the information about which marble is in the bag simply isn't part of reality until you've checked. With macroscopic objects like bags and marbles, this is ridiculous, because someone had to put the marble in the bag (a "shared cause"). However, Bell's Theorem shows that entanglement isn't a sufficient "shared cause" to explain his experiment. Thus, we either need superdeterminism (the "shared cause" is the origin of the universe!), spooky action at a distance, or retrocausality.


>>" the information about which marble is in the bag simply isn't part of reality until you've checked"

It seems to me that the universe uses lazy evaluation (1).

(1) https://en.wikipedia.org/wiki/Lazy_evaluation
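
For reference, a minimal Python sketch of what lazy evaluation means in the programming sense (a deferred, memoized computation); whether the analogy carries over to QM is exactly what's debated below:

    class Lazy:
        """Defer a computation until its result is first demanded, then cache it."""
        def __init__(self, compute):
            self._compute = compute
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:
                self._value = self._compute()  # work happens only here
                self._forced = True
            return self._value

    marble_colour = Lazy(lambda: "red")  # nothing evaluated yet
    print(marble_colour.force())         # evaluated on first demand
    print(marble_colour.force())         # cached thereafter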


It's even weirder than that, though, because with lazy evaluation you still have a complete description of the state involved; it's just a matter of unpacking it by executing the description. That's more analogous with a hidden variables theory. With retrocausality, it seems that performing an observation fundamentally adds information to that which has been observed.

If retrocausality can be likened to a computational mechanism, I'd be really interested to know what it is! Like I said in another comment, it seems like there's some element of monotonic computation involved, but the brand of monotonicity is quite unusual.


>>"it seems that performing an observation fundamentally adds information to that which has been observed."

disclaimer: I have no idea what I'm talking about.

Isn't the "adds information" that you mention like sampling a probability distribution? Of course, then your program, the universe, isn't deterministic anymore (but I think that's already the case when quantum physics is involved).

So your lazy function doesn't do anything until it is required, and then it returns a sample from a probability mass function (not a pdf, because we are talking quantum here).


No. See the Kochen-Specker theorem. You can choose which way to measure in ways that sort of interact with one another statistically even after two particles are far separated.


Can anything be "forgotten" once it's been determined?

For example--can two entangled particles be created, then forced to interact in a way that collapses their wave functions, then sucked into a black hole, without having been observed by anything that isn't sucked into that black hole--does their collapse matter? Does the universe forget it?

If so, then could the whole universe be a reversible computation, once it's summed up from big bang to big ... whatever happens at the end?


This makes me think of dangling pointers and memory leaks.


That can also be called "retro causality".


Imagine instead of Green and Red, it could be either Green or Red, or Blue or Yellow. If you measure its Red/Green-ness, one will be red and the other will be green. If you measure its Blue/Yellow-ness, one will be blue and the other will be yellow. You can't measure the blue/yellow-ness of one and the red/green-ness of the other. As soon as one is measured, the other collapses into a particular color.


The problem is that the state of your particle, on Pluto, changes when somebody on Earth observes it. The marbles were neither exactly green nor red before somebody looked.

It is not as simple as you'd get with macroscopic things, but it's also not as crazy as some people imply.


I always thought the problem is with the "collapse" thingy. You don't observe a green marble. You just entangle yourself with the marble pair. And so does the guy with the other marble.

In the end two marbles, you, and the other observer become a single entangled system, who has two states: you've seen green marble (and the other guy red), and you've seen red marble (...).

And now if I "observe" that system, I also become entangled, and also will have two states: "I saw you claiming you saw green", and "I saw you claiming you saw red", and so on.

From a physical point of view, in that description there's no mystery. What might be interesting is how you (as in your mind) seemingly experience only one outcome. I even suspect we don't.


> From a physical point of view, in that description there's no mystery.

There is no explanation either.


What do you need explained? This is the nature of things according to the Standard Model.


how you seemingly experience only one outcome

(the Standard Model? of particle physics?)


The correlation in the quantum case is stronger https://youtu.be/ZcpwnozMh2U?t=18m10s


There's no problem. The Copenhagen interpretation is a joke, same as the "spooky action at a distance". The formula to reproduce this mess is too many PhDs with zero intuition, a hierarchical system based on seniority where status = correctness, and jokes gone wrong (i.e. the Copenhagen interpretation). The "collapsing universe" is simply the realization that the other marble is blue, and so is the "instant" information transfer.


This kind of reminds me of something I thought of a while ago when playing around with cellular automata in Golly:

With many cellular automata rules, you get chaotic universes, which slowly stabilize into some final state. Frequently there are small active regions which take longer to stabilize, and which may travel across the entire state before they eventually do.

So could our 4-dimensional universe be the final stable state in a 5-dimensional cellular automaton (or some similar structure)? This would allow every part of the universe to influence every other part without regard for space/time separation, but the rules could still be set up such that the particles in the final state adhere to causality, don't exceed the speed of light, etc...

I think it might only work if the final state represents a fully collapsed universe without superpositions. I'm not sure how realistic that is.
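
Purely to illustrate the "chaotic start relaxing to a stable final state" part (a toy 1-D majority-rule automaton of my own choosing, nothing to do with the 5-dimensional claim):

    import random

    def step(cells):
        """One update of a 1-D majority-rule automaton with wrap-around edges."""
        n = len(cells)
        return [
            1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)
        ]

    random.seed(0)
    state = [random.randint(0, 1) for _ in range(40)]  # chaotic initial state
    history = [state]
    for _ in range(100):                               # safety cap; it settles long before this
        nxt = step(history[-1])
        if nxt == history[-1]:
            break
        history.append(nxt)

    for row in history:
        print("".join("#" if c else "." for c in row))

    # The last row is a fixed point: in the comment's analogy, that frozen end
    # state plays the role of our universe, and the transient rows above it are
    # the extra "dimension" we never see.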


They say that action at a distance is "not compatible with special relativity" but retrocausality is, because it's not full retrocausality (it can't send signals back in time) but just a weaker kind. I think they've missed the point on that, as they've just copied the argument for why action at a distance is compatible with special relativity.


That's not what they are saying at all. You're pre-judging the outcome by calling the end result 'action at a distance', but action at a distance is a proposed explanation of the outcome, not the outcome itself. That's what they are trying to explain. Retrocausality is an alternative explanation of the outcome to action at a distance.


My pet theory is that when you make an observation you're narrowing down which subset of the many universes you might reside in. So the particles are simply going to behave the same, even though there is no way to know which reality you'll get when you check on the first one. However, once you do check on the first one, it reduces the subset of possible realities you are in, which includes the possible outcomes for observation of the paired particle.


We are in a branching simulation. It is lazily loaded and immutable. For undetermined variables, copy on read causes a branch filled in with each possible result. So many-worlds, but within a simulation.


Have you ever read the Discworld series? There's a side plot where a group of wizards uses a computer to explore that sort of idea:

---

...The hypothesis behind invisible writings was laughably complicated. All books are tenuously connected through L-space and, therefore, the content of any book ever written or yet to be written may, in the right circumstances, be deduced from a sufficiently close study of books already in existence. Future books exist in potentia, as it were, in the same way that a sufficiently detailed study of a handful of primal ooze will eventually hint at the future existence of prawn crackers.

But the primitive techniques used hitherto, based on ancient spells like Weezencake's Unreliable Algorithm, had meant that it took years to put together even the ghost of a page for an unwritten book.

It was Ponder's particular genius that he had found a way around this by considering the phrase, "How do you know it's not possible until you've tried?" And experiments with Hex, the University's thinking engine, had found that, indeed, many things are not impossible until they have been tried.

Like a busy government which only passes expensive laws prohibiting some new and interesting thing when people have actually found a way of doing it, the universe relied a great deal on things not being tried at all.

When something is tried, Ponder found, it often does turn out to be impossible very quickly, but it takes a little while for this really to be the case–in effect, for the overworked laws of causality to hurry to the scene and pretend it has been impossible all along. Using Hex to remake the attempt in minutely different ways at very high speed had resulted in a high success rate, and he was now assembling whole paragraphs in a matter of hours.

"It's like a conjurin' trick, then," Ridcully had said. "You're pullin' the tablecloth away before all the crockery has time to remember to fall over."

And Ponder had winced and said, "Yes, exactly like that, Archchancellor. Well done."

And that had led to all the trouble with How to Dynamically Manage People for Dynamic Results in a Caring Empowering Way in Quite a Short Time Dynamically...

- Terry Pratchett, The Last Continent


I need to read that...


Isn't this just Cramer's transactional interpretation of quantum mechanics?

https://en.wikipedia.org/wiki/Transactional_interpretation


It's more general than that, and also Price and Wharton don't entirely agree with Cramer about what needs to be proved, but it's similar, yes. It's inferior to the Transactional theory in one way, which is that Cramer's exposition of his theory comes with pictures of elephants.


I've had this puzzle solved in my own head for quite some time thusly: Distance is a fiction, aka an arbitrary manifold created by our (conscious?) selves interacting with "the world," which is really just a big array of ordered values (N-tuples).

To say that "X is N distance from Y" is no more of a valid proposition than "Blue is Louder than Yellow".

We attach lots of significance to distance (e.g., 3D space) owing to its utility for "doing science." (Insert "All models are wrong; some are useful", but at a grand, multiversal scale.)

My 2¢: YMMV.

(me: Philosophy major with a good bit of sci/math as well)


So is this saying that if retrocausality is possible, then it's akin to the kind of time travel where you may have altered something in the past, but the effects would not manifest until "after" the commencement of the time travel?

Like in a person, if you feel some sort of uncomfortableness that leads you to make a choice, maybe it's because a minute ago, some time traveler traveled back a couple of weeks and planted a seed that made you start feeling uncomfortable!

Maybe all of those, "You know, I never realized until now, but it turns out I've disliked broccoli for years, I don't know why I eat it every day!" statements are a result of retro causal time travel. (In this case, George HW Bush on an anti-broccoli time-traveling crusade.)


All of our reasoning would be much easier if free will were an illusion. Perhaps we should stop fighting it.


Superdeterminism is even weirder than that though. Why would the universe conspire to make sure that I picked my observation angle just right to maintain Bell's inequality? If you use the outcome of a celestial event to determine the angle, then you've got an absolutely enormous conspiracy just to make sure that Bell's inequality isn't violated. It's deeply weird.


If free will were an illusion, it wouldn't make sense to try to convince anybody of it. IOW, if you're right, your comment is useless :-)

But I'd understand why you had to write that anyway :-P


I always thought the opposite. It seems things like non-determinism and retrocausality make free will more physically coherent instead of less coherent. It is the strictly deterministic universe of Newton that makes free will illusory.


"we can see that it couldn’t be used to signal, for the same reason that entanglement itself can’t be used to signal"

Nope, you lost me. This seems too hand wavy.

Also "This isn’t true of many everyday processes – eggs turn into omelettes, but not the reverse!"... sigh.


How does this interact with pilot wave theory?

Is the article saying that the causal direction of time is undefined within a quantum entangled system? Or is that more than what it’s claiming?


Also quantum erasure.


Perhaps retrocausality is just non-locality in time.


Yes


No


Or yes, but only if you accept time travel.



