The linked article is titled "Experiment confirms quantum theory weirdness". Please don't exaggerate the title. These experiments do not show that "reality doesn't exist until it's measured".
First, "realism" [1] is not the same thing as "reality". "Realism" basically means "physical quantities have a definite value". "Reality" is that thing that determines your experimental outcomes. Don't mix them up.
Second, interpretations of quantum mechanics disagree wildly about what kind of weird you use to explain things. Some interpretations have "realism", some don't. Some interpretations have retrocausality, some don't. Some interpretations have FTL effects, some don't. Since all the interpretations give (mostly) the same experimental predictions, it's misleading to single one out and say just that particular brand of weirdness was confirmed.
We confirmed that there's weird there. We didn't distinguish what brand of weird it is. Physicists widely disagree about which brand of weird to use, with no position achieving even a majority [2]. The original title was better.
1: https://en.wikipedia.org/wiki/Na%C3%AFve_realism#Realism_and...
2: http://www.preposterousuniverse.com/blog/2013/01/17/the-most...
A standard case of mixing up (or outright confusing) epistemology with ontology, that is, of claiming that something doesn't exist until we get to know something about it. If anything, knowing something about something is posterior to us being here AND that something being there. Here lies the danger of dismissing philosophy as a bag of words when compared with the Holy Science that "works, bitches".
I've been reading Thomas Nagel's "Mind and Cosmos" recently, and while I think it's actually a really terrible book and Nagel is incredibly uninformed on the theory of evolution, I do think he makes one really good point: the scientific method presupposes that the universe can be understood/observed/measured. It seems like such an obvious axiom, but I think it explains why we keep brushing up against observation / consciousness when we push the boundaries of science. It's sort of tautological in a way.
It will be interesting to see as we run closer and closer to the limits of observation and consciousness if we're on the verge of another huge paradigm shift in empiricism.
Or: Through the scientific method, we have discovered something that we cannot explain, and so people are now floating wild ideas that they know MUST be true.
... Which is exactly what the scientific method was designed to prevent. Categorically stating that nothing exists without our eventual knowledge of it is the height of arrogance.
I don't think the title is a misrepresentation of the article, just the physics:
From the article: "It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it..."
The article does appear to be (incorrectly) arguing that this experiment proves one interpretation of QM correct. It's definitely incorrect - some years back others and I used non-experimentalist interpretations of QM to get the same result - but it's not a misrepresentation of the article.
Yes, the article does quote the physicist saying that. Physicists do say things like that, without adding caveats like "assuming the Copenhagen interpretation, which is the plurality opinion and my de-facto working theory day to day".
But the article didn't include it in the title, and I think that was the right choice. And HN usually tries to keep the original title.
To be fair, the submitted title is directly culled from the first sentence of the article:
> The bizarre nature of reality as laid out by quantum theory has survived another test, with scientists performing a famous experiment and proving that reality does not exist until it is measured.
And perhaps someday we'll get to meet the great programmer in the sky that developed the simulation we are living in, and we will ask him why he designed it that way, and he will say something like "oh, I just wanted to optimize the code, so I simply excluded reality subroutines when there were no beings looking. I just never thought you guys would notice. As soon as I saw that you guys noticed the flaw, I was going to load a patch, but then the confusion it was causing with the simulated beings became interesting and so I just left it in as an accidental feature of the game."
Meh, but probably not.
I always find it fascinating just how much of our fundamental physics ends up being constraints on information movement. The laws of thermodynamics are about entropy, general relativity puts constraints on the movement of information (for example, Spooky Action at a Distance(tm) is faster than light, but you can't transmit information with it), and the Uncertainty Principle puts limits on how much information you can have on a given system.
Information information everywhere you look in fundamental physics. It does make me wonder why.
I'm fascinated by such things also. I often can't help thinking of God as a programmer, though I don't know how much of that is confirmation bias of my priors as a theist and a programmer. But I've always thought the periodic table and DNA both felt a lot like code...
I think it isn't a coincidence that there is so much order and elegance behind the building blocks of our physical world and the universe, but the result is chaotic and unpredictable.
Maybe it's just our brain discovering patterns that don't mean anything.
It's hard to imagine a universe with infinite resolution. You'd effectively have an infinite number of bits in an infinitely dense space, which would potentially require infinite energy to flip.
But I suspect (on no basis whatsoever, except the fact that all previous metaphors have been wrong) that the "universe = Turing machine model" is wrong in some fundamental ways.
I have no idea what the universe is, but I'm open to the possibility that it isn't just an information processing system.
Agreed, it just feels inadequate to me, too. Information theory is a hammer ... and now everything we can think of (~ information...) looks like a nail.
Another hint that maybe the big Turing machine isn't quite cutting it might be the experiments showing that gravity isn't 'just' an entropic force but seems to behave genuinely differently.
Hmm, I figure it's more that the optimizer depends on code not exhibiting undefined behavior, like in C. It's not really the case that it's lazily evaluated, just that the causality gets screwy when you try to observe things in an asynchronous system without any kind of coherent memory model.
The fastest anything can travel is the speed of light, which is the speed perceived when a result is already calculated and held in a CPU cache. For everything else, it must be loaded from RAM, Disk, Network and perhaps also operated upon - which can never be as fast as the speed of light.
That would quite possibly be the worst optimization ever.
"Yeah, I could have tracked and updated n values pretty cheaply, but instead I decided to exponentiate an exponentially huge matrix and use that to update a vector containing 2^n complex numbers associated with the possible assignments of the original n bits. Also, you should use bogosort. It's the best."
Of course. I used that assumption because it was also implicit in the original claim. If we don't have any idea what the top-level rules are, then claiming quantum mechanics is an optimization artifact falls a bit flat.
If the top-level rules are classical-ish, then quantum mechanics is expensive instead of cheap. If the top-level rules are quantum-ish, then that raises the question of why we thought quantum implied simulated. If the top-level rules are totally different... then there's not really much to be concluded.
Off to Be the Wizard by Scott Meyer is a very funny fiction book that is based on the concept of reality being a computer program that hackers can manually manipulate to do cool stuff. It mostly occurs in King Arthur's court, which happens to be the chosen time and place all these hackers wind up in.
I kind of love the idea that it's an optimization because the universe would have a shitty frame rate if light's quantum state wasn't lazily evaluated based on whether anyone's viewport was aimed at it.
Well, if you want to prove the simulation argument, all you have to do is simulate a reality for an individual at any time-dilation you like. With our steadily growing parallel compute capabilities, simulating a small section of the universe at a large time-dilation should be within our grasp sooner than we think.
There is a limit to the total computation that can be done in the universe or with a certain limited energy/volume of space. It's caused by the slowness of signals traveling at light speed. They say probably a black hole is the best computational engine there is.
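To put a rough number on that limit, here's a back-of-envelope sketch of my own (the usual caveats apply) of the Margolus-Levitin-style bound Seth Lloyd used for his "ultimate laptop": a system with energy E can perform at most 2E/(pi*hbar) elementary operations per second.

    # Back-of-envelope: max ops/sec from a given energy, ops <= 2E/(pi*hbar).
    import math

    hbar = 1.054571817e-34   # J*s
    c = 2.99792458e8         # m/s
    mass = 1.0               # kg of mass-energy devoted entirely to computing

    E = mass * c ** 2
    ops_per_second = 2 * E / (math.pi * hbar)
    print(f"{ops_per_second:.2e} ops/s")   # ~5.4e50 for one kilogram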
Well, the way I understand this, is that this result proves that we do NOT live in a simulation. Because if reality does not exist until measured, then we apparently cannot compute reality ahead of time. And if we cannot compute reality ahead of time, it does not exist yet.
I'm not familiar with this particular experiment, but all my pondering on the subject - as a layman - leads me to a simple conclusion.
The properties these experiments are measuring are simply bogus. They are not well defined. The answer that comes out is not some intrinsic property of the "particle", but the result of the environment in which the particle interacted with the "measurement" system, so to speak.
The particle has some other properties, but what's being "measured" is not one of those properties.
How can I explain?
Imagine someone who has never tried any Korean food, and you try to ask him/her: what's your favorite Korean food? There's no answer. So you try to "measure" it by feeding him some Korean items and recording his facial expressions. He will like some items more than others, but it has nothing to do with "his favorite Korean food", and has more to do with how the items were prepared and his mood at the time.
A "point" location for a photon is never defined; it's not a property of a photon that it exists in a point in space. When you fire a photon at a "wall" and see a "blip", you're not seeing the position of the photon at some point in time. You're seeing the rough position of the atom that had an electron that absorbed the photon's energy, and I'm not even sure the atom has a well defined point position either. The whole thing is an artifact (a side effect) of some interaction between several systems and doesn't really tell you anything fundamental about the photon (or the quantum object).
The topic outlined in the OP's article has been raised before, multiple times, once every couple of years. It was even the focus of a cult indoctrination propaganda piece called "What the Bleep Do We Know!?".
Essentially, yes, it's bogus science. Quantum physics is much more complex than these articles ever let on. But by explaining it simply, it sounds awe-inspiring and so it propagates across social media. Over and over again.
It isn't that particles exist in multiple states until they are measured. It is that the mechanism by which you measure very small things affects the outcome.
However, we can achieve things using the multi-state characteristics that would be impossible without them. For example, quantum computing. We have confirmed actual speed increases for algorithms using quantum computing which could not happen if the underlying particles were not actually in multiple states.
> It isn't that particles exist in multiple states until they are measured.
I thought it was exactly that. Or rather, particles exist in multiple states, and when they are measured, either those multiple states collapse into one (Copenhagen Interpretation) or you, the measurer (who also exists in multiple states), get entangled with them, causing each of your states to perceive exactly one of the particles' states (Many-Worlds Interpretation).
Unless Bohmian mechanics is correct, in which case no, particles don't exist in multiple states, but do depend on faster-than-light transmission of information about the state of other particles.
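For the MWI half of that, here's a tiny numpy sketch (a toy of my own, not the experiment): a CNOT couples a one-qubit "measurer" to a system in superposition, and unitary evolution alone leaves each branch seeing one definite outcome. No collapse step appears anywhere.

    # Toy MWI picture of "measurement": a CNOT entangles the measurer with
    # the system, turning (|0>+|1>)|ready> into (|0,saw0> + |1,saw1>)/sqrt(2).
    import numpy as np

    plus = np.array([1, 1]) / np.sqrt(2)    # system: (|0> + |1>)/sqrt(2)
    ready = np.array([1, 0])                # measurer: |ready>
    joint = np.kron(plus, ready)

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])         # system controls the measurer

    print(CNOT @ joint)   # [0.707, 0, 0, 0.707]: each branch sees one outcome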
I'm personally a really big fan of the Many-Worlds Interpretation. Because a brief abstraction would mean every person will live as long as is physically possible. To outside observers, you may have died at any point along the way.
But, regardless: the Copenhagen interpretation is from the 1920s. That isn't to say it's wrong, it's just out of date. It has been expanded upon or replaced since then, so why hold onto it? What is the current understanding?
The most important takeaway here is that since measuring very small things affects the outcome, it is currently impossible to know. Articles like this one in the OP bother me because they don't know either. But it always becomes a sensation and spreads misinformation.
> Because a brief abstraction would mean every person will live as long as is physically possible. To outside observers, you may have died at any point along the way.
Though your 'life' might not be pretty. It can be maximally awful as long as you can still perceive.
Intuition tells me this is like saying a droplet of water can both be a sphere and a single point because it'll pass through several holes on a net, but capillary action makes it collapse into a needle.
When people talk about hidden variable theories they still assume a definite answer exists.
I'm saying the answer doesn't exist because the question is not really valid in some sense.
Which kind of coincides with the idea that "the observable doesn't exist until measured", but I'm taking it a little further and saying it doesn't exist even when "measured", because you're not really measuring the thing you think you're measuring.
It seems to me there would still be a fact of the matter as to whether the person would enjoy a particular type of food prepared in a certain way, in a certain context, even if that situation never occurs or is considered?
That's the point. There is some fact of the matter, but it's not what the original question asked about. The original question is bogus. The way you perform the "measurement" influences the answer you get.
Also if you repeat the measurement multiple times, results will change.
From my personal experience, I didn't like soy sauce at first, but then I got used to it and started to really like it.
This is of course extremely interesting and probably important for understanding our universe—but how is it helping anyone to say something like, "...with scientists performing a famous experiment and proving that reality does not exist until it is measured." The statement is like a distraction at a magic show, drawing the reader to the glittery 'reality' and 'exist,' which are totally undefined so the reader's imagination can rove without limit.
Maybe this is really just a fundamental challenge to our assumptions about motion of particles or information transfer in the universe. Isn't that interesting enough without these vague, human aggrandizing assertions about creating reality?
The thing is, this isn't even a challenge to any assumptions physicists have, or even a surprising result. Every prediction of quantum theory for these kinds of atomic systems has been borne out.
All this "weirdness" is the same old story of "Is it a particle or a wave?!," when in reality, we know its neither. Quantum objects are represented by wavefunctions, or vectors in a Hilbert space, to which "particle" and "wave" are intuitive approximations in certain regimes, that makes it easier for humans to talk about in natural, non-mathematical language.
All this experiment has shown is that an object that we expect to be described by quantum mechanics turns out to, indeed, be described by quantum mechanics.
Here's the punch line: space, time and matter are components of a user interface produced through evolution. We don't take the desktop and icons of our computer UI literally, and we shouldn't take our evolved UI literally either.
He dialed back the radicalism of his position for his TED Talk. He concludes that consciousness must be something other than computation in the brain, something he just teases in the TED Talk. He spends most of the talk on the less radical Interface Theory of Perception.
If I understand him correctly, he says we don't perceive brains as they really are and therefore brains are not a physical basis of consciousness. Whoa.
The press around this experiment uses misleading language that leads people like Deepak Chopra to think that we create reality through consciousness. It has nothing to do with consciousness. This article explains it:
John Wheeler's delayed choice thought experiment was already confirmed in the lab over ten years ago. It's neat that they were able to get results with baryonic matter, and it was definitely worthwhile to try to do that, but this article seems to be implying that the results could have been anything else than what they were, or that there is new physics here, and is wrong on both counts.
Also the deference to the Copenhagen interpretation is annoying - it's wrong. What they've observed is a consequence of how decoherence works, and 'observation' has nothing to do with it. Not faulting the researchers on this but seriously, it's time to stop talking about mythical 'observation' as though it's some integral part of quantum theory.
http://www.simulation-argument.com/simulation.html says: at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
What does 'post human' mean? Unless we go extinct, we will always be humans, regardless of what species we evolve into. Our current species is "Homo sapiens sapiens", not "Human". Ergo, if we evolve into amorphous blobs of space goo, we will still be Human but not "Homo sapiens sapiens". Which means it is physically impossible for us to ever reach a "post-human" stage.
Proposition (2) is just flagrantly, blatantly bullshit. The amount of computational power needed to simulate even just the bits of the universe we conscious humans happen to be observing at any one time is so stupendously humongous that you'd get more utility out of repurposing your cosmic-scale computers to play video games.
Erm, doesn't that mean you actually agree with proposition (2)?
> (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)
Errr, yes, I misread the phrasing. Yes, I think a decently posthuman civilization is not going to be able to defy the laws of physics in such a way that would make it economical to run ancestor simulations.
Right. This way of describing it is much closer to "zero-worlds" or relational quantum mechanics, and to my mind is a far simpler, deeper way of understanding what's really going on - reality is simply all entanglement.
What really still gets me is the way it's not simply that the atom wasn't interacted with, but that if the information about the interaction never leaks to the outside world - if it's "erased" after the interaction takes place - then the system still behaves as if the interaction never took place.
It undermines not just the concept that matter really exists, but time as well.
Do you have a source for this "information leaking" concept? I remember hearing about an experiment confirming such an idea, but I can't recall what it was.
I'm not sure about the exact experiment performed in the article (which sounded similar), but I've always thought the delayed choice quantum eraser was about the best example of this, once you spend some time understanding the experiment and results:
Can you elaborate on the similarities? I took a course on medieval philosophy and theology (Anselm, Aquinas, Avicenna, and so on) so I am earnestly interested, but the connection isn't jumping out at me.
I've always wanted to read (or write?) a book about us determining we are in a simulation, but we find subtle flaws like this we're able to exploit in weird ways. Kind of like breaking out of a VM through register flaws or something. Anyone know of a story along those lines?
I've definitely read one where we're an experiment and the speed of light limit is to stop us from escaping all over the lab equipment. I can't remember what it was called now.
I feel as if fiction where we're in a simulation is relatively common, but one where the hacking of reality is actually done well and plausibly rather than handwaved would be pretty new.
I'm reading that book right now, and so far it's good. One can also read: Anathem, Snow Crash, http://qntm.org/ra, or (extremely NSFW) The Metamorphosis of Prime Intellect.
All of them explore the nature of reality and consciousness in some way.
To the parent commenter, please write your story. The world always needs good fiction.
I just finished reading Permutation City a couple weeks ago. Good fun. The core conflict is between two different approaches to modeling, via mimicry of existing systems versus really interesting cellular automata. The automata version doesn't need to 'cheat' in the same way, and is thus a bit harder to find the edges of... (Hopefully that doesn't count as spoilers.)
I think Charlie Stross did a particularly interesting one that happened within a simulation where the characters ultimately discover the enemy they were fighting already won and their simulation ran under a larger simulation. Can't remember the name...
Maybe you should approach the revelation and "breaking through" philosophically rather than logically... if we are a simulation, the logic we work with is artificial and arbitrary anyway. So in the story, get to the "real reality"... by taking the ideal of what reality should be, comparing it to what is observed to be, and somehow "cracking" into ultimate reality by doing so.
> Kind of like breaking out of a VM through register flaws or something
The difficulty is keeping track of the context while you attempt the exploit. You would have to be able to plant code outside the simulation without causing it to crash.
OTOH, from within, you'd only see a successful attempt to escalate privileges. Imagine being able to edit reality.
Once you read Teller and Hanrahan's SIGGRAPH paper on potentially-visible set computation, you instantly grok what quantum mechanics is for. At least in an "I want to believe" sense.
How do we go about determining if we are in a lossless compression or not?
I wonder if there is some ordering of "conservation of ---" laws that is strictly enveloping/hierarchical, such that you could choose a level at which to simulate/design a universe.
Ah, but how do you know the simulation doesn't just slow down the simulated flow of time so it can keep up? Think of a bullet-time like concept. The universe can be simulated as fast as the underlying hardware can keep up.
But if you're enmeshed in the simulation, you would never notice the slowdown! The only reason the video game is annoying is because we experience time outside of it.
I find it fascinating how the speed of light, quantum indeterminacy and Planck's length could all be seen as allegories for computational optimisations in a simulated universe.
If I'm standing here, all the stuff that the atoms in my body could conceivably interact with has to be backfilled for me to interact with it, according to this experiment. But since I'm not special to the universe, atoms three billion light years over are still interacting with each other, just not with me. So am I decohered to them? Is this like a divergent-timelines theory, such that coherence is defined as when different possibilities converge while following possibilities, reducing them, and then so must necessarily collapse as other possibilities drop off? So what's the difference between cohered and decohered reality, then? It sounds like it would just be one of those paths, the one that we happen to be on, which we only notice because we're conscious, so that's our arbitrary (to the universe) observation point. That's the only way I can make sense of this without attaching significance to human observation.
The philosophical interpretation of quantum mechanics is a very hard, unsolved problem. There are many reasons why quantum mechanics is theoretically good and philosophically terrible, and not just in a subtle, esoteric way. The reason is that in quantum mechanics there are 2 main types of entities: particles, "things" that evolve according to the Schrodinger equation, and "observers". Observers are what deliver to us the results of randomly sampling from the probability distribution defined by the squared amplitude of the wave function by "collapsing" it, according to the Copenhagen interpretation. However, there is no particle that acts as an observer, they all just follow the Schrodinger equation, but nothing that exists isn't a particle. How could "observers" exist and interact with particles then?

The Copenhagen interpretation is philosophically terrible. And it really pisses me off that this article title seems to hint that they've really confirmed it. In a lab, an experimenter can just point to his apparatus and say, "that's the observer". Or, being more formal, they can say a thermodynamically irreversible process plays the role of an observer. But this is not really a satisfying explanation, because how could it be possible to generate these large, discontinuous motions we call "collapse" on the macroscale if it is impossible on the microscale? There are the multiverse theories that you seem to describe, but they have their own problems.

Rae's Quantum Physics: http://www.amazon.com/Quantum-Physics-Illusion-Reality-Class...
goes over a lot of them without getting too messy in the math. Other interpretations of QM are in the book as well. There are many:
> Observers are what deliver to us the results of randomly sampling from the probability distribution defined by the squared amplitude of the wave function by "collapsing" it, according to the Copenhagen interpretation. However, there is no particle that acts as an observer, they all just follow the Schrodinger equation, but nothing that exists isn't a particle.
This is why proponents of the Many-Worlds Interpretation claim that Occam's razor favours it: you don't need to posit "observers" or "collapses"; there's just the evolution of the wave function.
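As a concrete illustration of "just the evolution of the wave function" (a toy sketch of my own, with hbar set to 1 and scipy assumed), you can evolve a qubit with U = exp(-iHt) and watch the norm stay exactly 1 the whole time; there is no collapse step anywhere.

    # Unitary (Schrodinger) evolution only: probabilities shift, norm stays 1.
    import numpy as np
    from scipy.linalg import expm

    H = np.array([[0, 1], [1, 0]], dtype=complex)   # toy Hamiltonian (Pauli X)
    psi0 = np.array([1, 0], dtype=complex)          # start in |0>

    for t in np.linspace(0, np.pi, 5):
        psi = expm(-1j * H * t) @ psi0              # U = exp(-iHt)
        print(round(t, 2), np.round(np.abs(psi) ** 2, 3),
              round(float(np.linalg.norm(psi)), 6))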
Occam's razor is not a scientific principle, it's just a rule of thumb. I have yet to see a proof that, given a number of equally strong explanations for a phenomenon, the simplest one is always true.
One other thing that's often forgotten is that in Occam's opinion the simplest explanation for everything was God.
Let us not forget about Pilot-Wave theory, where the system has both particle and wave-like properties simultaneously. In fact, I don't quite get why people are so enamored of these fanciful interpretations when Pilot-Wave is so much more down-to-earth.
I'm not really qualified to talk on a technical level about the matter, but besides the distaste some people have for Aether Theories, there are other reasons some scientists don't like PWT, especially as it's demonstrated by the behaviour of oil droplets.
The physical, conceptual differences between any quantities describing droplets on one side and the wave function on the other side are clear. The former are observable – you may actually measure what the shape of the droplet looks like; you can't measure the wave function by any apparatus, at least not in a single repetition of the experiment. The former have an objective interpretation; the latter has a probabilistic interpretation, and so on. The wave function just encodes all the probability distributions for actual observables – but the wave function isn't and can't be one of them.
I came here to say the same thing - this behaviour is readily explained by a pilot wave formulation of QM, and requires no spookiness. An increasing number of physicists are re-evaluating Bohm and de Broglie's work - I for one think we've been down a 60-year dead end, one which has yielded models, but no plausible mechanism.
It can be - I did a graduate paper on the topic some years ago, looking at it through the Dirac sea - and it appears others are taking an interest in the topic too. Here's a nice paper which gives a good overview:
Something like this should be the top reply for every "quantum weirdness" Hacker News link.
Schrödinger's cat thought experiment was meant to highlight the absurdity of stochastic thinking by taking it to the extreme, not as a description of reality.
It addresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do"
armchair musings of a layperson physicist/philosopher follow:
the non-interference pattern is the optimized result of a deterministic universe that requires the observation to occur. The measurement didn't reach back in time; the results were specifically determined by the same causal chain that determined an experiment would be performed.
I always found the notion that "if you measure it, it behaves differently" to be really confusing. When considering the double slit experiment, in both cases (whether we are observing the particle going through a single slit or observing the interference pattern left afterwards) we are in fact simply measuring 'observable effects' - it's just that we are measuring them in different ways.
If you didn't measure both cases, you wouldn't be able to compare their outcomes.
It seems that it is not about whether or not the event was measured/observed but about HOW and WHEN it was measured.
In neither case do we actually 'witness it happen' - In both cases, we are just observing effects of those events.
The light which allowed us to 'directly observe it' is as much a byproduct of the actual event as the interference pattern left behind on the surface.
In the time of Empiricism, the philosophers George Berkeley and John Locke proposed something similar. I believe it was called immaterialism, or the idea that nothing exists without being perceived. Berkeley went further, saying that objects only exist in the mind, or something like that.
One of my favorite authors, Jorge Luis Borges, wrote a short story, "Tlön, Uqbar, Orbis Tertius," based on Berkeleyan philosophy. I thought again of this passage when I read the headline:
"Things duplicate themselves on Tlön; they also tend to grow vague or 'sketchy,' and to lose detail when they begin to be forgotten. The classic example is the doorway that continued to exist so long as a certain beggar frequented it, but which was lost to sight when he died. Sometimes a few birds, or a horse, have saved the ruins of an amphitheater."
They're not saying "nothing exists without being perceived", because the probabilities of things at the quantum level all do exist.
It seems sort of unfortunate that physicists would call these quantum probabilistic behaviors "not reality", because they are just as real as anything else.
Right I'm aware of that. I just thought it was an interesting parallel.
Locke based his works off of the physics known at the time (Newton, etc.). His theories were later definitively refuted by advancements in physics. Though it wasn't his intended meaning, it is interesting at least to see similar language being brought back by physics.
"If one chooses to believe that the atom really did take a particular path or paths then one has to accept that a future measurement is affecting the atom's past, said Truscott."
Is that equivalent to "reality doesn't exist until it is measured"? Because I don't see the latter claim (which is the headline on HN) anywhere in the text?
Also, didn't Feynman explain in QED that it's not either a wave or a particle, it's always a particle, and the probabilities for the path the particle takes behave like waves? (Something like that, I am foggy on the details.)
The jokes above that say that this was done by our simulators to save on computation are amusing but wrong. Quantum computation is famously harder than classical computation.
That's assuming that the simulation is running on classical computers. The simulation could be running on quantum computers or who-knows-what. Quantum computers might not be so inconvenient to build in the host world's physics.
So, the Sun doesn't exist when the experimenter is unconscious? I see.
It seems like at least Philosophy 101 should be mandatory for quantum physicists.
Reality doesn't depend on an observer (who distorts it by his observation). Reality just is.
There are light and other temporary states of what we call "energy". That's it. Time, space, relativity are human concepts - the hard-wired modes of perception which condition our experience. From a photon's perspective none of these exist.
Photons can't have "temporary states" because photons don't experience time. That's where the word "temporary" comes from, the Latin "tempus" meaning "time".
Reality is just a giant whirling layer of infinite View Master-like multidimensional slides, overlaid one upon the other. Every slide locks into place to be seen as we are looking in whichever direction, at whatever time. I'd wager time travel is never discovered by humanity, but that some sort of time or dimensional goggles are created, eventually.
Essentially, we're all watching our own multi-dimensional, multilayer, composite TV channels and seeing shadows of each other across our screens. Allegory of the Cave meets 3D Ray Tracing and such.
That's nothing, though. Not compared to the realization that every face is a mirror, and we're all stuck here until we can treat each other as ourselves.
Maybe it's not true, maybe it's insane ramblings. But it does explain the crazy doods muttering to themselves on the corner and those people in your life who "just aren't watching the same channel as the rest of us."
Exactly. I'd recommend the other 100 commenters or so who posted a similar reply to learn some basic quantum mechanics. Contrary to popular belief it's not that hard if you have a grasp of math.
Absolutely! As is perspective. Time just translates to a whole series of indexes for which planes of existence, in which dimensions are aligning into the measuring perspective. It's the pointer for some such reflection, and angle to see it.
Those planes of existence are these slides I mention. Whirling in time and subject, but always a representation of some spherical center whole that is the perspective of another. We're all just holes looking into this swirling miasma of time/space/parallel-dimensions/etc, seeing shadows of holes.
You are assuming a multidimensional universe. There may just be a single dimension, with just a now, a single state solution, t=1, d=1. But physicists like to ignore that solution because it doesn't get the juicy book deals.
Guys, I'm a total quantum physics noob, but when I read this post I couldn't help but think of Alan Watts and the Taoist definition of the theory of time (quoted below). HN, in your opinion, how does this theory hold up to the new finding (if at all)?
Big thanks to anyone taking the time to humour me, I really appreciate your time.
'So when you have this happening the other illusion that a Westerner is liable to have is that it's determined in the sense that what is happening now follows necessarily from what happened in the past. But you don't know anything about that in your primal ignorance. Cause and effect? Why, obviously not! Ha ha ha! Because if you're really naïve you see that the past is the result of what's happening now. It goes backwards into the past like a wake goes backwards from a ship.'
For me the only way I can make wave-particle duality and associated observations believable is to think that light and similar entities always travel as a wave, but sometimes interact like particles.
This understanding is in line with this experiment, without believing that the future affects the past. It's just that the atom is never a particle until it reaches the detector at the end. Only there does it display the ability to exchange momentum with one other single atom, as if two billiard balls hit. Throughout the whole experiment it travels as a wave, along both paths, grating or no grating. It just either interferes if there was a second grating, or doesn't if there was no second grating.
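A minimal sketch of that picture (toy numbers of my own, not the paper's): add the two path amplitudes coherently and you get fringes; add their probabilities instead (nothing recombines the paths) and the pattern is flat.

    # Two-path toy model: coherent sum interferes, incoherent sum doesn't.
    import numpy as np

    phase = np.linspace(0, 4 * np.pi, 9)          # relative phase between paths
    amp1 = np.full(phase.shape, 1 / np.sqrt(2), dtype=complex)
    amp2 = amp1 * np.exp(1j * phase)

    fringes = np.abs(amp1 + amp2) ** 2            # oscillates between 0 and 2
    flat = np.abs(amp1) ** 2 + np.abs(amp2) ** 2  # stuck at 1

    print(np.round(fringes, 3))
    print(np.round(flat, 3))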
Quantum Bayesianism [0] treats such effects explicitly as Bayesian belief updates of the experimenter. This leads to an interpretation of quantum mechanics free of such paradoxical/counterintuitive statements about reality. I'd like to hear other physicists' opinions on minority interpretations like this.
But who was measuring before the big bang, and in the millions of years before we appeared? What is so special about measurement? I thought that our brain was merely a quantum computer?
Most physicists or, depending on who you talk to, nearly half of physicists, would appear to agree with you.
I'm convinced that the Copenhagen interpretation remains popular because by making observation itself an integral part of the theory, you allow us to postulate that there is something special about human brains. But 'mysterious observation' is the luminiferous aether of quantum mechanics.
photonic29: I would like to continue our discussion, but HN has some kind of stupid rule where I can't make more than five posts within a (I think?) 12 hour period. I don't know if this is a general rule that applies to everyone, or simply one of the innumerable passive-aggressive account handicaps our gracious mods will afflict us with if we catch them on a bad day.
At any rate, I can't post anymore for now, so our discussion about MWI and Copenhagen interpretation can't happen. Sorry.
Is that really the case though? An observation collapses a wave function. The Copenhagen interpretation suggests that the collapsed state arises from a probability distribution, but it does not address the "fundamental" origin of that distribution. MW attempts to address it by suggesting that the rest of the reality just went elsewhere, rather than disappearing or never having existed at all, but it still does not explain why "we" get "this" reality. Both rely on observations to collapse the wave function, and neither specifically calls out a conscious agent as necessary for an observation to occur. Observation is a measurement, whether intended and registered by a brain or not.
Not if the MWI is true. Talking about wave function collapse presupposes that the Copenhagen interpretation is true. In the MWI, there is no collapse; it's unitary evolution all the time.
MWI has more than its fair share of physics woo: "QM tells us that everything happens!" and so on. Of course all that's irrelevant to what MWI actually says. And likewise with Copenhagen: no respectable physicist thinks that consciousness has any physical effect on quantum systems. Measurements can be taken by machines.
Regardless, the nature of observation really is mysterious. A measurement projects the wavefunction onto an eigenfunction in accordance with Born's rule. MWI does not adequately explain why or how, and Copenhagen simply inserts it as a postulate. Neither is especially satisfying. So the measurement problem is unresolved.
You can make an observation of a system whose state is undetermined. By interrogating the system for its state, a state becomes determined. Suppose, for example, that you have flipped a coin and sent it rotating in space, never hitting the floor. Is it heads or is it tails? Until it is looked at, the question doesn't really make sense. And for this case, we'll define "looking at it" to mean sticking out your hand and catching it. There is a probability that it's landed heads up in your hand, and a complementary probability that it's landed tails up. Once it's in your hand though, you can confidently say which state it's in.
Now, you could definitely take issue with this example, because you could argue that the rotation of the coin is well described, so with initial conditions, you can predict its position at any given moment. But imagine a microscopic quantum system, and, for the sake of this simple explanation, believe that its "rotating through the air" state really does not have any precise heads or tails definition. Until something gets in the way of that system, creating an interaction that exchanges information about its observable state, it's not meaningful to say that it's in one of the observable states at all.
A superposition of states, as such, is essentially the representation of a state in terms of a basis set of observables. In the case of the coin, heads and tails are the two observable states, they are orthogonal, and they fully represent the state space of the coin. You could flip the coin, and put its state vector into the form of sqrt(2)/2 * Heads + sqrt(2)/2 * Tails. This state isn't observable, but it can be described in terms of observable components, where the squared magnitudes of the coefficients give the probabilities that a given observable state will be measured upon observation.
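Here's that coin state as a few lines of numpy (a toy sketch of my own): the amplitudes are sqrt(2)/2, their squares give the 50/50 probabilities, and sampling reproduces the statistics.

    # Coin-as-qubit toy: squared amplitudes are the measurement probabilities.
    import numpy as np

    rng = np.random.default_rng(0)
    state = np.array([np.sqrt(2) / 2, np.sqrt(2) / 2])  # [Heads, Tails] amplitudes
    probs = np.abs(state) ** 2                          # [0.5, 0.5]

    outcomes = rng.choice(["Heads", "Tails"], size=10000, p=probs)
    print(probs, (outcomes == "Heads").mean())          # ~0.5 heads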
> You can make an observation of a system whose state is undetermined. By interrogating the system for its state, a state becomes determined.
For QM, this is not correct, although it's a common misstatement. The correct statement is this: you can make an observation of a system which is not in an eigenstate of the measurement operator you are using. After the measurement, the system is now in an eigenstate of the measurement operator--i.e., the act of measurement changes the state.
Note that this is only true on a collapse interpretation, like Copenhagen. On a no-collapse interpretation, like MWI, the "observation" is just an interaction that entangles the state of the measuring device with the state of the system being measured--it's all just unitary evolution.
> You could flip the coin, and put its state vector into the form of sqrt(2)/2 * Heads + sqrt(2)/2 * Tails. This state isn't observable
Yes, it is; but it isn't observable by a simple method like looking to see if the coin is heads or tails. But according to QM, every state is an eigenstate of some operator, so there will be some observation that will distinguish sqrt(2)/2 * Heads + sqrt(2)/2 * Tails from the state that is exactly orthogonal to it, which is sqrt(2)/2 * Heads - sqrt(2)/2 * Tails.
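To make that concrete (another toy sketch of my own): in matrix form the operator in question is the Pauli X, and the two superpositions are its eigenstates with eigenvalues +1 and -1, so an X measurement separates them perfectly.

    # (H + T)/sqrt(2) and (H - T)/sqrt(2) are eigenstates of Pauli X.
    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    plus = np.array([1, 1]) / np.sqrt(2)    # sqrt(2)/2 * Heads + sqrt(2)/2 * Tails
    minus = np.array([1, -1]) / np.sqrt(2)  # sqrt(2)/2 * Heads - sqrt(2)/2 * Tails

    print(X @ plus)    # equals +1 * plus
    print(X @ minus)   # equals -1 * minus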
>The correct statement is this: you can make an observation of a system which is not in an eigenstate of the measurement operator you are using.
I should have distinguished better, but what you're more rigorously calling an eigenstate of a measurement operator, I'm calling an observable state. There is something lost in translation to an audience unfamiliar with terms like eigenstate, but that was my attempt. Would you suggest a better one?
>Note that this is only true on a collapse interpretation, like Copenhagen. On a no-collapse interpretation, like MWI, the "observation" is just an interaction that entangles the state of the measuring device with the state of the system being measured
The greater point being addressed is that MWI is no more deterministic than Copenhagen.
>Yes, it is; but it isn't observable by a simple method like looking to see if the coin is heads or tails. But according to QM, every state is an eigenstate of some operator
Some Hermitian operator? But more to the point, if looking at the coin is the only operator at our disposal in the simple example, then its eigenstates are the ones we care about.
> what you're more rigorously calling an eigenstate of a measurement operator, I'm calling an observable state.
Yes, but "observable" here is relative to the measurement you are making. If you make a different measurement (i.e., realize a different operator), then the set of "observable states" by your definition is different, because the set of eigenstates of the operator is different.
> The greater point being addressed is that MWI is no more deterministic than Copenhagen.
But this isn't true. The MWI is completely deterministic, because wave function collapse never occurs, and wave function collapse is the source of all the indeterminism in the Copenhagen interpretation.
> Some hermitian operator?
Yes.
> if looking at the coin is the only operator at our disposal in the simple example, then its eigenstates are the ones we care about.
If all you're interested in is that particular experiment, yes. But here we're discussing claims that must apply to all possible experiments and all possible measurements, not just the particular one in the example you chose. So we have to consider all possible operators and all possible sets of eigenstates, not just the ones in your example.
>But this isn't true. The MWI is completely deterministic, because wave function collapse never occurs, and wave function collapse is the source of all the indeterminism in the Copenhagen interpretation.
For what useful definition of deterministic? If a measurement comes with decoherence into multiple non-interfering branches, then certainly the state evolves in a predictable way from "god's eye", but not from the perspective of the experimenter occupying any given branch.
The definition that says the future state is entirely determined by the present state. That's the only definition I'm aware of.
>the state evolves in a predictable way from "god's eye", not from the perspective of the experimenter occupying any given branch.
The entire "god's eye" state is the one that appears in the dynamical laws of QM (unitary evolution), so that's the one that's relevant for assessing determinism.
> the state evolves in a predictable way from "god's eye", but not from the perspective of the experimenter occupying any given branch.
This "apparent randomness" of measurement results is equally true of chaotic classical systems; it's not something that only appears in QM. Basically, it's just a consequence of the fact that individual "observers" will in general not have complete knowledge of the state. That doesn't mean the state doesn't evolve deterministically; it just means the observers don't have complete knowledge.
>This "apparent randomness" of measurement results is equally true of chaotic classical systems; it's not something that only appears in QM.
Is that a fair comparison? Yes, in either case, the experimenter is limited in his predictive capability by the information available to him. But in a chaotic system, your predictive power can be improved arbitrarily by surveying more information with greater precision. As I understand it-- and hopefully you can clarify if this is accurate-- decoherence forbids a measurement from receiving information from a branched outcome, so even if you take a measurement with arbitrary access to information now and repeat the same measurement in the future, there becomes a set of information that is fundamentally off limits to the observer in a given branch.
I think so. Perhaps it will help if you look at it this way: you repeat some measurement multiple times, and get a sequence of results that looks random. Is the randomness because of classical chaos, or because of quantum "indeterminacy"? From the measurement results themselves, in many cases, there will be no way to tell. The only case in which there would be a way to tell would be if you specifically made measurements on entangled quantum systems in order to test the Bell inequalities; if those inequalities are violated, the measurements can't be due to classical chaos. But that just underscores my point: looking at "apparent randomness" of measurement results is not sufficient to tell whether they are due to "quantum indeterminacy".
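A sketch of that one exception (toy numbers of my own): for the singlet state the quantum correlation between measurements at angles a and b is E(a, b) = -cos(a - b), and the standard choice of angles pushes the CHSH combination to 2*sqrt(2), past the bound of 2 that any local hidden-variable (or classically chaotic) model can reach.

    # CHSH with the singlet-state correlation E(a, b) = -cos(a - b).
    import numpy as np

    def E(a, b):
        return -np.cos(a - b)          # quantum prediction for the singlet

    a, a2 = 0.0, np.pi / 2             # Alice's two settings
    b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))                      # 2.828... > 2, the classical limit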
> decoherence forbids a measurement from receiving information from a branched outcome
Once again, this is a misleading way of stating it. What is happening, again, is that the observer evolves into a superposition, corresponding to the superposition that the measured system is in. Decoherence just means the branches of the superposition don't interfere with each other. But the system is still in a single state; the "branches" are not separate states or separate entities, they're parts of a superposition.
(Note, also, that decoherence does not guarantee that the different branches will never interfere with each other. Decoherence is not a fundamental limitation; it's just a recognition of what happens in the usual case, where no special measures are taken to isolate the system or to facilitate interference. According to the MWI, there is in principle always a way to cause the different branches to interfere, i.e., decoherence is never absolute.)
> there becomes a set of information that is fundamentally off limits to the observer in a given branch
According to the MWI, the observers in different branches are not different observers; they're different terms in a superposition that the observer is in. Thinking of them as "different observers" with access to different information implicitly assumes something like the Copenhagen interpretation.
For us, reality, by definition, is what we perceive. Our mind/brain/consciousness interprets the perceptions, draws conclusions, and makes up reality by interfacing with an electromagnetic medium. When trying to explain popular quantum physics, one should avoid using the word "reality" loosely.
I think most people's definition of reality would be precisely not that it "is what we perceive", in that two people can and often do perceive the same underlying reality differently. That there is an underlying reality at all may be an incorrect assumption, but it is the intuition most people have.
What does this mean? If there was a supernova eons ago in another galaxy and I'm the only human who had been hit by a cosmic ray, does that mean it didn't exist or happen until ... ?
Does measurement have to include an agent? Could measurement mean interaction with other atoms?
>Does measurement have to include an agent? Could measurement mean interaction with other atoms?
Indeed it can. Roughly put, if information about the state left the undetermined system, a measurement has been made. One of the most frustrating interpretations of literature such as this is the idea that there is something spooky, special, and reality-making about a conscious mind. Lots to think about there philosophically, but the physics happens at lower levels of abstraction.
The trouble with questions like yours is that we're the ones asking them. The results of an unobserved measurement cannot be known, so there's no way to completely remove agency from our experiments.
Personally, I think the simpler and less egocentric view is that all measurements produce wavefunction collapse, even when there's nobody looking.
Is it not possible that it could have been observed no other way? Is observation driving the behaviour, or behaviour driving the observation (or an external factor driving both)?
Um, this doesn't prove reality doesn't exist until it is measured. It just proves that the measurement and the "decision" about which state to appear in happen simultaneously.
It could mean that time on quantum scale doesn't differentiate past/future.
Sure it does, because while the quantum scale certainly doesn't carry the same sense of time, we do.
Everything we see, what we call reality, is made up of the Lego blocks of electrons, protons and neutrons, and yet even at that level it makes no sense to talk in concrete terms about, say, an electron's spin. We would expect that the spin exists and we find out what it is when we measure it, but it's not that. It's that talking about its spin is meaningless until you measure it. What you can reason about the Lego blocks of everything we see, smell, touch and taste, what gave birth to us and what will kill us, is indescribable until it's asked to be described. Whether that's because it's locked-in to time or not is irrelevant because it means the same thing: reality is not there until it's there.
You can only conclude that "physical reality doesn't exist until observed" if you assume nothing can move faster than light.
Also, hidden variable theories aren't yet categorically disproven. A fractal theory could explain why we keep getting different results when measuring reality in different ways.
I will happily assume that energy/mass cannot move faster than light.
You know, I think a hidden variable theory is a crutch. To me, there's simply a Planck's constant foundation that prevents infinite regression just as much in terms of information as it does with radiation, etc.
If you're measuring N-S electron spin, you will get 100% N spin or 100% S spin but never any E-W spin. Period. If you're measuring E-W electron spin, you will get 100% E spin or 100% W spin, but never any N-S spin. You can wordsmith it how you wish, but electron spin is directly tied to the spin you look at. To speak of the "reality" of the spin when you're not measuring it makes no sense.
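In the standard formalism that looks like this (a minimal sketch of my own): take the "N" state a Z measurement leaves you in, project it onto the X eigenstates, and the E-W outcome is a fresh 50/50 rather than a revealed pre-existing value.

    # After a Z ("N-S") measurement, an X ("E-W") measurement is a 50/50 toss.
    import numpy as np

    north = np.array([1, 0], dtype=complex)                # Z eigenstate, "N"
    east = np.array([1, 1], dtype=complex) / np.sqrt(2)    # X eigenstate, "E"
    west = np.array([1, -1], dtype=complex) / np.sqrt(2)   # X eigenstate, "W"

    p_east = abs(np.vdot(east, north)) ** 2
    p_west = abs(np.vdot(west, north)) ** 2
    print(p_east, p_west)   # 0.5 0.5 - no pre-existing E-W value to uncover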
In my previous comment I didn't mention relativity.
> If you're measuring N-S electron spin, you will get 100% N spin or 100% S spin but never any E-W spin.
Ok, but I'm not arguing against that. I'm arguing that there are alternative interpretations of that event other than "reality doesn't exist until we look at it". There are other explanations: retroactive causality, many worlds, information-based ones, etc.
Basically, just read the top comment, it summarizes my thoughts on the matter.
A lazily computed simulation would lazily compute entities that don't see the laziness. Laziness should have no measurable effect within the simulation, unlike quantum mechanics.
As an analogy, consider Hashlife [1]. It works very differently than your typical Game of Life implementation... but it still agrees on all the intermediate states! The board won't behave normally under most implementations but then end up in states spelling out "we know we're being cached hierarchically!" when you simulate with Hashlife.
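The same point in miniature (a toy of my own using plain Life rules, not real Hashlife): a memoized step function and the naive one produce identical boards at every generation, so nothing on the board can ever detect the caching.

    # Caching is invisible from inside: memoized step == naive step, always.
    from functools import lru_cache

    def step(board):
        n = len(board)
        def live(r, c):
            s = sum(board[(r + dr) % n][(c + dc) % n]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            return 1 if s == 3 or (s == 2 and board[r][c]) else 0
        return tuple(tuple(live(r, c) for c in range(n)) for r in range(n))

    @lru_cache(maxsize=None)
    def cached_step(board):
        return step(board)      # same rule, just lazily reused

    glider = ((0, 1, 0, 0, 0),
              (0, 0, 1, 0, 0),
              (1, 1, 1, 0, 0),
              (0, 0, 0, 0, 0),
              (0, 0, 0, 0, 0))

    a = b = glider
    for _ in range(20):
        a, b = step(a), cached_step(b)
        assert a == b           # the simulated world can't tell the difference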
You made me remember a comment I made some months ago about the speed of light being a performance optimization on a post asking if reality was a computer simulation.
Which is another way of saying that reality is invented.
"Measuring" is applying our sensors onto the external reality and producing a map of what is observed in the form of thoughts, ideas, imagery - 'perception'.
But since the sensors are also part of reality, which does not exist before sensing it, it can be postulated that what is perceived is not a consequence of reality hitting the sensors, but the result of a new thought about reality being perceived, or simply - an invention.
That is, reality (including your body, brain and you) is the product of a thought process, but (here's the interesting part) the thinker is you and not you at the same time, or rather - the thinker is you and every other being.
That thinker is called God. Or Universe or whatever you want to call the thing or being that is the eternal recursive loop of self-invention / self-perception.
Of course very hard to put into language, but easily grokked under psychedelics.
It's interesting that science (and math) is slowly pointing towards this conclusion too, a thing that many great scientists arrived at intuitively.
First, "realism" [1] is not the same thing as "reality". "Realism" basically means "physical quantities have a definite value". "Reality" is that thing that determines your experimental outcomes. Don't mix them up.
Second, interpretations of quantum mechanics disagree wildly about what kind of weird you use to explain things. Some interpretations have "realism", some don't. Some interpretations have retrocausality, some don't. Some interpretations have FTL effects, some don't. Since all the interpretations give (mostly) the same experimental predictions, it's misleading to single one out and say just that particular brand of weirdness was confirmed.
We confirmed that there's weird there. We didn't distinguish what brand of weird it is. Physicists widely disagree about which brand of weird to use, with no position achieving even a majority [2]. The original title was better.
1: https://en.wikipedia.org/wiki/Na%C3%AFve_realism#Realism_and...
2: http://www.preposterousuniverse.com/blog/2013/01/17/the-most...