Hacker News
A physicist who bets that gravity can’t be quantized (quantamagazine.org)
268 points by theafh on July 10, 2023 | 369 comments



> It’s become dogma. All the other fields in nature are quantized. There’s a sense that there’s nothing special about gravity — it’s just a field like any other — and therefore we should quantize it.

I keep on citing Stephen Hawking here on HN, but it again seems very appropriate:

> It would be rather boring if this were the case. Gravity would be just like any other field. But I believe it is distinctively different, because it shapes the arena in which it acts, unlike other fields which act in a fixed spacetime background.[0]

[0]: https://arxiv.org/abs/hep-th/9409195v1


I'm definitely in the "it's an emergent phenomenon" camp. I think it's inherently relational, since relations are a more efficient way to encode geometry than space itself being a quantized "thing".

Afaik this is the "leading"/imho most promising theory of "quantum gravity": the "gravity = entanglement" conjecture and the related "complexity = action/volume/whatever" ideas that Susskind and many others have been developing for the past 20 or so years.

All that said I'm nowhere near a physicist and am probably just spewing a total misunderstanding of the situation from my armchair.

That said, I've been incessantly watching lectures in this space to try to beat an understanding into my dumb dumb brain because it's super, super fucking cool.

- https://youtu.be/6_7aKoEx_kk

- https://youtu.be/6OpAreb779U

- https://youtu.be/9crggox5rbc

- https://youtu.be/OBPpRqxY8Uw

- https://youtube.com/@isqg423

I'm also fascinated by the idea of phase transitions, which seems to be how the laws of physics have "evolved" so far. It's crazy how much quantum computation is coming into play with this stuff, e.g. last year's Nobel prize for tests of the Bell inequality. That said, I'm sure that as a programmer I'm biased toward thinking the universe is inherently computation/math.


It's not relational. There are several rigorous proofs against Spinoza's relational theory, namely those of Kant and Einstein:

"even if space is composed of nothing but relations between observers and events, it would be conceptually possible for all observers to agree on their measurements, whereas relativity implies they will disagree" [0]

0: https://en.wikipedia.org/wiki/Relational_space


Unless one of the observers accelerates for some reason. Now his observation is privileged.


Doesn't relativity mean there's no difference between that observer accelerating in one direction and everything else accelerating in the opposite direction? So wouldn't everyone's observations be equally privileged? I'm asking honestly, as I am (obviously) not a physicist.


You would think so, but no. There is something special about acceleration, specifically how much proper time you accumulate along your path through spacetime.
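To make the asymmetry concrete: along any worldline a clock accumulates proper time tau = integral of sqrt(1 - v^2/c^2) dt, and the observer who turns around (i.e. accelerates) ends up on a path with less accumulated proper time. A minimal sketch of the standard twin-paradox arithmetic (the speed and durations are made-up round numbers for illustration):

```python
import math

def proper_time(speed_fraction, coordinate_time):
    """Proper time elapsed for a clock moving at constant speed v = speed_fraction * c."""
    return coordinate_time * math.sqrt(1.0 - speed_fraction**2)

# Stay-at-home twin: 10 years of coordinate time, at rest.
tau_home = proper_time(0.0, 10.0)

# Traveling twin: 5 years out and 5 years back at 0.8c.
# The turnaround is the acceleration that breaks the symmetry.
tau_traveler = proper_time(0.8, 5.0) + proper_time(0.8, 5.0)

# tau_home = 10 years, tau_traveler = 6 years: the accelerated twin ages less.
```

Both twins can claim the other was "moving", but only one of them felt the turnaround, and that is the one with less elapsed proper time.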


There are some pretty strict limitations on emergent phenomena, like the Weinberg–Witten theorem. It doesn't rule out the emergence of gravity, but it makes it less likely.


To my understanding that only applies if you're trying to obtain the graviton as an emergent particle. I'm personally of the belief that spacetime itself is an emergent phenomenon.

https://en.m.wikipedia.org/wiki/Composite_gravity


If gravity is not quantized, would that not mean that it has infinite information (the accuracy to represent its values to infinite digits of precision), and thus cause a black hole due to such high information density?


IIUC, quantized doesn't mean finite, it just means discrete. Energy being quantized in bound states in quantum mechanics means that the eigenvalues are discrete, but a state can still be any linear combination whatsoever of the eigenstates.

And since spacetime is continuous, it's determined by its values on a dense subset, in particular a countable dense subset, which would make both sets of possibilities, gravity and quantum, have cardinality something like R^N.


What determines the amount of information? Is it the total values the gravity could have had with unlimited mass? The number of values that are less than the one it is observed to be? All the values in the probability distribution for some uncertainty thing?

If you have a carbon atom, does it have unlimited information because it theoretically could instead have been some other number of atoms?


> What determines the amount of information?

Noise. Your ability to distinguish between states.

> If you have a carbon atom, does it have unlimited information because it theoretically could instead have been some other number of atom

You don't even need that. A single hydrogen atom has an infinite number of bounded energy states. In principle, you could store infinite information by putting an electron in the nth energy state and keeping it there.

In practice, you can't, due to noise.
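For concreteness, here's a sketch using the Bohr formula E_n = -13.6 eV / n^2: hydrogen has infinitely many discrete bound levels below the ionization threshold, but the spacing between neighbours shrinks toward zero, which is why any finite noise floor defeats the "infinite storage" scheme. (The constant is the standard Rydberg energy; the cutoff at n = 100 is arbitrary.)

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, in eV

def bound_state_energy(n):
    """Bohr-model energy of the nth hydrogen bound state, in eV."""
    return -RYDBERG_EV / n**2

# Discrete levels, infinitely many, all below the E = 0 ionization threshold.
levels = [bound_state_energy(n) for n in range(1, 6)]

# The gap between neighbouring levels shrinks toward zero as n grows,
# so a finite noise floor eventually blurs adjacent states together.
gaps = [bound_state_energy(n + 1) - bound_state_energy(n) for n in range(1, 100)]
```

So "quantized" buys you discreteness, but without a bound on n (or with noise larger than the level spacing) it doesn't buy you finite information.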


Is that not what it does?


General relativity requires singularities, QM prohibits them. It’s a puzzle.


Keep in mind in GR singularities don't really make sense either: i.e. when you pass the event horizon of a black hole, under GR the rules say that you must always be traveling towards the singularity. But the corollary of this is that it's not actually possible - mathematically - to arrive at the singularity (because then you'd be moving parallel to it rather than towards it).

So while we can define what happens very well under GR for "most" of the inside of a black hole, we can't actually explain what happens to objects that reach the center. There has to be some type of discontinuity whatever happens - i.e. if the object disappears, that's still a discontinuity even if it would resolve the paradox.

EDIT: Which is of course where GR and QM need to reconcile as well - as you get infinitely close to the singularity, the size scale is getting small enough that QM should be well in play.


Can they reach the center? And how long would that take? I ask because time will slow down for them as they accelerate towards the center.


Your first question is the problem: they must always be moving towards the center - which implies they must be getting closer to it. Because if they can never actually reach the center, then in 4D spacetime they're no longer moving towards it - they'd be traveling parallel with it.

Which would imply that singularity isn't a singularity - i.e. the "hole" it makes would in fact be highly curved spacetime in one dimension, but completely flat in another (i.e. it would be a cylinder).

Which creates a whole lot of weird infinities in the system: i.e. a black hole becomes a finite bounded volume on the outside, but contains an infinite amount of space on the inside (since no matter how much you distort spacetime, the molecules of whatever falls in can just pull themselves together more tightly provided the distortion doesn't happen too quickly).


Time doesn't slow down for them. They reach the center in their own time as normal. We never see them reach the center.

You can use Kruskal coordinates and other tools to understand this.
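For a sense of scale, the textbook Schwarzschild result is that the proper time experienced between crossing the horizon and reaching r = 0 is at most tau = pi*G*M/c^3. A back-of-envelope sketch (the masses below are illustrative round numbers):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def max_proper_time_to_singularity(mass_kg):
    """Upper bound on proper time (seconds) from horizon crossing to r = 0
    in a Schwarzschild black hole: tau = pi * G * M / c^3."""
    return math.pi * G * mass_kg / C**3

tau_stellar = max_proper_time_to_singularity(10 * M_SUN)    # a stellar-mass hole
tau_sgr_a = max_proper_time_to_singularity(4.3e6 * M_SUN)   # roughly Sgr A*'s mass
```

For a 10-solar-mass black hole that's a fraction of a millisecond of subjective time; even for a supermassive one like Sgr A* it's on the order of a minute.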


Right, time doesn't slow down for them, but it would appear so to an outside observer. So would the "experience" of a particle sent in be that it circles closer and closer for some time on the order of years, and at some point when it is arbitrarily close to the center it just pops out some billion or trillion years later once the black hole has evaporated? Assuming it isn't destroyed and somehow can experience its own time.


It would be more than a few trillion years I think, but that is the thinking unless you are a certain contentious Berkeley astrophysicist.

The particle does see itself “reach” the black hole though. And so would an outside observer if they had infinite time.

The black hole dissipating sure would make some of the experience odd. I would need to revisit my notes to make a claim about that, but discontinuities like that aren't uncommon.


Singularities aren't believed to be real by physicists, to my understanding. If true, that would mean their existence in GR is an error of the formulation or an error of comprehension.

If you look at the tangent lines facing toward the middle of the torus they would point to a single middle point, a singularity. But if you follow those lines around the surface they would never get to that extrapolated point of singularity. Probably nonsense, but the 2d creatures can't see in 3d and all that philosophy stuff.


Singularities aren't real in what sense? Do you mean that you can deal with some of them using Penrose diagrams and other tools?


They aren't real in the physical sense. Physicists do not believe that the singularity is a real physical phenomenon at the centre of a black hole; it is only an artefact of the incomplete maths. When a singularity appears in the math of a theory, it is seen as an error or incompleteness in the theory. GR breaks down and fails to describe what happens at the centre of a black hole. It's like how "divide by 0" doesn't actually equal "infinity": its answer is undefined.


> They aren't real in the physical sense.

In the spirit of Oppenheim's CQ (classical gravity, quantum matter) work which is discussed in the fine article at the top, I'll say that your first sentence is a bit too strong. Curvature singularities (in the Kretschmann (or other curvature scalar) divergence sense) aren't ruled out by astronomical observation. Indeed, practically no observational data sheds any light on the question. What is easily ruled out is Schwarzschild black holes (known black hole candidates all have significant angular momentum) and Kerr black holes (Kerr & Schwarzschild are eternal without beginning while our universe appears to have a finite age; they exist in an energy-free and non-fluctuating vacuum rather than in a local region full of at least CMB photons but also gas and dust and nearby stars; and they exist in a larger volume filled with stars, black holes in their host galaxy, galaxies with other black holes, and so on).
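To make "curvature singularity in the Kretschmann sense" concrete: for the (admittedly unrealistic, per the above) Schwarzschild solution, the Kretschmann scalar is K = 48 G^2 M^2 / (c^4 r^6). It diverges as r -> 0, but is perfectly finite - and for large black holes tiny - at the horizon. A quick numerical sketch (the Sgr A* mass is a round illustrative number):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

def kretschmann(mass_kg, r):
    """Kretschmann curvature scalar K = R_abcd R^abcd for Schwarzschild:
    K = 48 G^2 M^2 / (c^4 r^6), in units of m^-4."""
    return 48 * G**2 * mass_kg**2 / (C**4 * r**6)

M = 4.3e6 * M_SUN                   # roughly Sgr A*
rs = schwarzschild_radius(M)
k_horizon = kretschmann(M, rs)      # finite: nothing locally dramatic at the horizon
k_deep = kretschmann(M, rs / 1e3)   # grows like 1/r^6 on the way to r = 0
```

Dividing r by 1000 multiplies K by 10^18, which is the divergence the theorems are about - and the horizon value shrinks as M grows, which is why crossing the horizon of a big hole is locally unremarkable.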

At the following link is Roy Kerr giving a 2016 invited talk where he points out that despite the success of the metric that carries his name, especially for round spinning bodies (planets, stars) and even the then-current evidence favouring Kerr as a good description of large black holes (evidence since then has also been supportive), nobody should feel comfortable about the Kerr interior solution because there will be matter inside an astrophysical BH that is for practical purposes Kerr for outside observers. <https://youtu.be/nypav68tq8Q?t=2884> (around the 48 minute mark).

This is not to say that General Relativity is wrong, merely that GR itself depends on a solution to the Einstein Field Equations (EFEs) for a given spacetime, and the black hole spacetimes commonly known by non-specialists (including physicists who aren't relativists) have features which are unphysical and which actually matter. There are lots of somewhat recondite solutions to the EFEs which look like Kerr or Schwarzschild black holes to families of observers, but which have very different geometrical structure (these model black holes might have formed a finite time in the past by collapse of matter, for instance, or they might couple to a more realistic expanding spacetime with gravitational radiation from distant sources sloshing around).

Known black holes (rather, things that in telescopes etc are for all practical purposes black holes) are in such a complicated environment by comparison that we simply do not have an exact solution for the Einstein Field Equations for any of them. We are stuck with approximations, and those approximations often don't even solve the EFEs (even numerically) but rather some cut-down version.

The Raychaudhuri focusing theorem and similar results make it pretty clear that singularities form rather generically in fairly generic curved spacetimes equipped with matter. Penrose has raised comparable arguments. Our actual spacetime is not really generic on the whole, so we look at small pieces at a time and hope we are right in our guesses about how we can assemble multiple small pieces into an improved approximation of our universe. Those small pieces are calculated to be riddled with singularities for reasons related to Raychaudhuri, but that's an artifact of the construction of the small pieces as solutions of the EFEs, how we truncate those solutions, and/or how we stitch them together (e.g. Darmois-Israel).

Maybe what singularities arising in commonly-used black hole solutions of the Einstein Field Equations tell us is that our understanding of matter is off. Indeed, the whole article at the top is about a programme to study how quantum matter influences the gravitational field ("back-reaction"). For all we know, real matter that remains outside a black hole and does things like inverse Compton scattering (as well as matter that plunges inward) blocks the formation of a singularity inside. Part of the motivation for quantum gravity (CQ being a flavour thereof) is being able to extract observables that can answer that question.

> Physicists do not believe ...

I'm pretty sure many physicists hope that nature doesn't produce singularities mostly because that makes it harder to ask how parts of the universe evolve (solutions to the EFEs become infinite at a singularity, but we can sorta work around the supposed singularity with Bowen-York punctures, dynamical excisions, adaptive mesh refinement and other techniques). I'm also pretty sure that anyone with a background that supports the formation of such a belief (or its opposite, or some third choice) knows that today we just don't know how to prove much (even in principle) about the interior of black hole candidate objects in our sky.

Conversely, I do not believe there are singularities in those astronomically-observed objects, but that's because there isn't much evidence one way or another. In practice it really doesn't matter because the interior doesn't really matter much in practice, and if singularities are somehow observed we will cope with them. As to "somehow observed", quantum gravity phenomenology researchers will tell you that we would be lucky to catch the leading order quantum correction to GR with present-and-near-future observational ability, while theorists will then tell you that the singularity-or-not answer probably depends on the next-to-the-next-to-the-leading order. Consequently we have practically no way to distinguish among quantum gravity theories (including CQ) which are known to reproduce the successes of General Relativity in the neighbourhood immediately around our own planet.

(Indeed, the gravitational field of Earth barely needs General Relativity, let alone quantum corrections to that, and there are lots of things we don't know about the deep interior of our own planet. As far as anyone can tell using a post-Newtonian formalism, one only needs the leading order (1PN) to fully describe all present measurements of gravitation around us. Earth's gravity is essentially a barely-perturbed Kerr (exterior) metric (see e.g. Soffel & Frutos 2016). And that's only thousands of metres to thousands of kilometres away. So not knowing about the deep interior of things at minimum thousands of light-years away doesn't really disturb me, even as we go up to 3PN+ (see e.g. Clifford Will's 2011 "On the unreasonable effectiveness of the post-Newtonian approximation") based on our long-distance observation of the glowing and/or opaque stuff outside of them).


General relativity requires infinite time (as measured by a distant observer) for singularities to form. A fully formed singularity is a mathematical construct which is assumed into existence; it doesn't correspond to anything physical.


> it’s just a field like any other — and therefore we should quantize it.

So, trade dogma for tradition!


I believe you are misparsing that sentence. The dogma and the tradition are the same thing.


Dogma and tradition are two very different concepts. Dogma is forced down from "above" and tradition is consensual. Tradition is "organic"; dogma is prescribed.

Tradition: "We chase a wheel of cheese down Cooper's Hill in Gloucestershire and invite severe injury, for a laugh" - won by a somewhat concussed Canadian lass this year, well done her.

https://www.theguardian.com/food/2023/may/29/woman-wins-coop...

Dogma: "This is what we believe". Here is an example - https://www.catholicnewsagency.com/resource/55423/the-four-m...

Dogma is a religious thing. Originally Catholic but seems to have spread further.

Tradition is something that is done because that is how it was done. "Was hal" - be you hale ... that's where "wassail" came from.


Skinner_ is saying that the passage nomel read as describing an alternative to the dogma - i.e. as "tradition" - was in fact describing the dogma itself, not an alternative to it.


I would offer a slight alteration to your definition of "dogma". It's a Greek word and it was the name of wooden cubes that served as the markers of the edge of someone's property. It's more accurate to think of dogma as "boundaries" than as proscriptions.

I am saying this as an Orthodox Christian, so this is probably a different view than the view of the Roman Catholic church that you are probably more familiar with. They have definitely gone more towards the "top-down" approach than the Orthodox Church since the Great Schism in 1054.

What I mean by this is that the Roman Catholic church has a LOT more dogmas and they are much more specific than the Orthodox Church. The Orthodox Church has very few dogmatic statements and a much different view of the canons of the church.


Well we could go down a bit of an unproductive thread here! But let's not.

I've never heard of these wooden cubes and that needs looking into. I do know that in Germany, property boundaries are formally marked out or at least they were when I lived in West Germany on and off in the 70's-80's. In Britain we have red lines marked on really shit plans, registered with the Land Registry. Even so, it works.

I'm a Protestant (CofE). I was Confirmed by the Bishop in Jerusalem - we were stationed in Cyprus at the time (1996). Mind you it has suddenly occurred to me that there are probably several of them (denominations) and it was a long time ago ... (search) - https://en.wikipedia.org/wiki/Anglican_Diocese_of_Jerusalem I think it must have been this bloke: https://en.wikipedia.org/wiki/Samir_Kafity . Both he and his predecessor are/were Palestinian Arabs.

Well there's a thing. I recall a lovely man with a massive smile and his robes and mitre were coloured red, orange and yellow - really bright and very striking. I doubt our local lot could carry off that look!

My wife is a Roman Catholic. We married in a CofE church. Our Priest's wife very kindly sang Ave Maria for us. Neither of us suffered any calamity, you'll be glad to hear.

I'm not a fan of dogmatism of any sort. I will grant you that property demarcation is a great idea - it avoids arguments later on 8)


Dogma is tradition formalized.


In General Relativity, matter curves spacetime, so instead of saying that all the “other fields” act within spacetime, perhaps “the metric [tensor] is determined by the matter and energy content of spacetime?” If geometry is described fully with a (tensor) field, perhaps we’re really just looking for a gauge theory but then that would be QM. Now I’m trying to imagine a tensor computing some infinite curvature supposedly necessary in GR.


Why would gravity behave differently than the Higgs field?


Any reason you are specifically picking the Higgs field in your question? I am asking, because this sounds a bit like you are riffing on a common misconception that the Higgs field has something to do with gravity, which is not the case. The interaction with the Higgs field is the reason some (only some) of the particles have a mass, but explaining gravity does not need to have anything to do with the Higgs field.

But yeah, it is fair to ask why gravity should behave any differently than any other quantum field -- in the context of Quantum Field Theory (one of the two incredibly successful theories of physics) that is a great question. One handwavy reason it seems different is that in General Relativity (the other incredibly successful theory of physics), gravity has to do with the geometry of space and time, not with what other fields exist in that space and time (and as such is explicitly different than the other fields).


My understanding is that the Higgs field should be simpler than gravity because it's just a static value whereas gravity is SU(1)? At least to me it seems logical that if the Higgs is quantized, surely more complicated fields would be as well?


SU(1) is the trivial group. SU(n) is the group of unitary nxn matrices with determinant 1. There is only one 1x1 unitary matrix with determinant 1, and in fact there is only one 1x1 matrix with determinant 1 period, namely, the 1x1 matrix whose only entry is the number 1.

So, to associate something with a symmetry group of SU(1) would be effectively the same as not associating it with a symmetry group.
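A quick numerical restatement of the point: a 1x1 unitary matrix is just a phase e^{i*theta}, and its determinant is that same single entry, so imposing det = 1 leaves only the identity:

```python
import cmath
from math import pi

su1_elements = set()
for k in range(360):
    theta = 2 * pi * k / 360
    z = cmath.exp(1j * theta)   # an arbitrary 1x1 unitary: a phase on the unit circle
    # The determinant of a 1x1 matrix is its single entry,
    # so the det = 1 condition means z itself must equal 1.
    if abs(z - 1) < 1e-9:
        su1_elements.add(1)

# Only theta = 0 passes: SU(1) = {1}, the trivial group.
```

So an "SU(1) gauge symmetry" constrains nothing; the smallest nontrivial special unitary group is SU(2).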


The way the quote reads, it sounds like this physicist is saying that gravity is not a field at all.


Perhaps it’s a fictitious force, just like the centrifugal force. :)


(Not a quantum physicist. Please correct me if I am misunderstanding.)

From what this article says, his assumption is that gravity is classical, but "fuzzy" or probabilistic: you can't precisely measure the gravitational field of a sufficiently small object.

In recent years we've seen progressively bigger objects being put in quantum superposition. The theory from this article is incompatible with this process continuing indefinitely. If and when we create a sufficiently big object that we are able to entangle with something else via gravitational interaction, this would immediately disprove the theory.

So the good news is that this theory is clearly falsifiable, possibly even without creating a particle accelerator the size of the solar system.


I’m a quantum physicist and yeah you’re totally right. It’s already obvious that a quantum theory of gravity is needed because we need a way to talk about superpositions of spacetime curvatures. Either QM is wrong in general or we need a way to treat spacetime that is in a quantum superposition

Also my old supervisor actually suggested an experiment to observe this gravity induced entanglement but it would require extremely low temperatures to not degrade due to thermal noise


> It’s already obvious that a quantum theory of gravity is needed because we need a way to talk about superpositions of spacetime curvatures.

But it doesn't seem to be that way, at least not necessarily, and that's what the article is about. What makes it obvious to you that there is no room for unification theories (of GR and QM) that don't involve a quantized version of gravity?

Note that I'm not claiming gravity is not quantized. I'm just saying it cannot be ruled out yet.


The only way it "doesn't seem to be that way" is if we determine that it's actually wrong that QM tells us that we can have entanglement between spatially separated objects. If QM is right about this, then we need quantum gravity. So either QM is wrong in some sense and/or we need to limit ourselves to a specific interpretation, or gravity must be quantum


>> It’s already obvious that a quantum theory of gravity is needed because we need a way to talk about superpositions of spacetime curvatures.

Is that because of things like the double-slit experiment they mention? A particle could be monitored via its gravitational effect on spacetime to determine which slit it went through. What if the particle's mass behaves as a mass distribution in such an experiment? Does that save classical gravity?


The problem is that it doesn't act like a mass distribution, it acts as two non spatially overlapping possibilities. I think that the only way to save classical gravity would be superdeterminism. If quantum states correspond to anything other than our ignorance, i.e. if superposition states are actual physical states of reality, then gravity will need to be quantum.


I attended a talk by Jonathan Oppenheim a while back on this subject, and my understanding based on that is that in his model something (very) roughly like this happens. You put your massive object in superposition, and it interacts with space-time and "tries" to put space-time in superposition, but since space-time is fundamentally classical (in his model), what happens is that space-time ends up in a probabilistic mixture of the different states rather than a superposition. Then the interaction between space-time and the massive object ends up pushing the object from the superposition state you tried to produce into a mixed state just like the space-time.

Essentially from the point of view of the massive object, the interaction with space-time acts as some decoherence process.


> the interaction with space-time acts as some decoherence process.

Well, the question is, can we use gravity in the same way that we use quantum processes, i.e., in a coherent way? If we can, e.g. using it to establish entanglement, then the theory you describe cannot hold.


That's definitely a good question, and one that people have thought about before. I think it's amazingly difficult to test experimentally, even compared to something like directly measuring whether a massive object in superposition is forced to decohere faster than we expect, which is already incredibly difficult.


I was taught that choosing a QM interpretation is a matter of taste. Am I understanding correctly that we do have proposed quantum gravity experiments that can falsify various QM interpretations, it's just that they are all very hard to execute?


This isn't about a QM interpretation, rather it's about a non-quantum model of gravity with some features that would make it distinguishable in principle from any quantum gravity.

Edit: I maybe semi take this back; it may actually be the case that this particular weird model of gravity would provide a way to distinguish between the interpretations of QM, because of how grossly it violates the usual assumptions of QM. It seems too implausible for me to really dig into, but I don't entirely grok how they propose melding the proposed inherent but non-quantum randomness with the rest of quantum physics in a way that still lines up well in the face of observation/collapse (or the equivalent in many-worlds). They might lean enough on assumptions about how that happens to break the observational equivalence?


I don't know about quantum gravity experiments but there are already (proposed) experiments that don't involve gravity which might help distinguish between different interpretations:

https://www.scientificamerican.com/article/this-simple-exper...


> I was taught that choosing a QM interpretation is a matter of taste

You were taught wrong


> Then the interaction between space-time and the massive object end up pushing the object from the superposition state you tried to produce into a mixed state just like the space-time.

So we'd be evolving from a pure to a mixed state?


Seems like this would show up as an equivalence principle violation, no?


That isn't obvious to me - the equivalence principle works fine in classical GR, and I'm not sure why it would break in Jonathan's model, but I'm very far from qualified to speak authoritatively about his work!


Spacetime observing massive objects.


I'm quite happy to admit that I don't know the classical-quantum CQ programme well, and certainly not well enough to hazard a genuinely informed opinion that might take the form "it's a candidate for a fundamental theory" or "it's a candidate for a better EFT". I think I can say it isn't obviously not one of those, though. Also, I'm not ready to sloganize the work beyond "it's complicated".

A good entry point to CQ is Oppenheim's [hep-th] https://arxiv.org/abs/1811.03116

"... invariant under spatial diffeomorphisms ... consistent ... completely positive, norm preserving, and linear in the density matrix ... the metric remains classical even when back-reacted upon by quantum fields ... the dynamics here, while stochastic, can leave the quantum state pure -- it is rather the classical degrees of freedom, which gain entropy."

It's a heavy couple dozen pages (part V is a mountain for mountaineers - I am nowhere near the summit - and the foothills are also hard work), though most of it shouldn't pose technical comprehensibility problems for anyone who's done QFT (GKSL is prominent [short intro <https://arxiv.org/abs/1906.04478>], and the Lindbladian is the generator of time-translations) and who has encountered Lagrangian and Hamiltonian formulations of gravitation, especially if they've glanced at Birrell & Davies. A crash read or refresher of Wald's QFTCS https://arxiv.org/abs/gr-qc/9509057 might be useful if, like me, you've mostly spent time in one of the two fundamental silos and want a bit more meat than what's in the first couple pages of Oppenheim 2018's part I. The general relativistic constraint equations are also important, and there is a good overview at https://link.springer.com/article/10.1007/s41114-020-00030-z

The meat, for me, is the potential for a better approximation than semiclassical gravity when quantum fluctuations are large: "[even if not fundamental, CQ gives] the ability to consistently study back-reaction effects in cosmology and black-hole evaporation ... [but] care should be taken, since an effective theory might violate our assumptions of Markovianity or complete positivity at short time scales or when the gravitational degrees of freedom have not fully decohered.", "This theory serves as a sandbox in which to understand issues around the quantisation of the gravitational field. After all a probability density \rho and the Liouville equation have a lot in common with the wave function \psi and the Heisenberg equations of motion. [and the rest of that paragraph just before (eqn 2)]".


When you say superdeterminism are you referring to something like Pilot Wave theory, where what appear to our measurements as probabilistic yet random interactions are merely expressions of a more complex yet non-random underlying system that we cannot, as yet, measure? (I don't even know if that's the proper description of the hypothesis.)


Superdeterminism is simply the idea that the results of all quantum experiments could have been known before performing them, assuming perfect knowledge of the state of the universe.

It's one of those things that could be true and explain all of QM but also is kind of a cop out. "Of course your two detectors are giving correlated results, they were tightly coupled 13.8 billion years ago and now they are forever linked like all things."


> Superdeterminism is simply the idea that all quantum experiments results could have been known before performing them, assuming a perfect knowledge of the state of the universe.

That's just determinism. Superdeterminism additionally posits that the results of those experiments are all correlated so as to make it appear to us as if local hidden variable theories were false. Pick the settings on two polarimeters for a Bell-type experiment by measuring the spins on photons emitted 10 billion years ago from two different galaxies, and you'll find (or rather, won't find, because it's being hidden from you) that they were arranged, long before the Earth was formed, just so as to trick you. The universe is conspiring against us, and the whole scientific project is a farce.

That, or superdeterminism is false.


Alternatively, non-local correlations (in the style of anyons) persist from the early universe.

And we’re just measuring those.


How is it a cop out?

We know two things:

- the universe was once small enough that everything was tightly coupled

- non-local phenomena occur

Insisting that our beliefs reflect an ideology (e.g., that you can segregate off portions of reality to study in isolation) which seems contrary to observed reality (e.g., the points above) is religion — not a scientific investigation of the universe.


Superdeterminism sidesteps interpretational questions by arguing against reductionism, which is pretty much the core of all physics. Where is this hidden information stored? What causes the specific outcomes in a destructive measurement?

To say that QM is in fact a local hidden variable theory because everything was once touching billions of years ago, without positing any mechanism for it is a cop out.

At least instrumentalism admits that there are interpretational questions, even if we don't need to answer them.


Not just because it was touching:

- because it was touching;

- and because we observe non-local information stored in braiding of the wave equation; even when we separate the constituent (quasi)particles.

The belief that you can reduce a system by decoupling a particle or system from outside influences is at odds with that second fact. We’re storing the hidden variables in those non-localities; when you measure a system where both particles are part of a non-local phenomenon, they’re coordinating through that non-locality.

What has literally no proposed mechanism is suggesting such non-local quasiparticles disappeared — by what specific mechanism did primordial anyons dissipate?


>- non-local phenomena occur

No. Also, superdeterminism doesn't have non-local phenomena. The idea is that measurement results are predetermined so as to look quantum.


First, as a religious person: no, contrary to what you seem to be saying, dismissing superdeterminism isn't "religion".

Second, dismissing superdeterminism is not, as you seem to say, "contrary to observed reality".

The belief that one is capable of forming ideas that bear any resemblance to reality is justified by the fact that, for one's beliefs (or lack thereof) to have any use, they would have to be true.

Things like the no-speed-up theorem (which I admit I'm not super familiar with), and other things of that sort, lead me to expect (though I don't have a full argument for this) that it isn't possible, for computational complexity reasons, for the early universe to be such that it, in effect, encodes predictions of what future measurements people will make, in a way that makes the signals determining which measurement directions get used correlated in the way that superdeterminism requires.


Except that, we know such macro correlations exist — for the two reasons I outlined.

We know that the universe was once highly correlated, due to its size, and we know that such correlations form super-macro structures, from galaxies to filaments. We further know that knots in the wave equation are a non-local phenomenon, such as with anyons. The minimal assumption is that any particle we encounter is part of such a non-local phenomenon.

To assume that we can isolate regions of reality uncorrelated with those two phenomena is to assume an ideological belief, unsupported by evidence.

That you didn’t discuss my actual objection to dismiss it with generalities is very telling.


Superdeterminism is fine tuning, not a mere correlation. A mere correlation is hidden variables theory.


Superdeterminism is a loophole in Bell's theorem that allows local hidden variables, so there could just be a fully deterministic theory that determines everything, including your detector settings, in advance. Classical physics has generally assumed that we at least have free choice of what to measure. If we don't, we can trivially retain classical gravity.


These distributed "particles" are passing through both slits simultaneously before somehow transferring their energy at a particular position at the target screen. If gravitation is non-quantum, I suppose I'd expect that a gravitational measurement would show it passing through both slits, half of its total mass through each.


Do you know if Bohmian mechanics with a particle ontology (still with non-local "hidden" variables ofc) or GRWf has potential to save classical gravity as well?


To my knowledge, it does not. There are various items to address with only item 4 being currently with no clear direction:

1) Bohmian mechanics seems to require some kind of simultaneity. Various proposals have been put forth for making natural foliations, possibly using the wave function to do so, that allow one to evaluate the positions of all the particles at a given time so as to know which configuration point to use in obtaining the velocity of the particle. This is possible, but it does not feel philosophically satisfactory yet. GRWf does not require a foliation to be relativistic which is a nice feature.

2) Quantum field theory naturally models particle creation and annihilation. Most QFTs are presented in a mathematically incoherent way. While conclusions can be made via renormalization, etc., deriving an actual dynamics from that machinery is tricky. By not doing perturbations but rather defining the operators by taking into account the shifting of the probability from one sector of n particles to another of n+1 particles, QFTs can actually be made to make mathematical sense. This has been accomplished in some of the simpler models. It is not a problem whatsoever to define a Bohmian evolution where particles appear and disappear in a probabilistic fashion. The difficulty is purely in having a properly defined wave function evolution and that is on its way to being solved.

3) One needs to define wave functions, their evolution, and the particle evolutions in a curved space-time. This is not a problem whatsoever. It is very easy to translate and interpret what we need into differential geometric language. QM has issues with the usual observables/operators translating (such as a momentum operator), but since these are derived concepts in Bohmian mechanics, no fundamental difficulty arises.

4) In general relativity, the mass distribution is part of the evolution of the space-time metric. The mass is based on where the particles are. To date, the wave function tells the particles what to do, but the particles do not have any impact on the wave function. The wave function is impacted by the space-time metric. Also, there are some suggestions that mass might be entirely a part of the wave function and not associated with the particle, i.e., the particles are really just undecorated points moving about.

Basically, gravity and the wave function need to work it out and the particles will then be guided by both as the space-time metric is what takes the gradient of the wave function or, for Dirac style motion, something analogous to a square root of the metric is floating around. Bohmian mechanics does have the advantage that it just has to be concerned with the evolution of the particle lines and not having to figure out how to define various observables.

In other words, a known space-time metric interfaces just fine with Bohmian mechanics, but figuring out how to evolve the space-time metric is still an open question.

At the current time, I am not aware of any novel ideas coming from a Bohmian or GRWf point of view towards resolving the problem of gravity.

If interested in a good discussion of all these things and much, much more, I highly recommend the recent book Foundations of Quantum Mechanics by Roderich Tumulka which covers what the title says, but also discuss Bohmian mechanics and other interpretations.


Thanks so much for your enlightening comment!

> The difficulty is purely in having a properly defined wave function evolution and that is on its way to being solved.

Interesting, could you provide some references?


Here are some papers:

Avoiding Ultraviolet Divergence by Means of Interior-Boundary Conditions https://arxiv.org/abs/1506.00497 This is perhaps the first of the papers and so may be a good place to start.

Bohmian Trajectories for Hamiltonians with Interior-Boundary Condition https://arxiv.org/abs/1809.10235 This is the Bohmian part of the story.

Multi-Time Wave Functions https://arxiv.org/abs/1702.05282 This explains the trickiness of having interactions with multi-time wave functions (space-time suggests having multi-time wave functions and no single time).

Creation Rate of Dirac Particles at a Point Source https://arxiv.org/abs/2211.16606 This seems to suggest that there is a kind of spiral approach from/to the point of creation/annihilation.

---

The authors have done a variety of papers on this. A key phrase they use is Interior-Boundary Conditions.

They also released a book based on a course covering some of these ideas: Multi-time Wave Functions, An Introduction https://link.springer.com/book/10.1007/978-3-030-60691-6


Thanks so much!


What would such an experiment look like? If it makes it easier, perhaps an experiment that's even harder (or much, much harder) to realize but easier to explain.


It's pretty simple. You basically have two suspended mirrors on wires, and you shoot lasers at each mirror. Due to gravity there will be an interaction between the mirrors. There are then two things you can probe: first, the correlation between the two reflected lasers, and second, if you can achieve the extreme temperature requirements, entanglement between the two reflected lasers caused by the gravity. Now, the former doesn't really imply that gravity is quantum; it just shows that quantum correlations can be mediated by gravity, which may also be true classically. However, the much-harder-to-show entanglement would definitely require quantum gravity. The first step is still good though, because no one has yet managed to demonstrate quantum correlations via gravity. The experiment would also have two more mirrors to turn the laser beams into cavities, increasing the interaction strength. The setup would look a bit like this:

             _   _
             !   !
    ---|-----|   |-----|---
where | are mirrors, ! are suspension wires, and - is the laser beam
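For a sense of the magnitude involved, here's a back-of-envelope estimate; the masses and separation are my own illustrative guesses, not values from any actual proposal:

```python
# Gravitational force between two small suspended mirrors, treated as
# point masses. All figures below are illustrative assumptions.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

m1 = m2 = 1e-4  # 100 mg mirrors, in kg (assumed)
r = 1e-3        # 1 mm separation (assumed)

F = G * m1 * m2 / r**2  # Newton's law of gravitation
print(F)  # ~6.7e-13 N
```

That sub-piconewton force is why the cavities (and the extreme isolation and temperature requirements) are needed at all.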


You might also be interested in Bose et al. (2017), "Spin Entanglement Witness for Quantum Gravity" https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.11... aka [hep-th] https://arxiv.org/abs/1707.06050 which proposes a way to "certify gravity as a quantum coherent mediator, through simple spin correlation measurements".


Those reflected beams would at best be very very weakly entangled, right? I'm not sure what's the name for it, but if you arranged the quantum state of the two beams into a 2x2 matrix then the determinant would be just a tiny bit non-zero.


Yes


> possibly even without creating a particle accelerator the size of the solar system.

Sure, you just need to entangle two objects big enough to exert a noticeable amount of gravity on one another but somehow do not interact gravitationally with the rest of the set up.

Anyway, let's try a cat sized object first, then we'll finally know if Schroedinger had a point.


If they are close enough to each other and far enough from everything else, then why not.

The smallest objects for which we measured their gravitational interaction weighed just 90 mg (https://arstechnica.com/science/2021/03/researchers-measure-...).

The biggest object put in quantum superposition weighed around 1 mcg (https://physics.aps.org/articles/v16/s45).


So 90,000 times smaller. At first glance you made it look 90x smaller because I couldn't tell if "mcg" was a typo for "mg" or not (without clicking the link). I've never seen anyone use "mcg" to mean "microgram" before and I feel like it's misleading. Use "ug", or "μg" if you're feeling fancy.


Use of "mcg" for micrograms is common practice in medicine. I always assumed it was because it's easier to type than μg, but it seems a medical body recommends it because μg is too easily mistaken for mg:

https://www.ismp.org/sites/default/files/attachments/2017-11...


> μg is too easily mistaken for mg

Is that some joke about medical writing?

Anyway, I just noticed that table doesn't have rules for nanogram. There is also no mega-anything.

(On a serious parenthesis, I think I actually understand their rationale; but changing the abbreviation of only one of them is still confusing.)


Before Unicode, 'μ' was typically entered as an 'm' in a Greek font. This can go wrong in several ways, like if you converted the document to plain text, or if your laser printer didn't have the font and it substituted a regular font, you're suddenly off by a factor of 1000. 'mc' is ugly but safe.


What time period, system and region are we talking about? For some characters using a specific font was certainly necessary but for µ? It is in the original IBM PC character set from 1981 and could easily be put into plain text files. I would guess that most extended ASCII character sets contain it but I don't know for sure. Every [German] keyboard I ever used had it on the M key.


I remember having to get a µ that way using FrameMaker on SunOS 4 in the mid-90s.

Mac had a µ from the beginning. And some PC code pages did. But PCs in Canada (where I'm from) used code page 850, which had an Á (for French) where µ would be. Just the sort of thing that would hose you if you tried printing a Mac document on a school printer.

Here's a howto article that suggests using an 'm' in the Symbol font in Word: https://www.officetooltips.com/word_2016/tips/how_to_insert_...


Interesting. There's that too.

I assumed it was because of the phonetic similarity.


Yes, I meant micrograms.

90'000 times sounds like a lot, but just a few decades ago the biggest objects in superposition were individual particles and atoms, and the crystal from the new study is 10^16 times bigger than that.


That's not it. The mass difference between the entangled states would need to be over 90 mg, suggesting a much, MUCH heavier entangled object.


I think the comparison you're giving here is quite misleading. You're implying that we "only" need to scale up the objects we put in quantum superposition by tens of thousands of times. But that's not all there is to it. You'd have to put the object in a kind of superposition that affects its mass enough to be measured. For instance, having the whole object be in a superposition of one position and another position several centimeters away. Or being in a superposition of an extremely high energy state and an extremely low energy state, with an energy difference whose mass equivalent is on the order of 90 mg. Given E=mc2, that's an insane amount of energy.

Seems to me that such an experiment is borderline unthinkable with present day technology.
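To put a number on "insane" (standard constants, my own arithmetic):

```python
# Rest-mass energy of a 90 mg mass difference via E = m c^2.
c = 299_792_458  # speed of light, m/s
m = 90e-6        # 90 mg, in kg

E = m * c**2
print(E)  # ~8.1e12 J, i.e. roughly the energy of 2 kilotons of TNT
```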


I do understand that there's more to it than just closing the gap of 90'000x. That said, the fact that these numbers are on roughly the same order of magnitude makes me cautiously optimistic.

If anything, to me both measuring the gravity of 90 mg weight and putting a crystal that you could see with your naked eye into quantum superposition already seem borderline unthinkable.


90 mg... that is beyond gobsmacking


I honestly can't tell if you're impressed we've measured the gravitational interaction of something that small, or staggered by our inability to measure a feature of things so large that you can easily hold them in your hand.


"The current best estimate for the mass of Earth is M⊕ = 5.9722 × 10^24 kg" (wiki) so ~6 × 10^27 grammes, so 90mg ~ 28 orders of magnitude smaller. So very, very much the former.


Assume a point sized spherical cat...


I think if it's point sized, you're allowed to imagine any shape of cat. :D


For a uniform gravity field it is essential that the cat is spherical.


It's also crucial that the point-sized spherical cat is not spinning, because that's a quick way to get a ring-shaped cat.


All black holes I've run into are cat-shaped, yet they have uniform gravity fields around them.


That's strong evidence that these are spherical cats. I take it that you are unaware that cats creating a non-uniform gravity field will deviate from their normal spherical state. For instance, cats on planet earth will have all kinds of non-spherical bits sticking out and this in turn gives rise to our distinct and non-uniform gravitational field.


I wonder if LIGO’s mirrors would be useful for testing this type of theory. They’re big, and if they couple classically to the massive objects around them, I would expect an effect that could be detectable.


I don't know, but I doubt it. That said, just now at the QG2023 conference a member of the Aspelmeyer group at IQOQI Univ. Vienna is presenting their work on an experiment where a mirror is mounted on a Cavendish torsion pendulum, and in order to do calibration and exclude sources of noise they have borrowed some protocols from LIGO. (Their lab is in central Vienna, near a couple of tram lines, and a 90-tonne tram passing 70m away induces a gravitational attraction on the same order as that of their test mass! When they test at night, they can also see the night buses in their spectral data.)

So there are similar issues with noise, but the mirror they're using in Vienna is small and light enough to suspend from a thin wire.

There are different things being detected: LIGO is interested in gravitational waves from the quadrupole moment of fairly-quickly-mutually-orbiting astrophysical binaries with many-light-second separations; this experimentalist (who just disclaimed deep theoretical familiarity as opposed to focusing on developing experiments) is looking for measuring the gravitational influence of very small masses (~microgram, with a move to a less-busy test location) and across very short distances (and his collaborators are interested in putting ever larger (but still << milligram) masses into quantum states).

PS: the speaker was Hans Hepach, a PhD student in the Aspelmeyer Group.
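For a sense of why a passing tram registers at all, here's a crude point-mass estimate using the figures above (treating the tram as a point mass is my simplification):

```python
# Gravitational acceleration at the apparatus due to a passing tram,
# modeled as a point mass (a rough simplification).
G = 6.674e-11  # m^3 kg^-1 s^-2
M_tram = 90e3  # 90 tonnes, in kg
r = 70.0       # distance, in m

a = G * M_tram / r**2
print(a)  # ~1.2e-9 m/s^2: tiny, but visible to a sensitive torsion pendulum
```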


> somehow do not interact gravitationally with the rest of the set up.

And how would you do that?

If we had a way to shield a region of space from gravitational influence from something else we'd have a very useful technology


No force can be shielded fully, but that's fine, because we can make the other forces smaller using distance, time averaging, averaging over many repetitions, etc., like we do in other physics experiments.

A scattering experiment in which we send entangled masses to fly by each other and measure their interactions sounds very possible, even if it is very hard technically.


Even for small particles superposition can span arbitrarily large distances. I think it will be difficult for classical gravity to gloss over such distances. And the rationale is strange: information paradox is pure math unrelated to reality, it doesn't warrant a new theory.


In the decades since the establishment of these theories, both the continuous (classical) spacetime of general relativity and discrete matter of quantum mechanics, the world has changed in rather significant ways.

One of those has brought forth advances in technology which led to creating virtual world geometry with continuous function derivation which then gets converted into discrete voxels in order to track state around free agent interactions.

Often, to save memory these systems only convert to voxels when the free agent is observing or has interacted with the relevant geometry.

We sit in a massive universe that we can observe but cannot interact with over 99% of, because it's expanding away from us faster than local information can travel from us to it.

Within the local area where we can interact with things, they behave as if continuous until free agents interact with them when they appear to collapse to discrete units, but if the information relating to those interactions is erased, they go back to behaving as if continuous.

Maybe the relationship between apparently continuous spacetime and quantized matter is much simpler than it seems to those ignoring what's currently being built within the world they are studying so closely.


> In the decades since the establishment of these theories, both the continuous (classical) spacetime of general relativity and discrete matter of quantum mechanics, the world has changed in rather significant ways.

It was essentially a historical accident that the first systems where quantum mechanics was studied extensively (black-body radiation and atomic spectra) were ones in which quantum effects ended up discretizing something which was continuous in classical mechanics. Quantum mechanics does not generally impose, require or even match having things be discrete.

You can very easily have quantum systems where the relevant quantities are continuous, rather than discrete (wiki link: https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...). Simple theoretical examples like the particle living on a 1d line or in 3d space are easy to understand, there are many (many many many) more complicated examples.

TLDR: there is no fundamental link between things being quantum-mechanical and things being discrete. Quantum mechanics makes some things discrete, but not everything.


>virtual world geometry with continuous function derivation which then gets converted into discrete voxels in order to track state around free agent interactions

What?


Video games, basically: you only render what the user is looking at.


all quantities in virtual worlds are always quantized.


A simulation produces philosophical zombies, yet you are not one.


You are asserting that there is something fundamentally "unsimulatable", un"real" about consciousness.

If we are going to give up and turn to religion for our understanding of the universe then I pick "last tuesday-ism". It physically cannot be distinguished from any other compatible interpretation of the data.


> "unsimulatable"

Is so at least until it is known what it is. Opinions differ on whether that is knowable.

> un"real"

As the only thing you and I will ever have direct access to, consciousness is certainly real. Opinions differ on whether it is alone in that.

> turn to religion

I personally prefer philosophy, but you do you. To me religion seems about as capable of getting to truth as physics (that is to say, not very much when taken just by itself).


The idea of a philosophical zombie is logically incoherent. The only way to even get the idea is to make unscientific assumptions such as souls or a metaphysical life force, which is itself incoherent as it can't even decide what is “real”.


Not true. Property dualism and panpsychism are neither of those things. There are other ideas as well that don’t involve souls or life forces.


Cartesian dualism vs. naive materialistic monism is false dichotomy.


I think it would be funny if we discovered that refraction is caused by the slowing of light in close proximity to mass. That is, one of the most common and observable phenomena in physics would be a quantum gravity phenomenon! (The usual explanation for refraction is that light as an EM wave causes sympathetic vibration in the electrons (and the protons, a little), which slows it down. But what if light's proximity to protons caused a multitude of miniature Shapiro delays [0]?)

0 https://en.wikipedia.org/wiki/Shapiro_time_delay


This can be refuted by observing that the refractive index isn't proportional to density, but is accurately predicted by charge mobility. Also, how refractive index changes with wavelength is related to the resonant frequencies of electrons.


Yeah, I was thinking about it, and also lasers wouldn't work if it was 100% mini-Shapiro delays. I was thinking that maybe there's a small GR component to refraction, but the experiment would be tricky. You'd need to pin the electrons in the refractor down in a strong magnetic field and then shoot some high energy photons at it, probably at least X-rays, and see if they still refract. (The high energy photons would be necessary to bring their wavelength closer to the diameter of the nucleus.)


Refraction is well understood. It is caused by interactions of the incoming wave and electron clouds of atoms.

https://en.wikipedia.org/wiki/Ewald%E2%80%93Oseen_extinction...


Ah, perhaps you are inclined to stop reading when you come across an open parenthesis?


Ok, I confess that I didn't read the entire comment. But you have a second problem, as you should explain why the conventional explanation does not hold or is canceled by your theory.


I'm already convinced[0] the effect is small, if it exists at all. However, I thought of another experiment that might be easier to perform, to see if there is any effect at all: take two crystals, as similar in depth as one can make them, made of two different isotopes, and measure the difference in index of refraction. Silicon (28 and 30?) would probably be good for this, as would laser interferometry. (Maybe the Gravity Probe B people have some extra pure-isotope wafers they'd be willing to lend?) If I'm right, then the Si-30 sample will have a slightly larger index of refraction than the Si-28 sample. Someone needs to do the math though, because if the effect is far less than, say, one layer of atoms, or a few impurities here and there, then the experiment isn't worth doing.

0 - https://news.ycombinator.com/item?id=36671605


Isotopes are not 100% chemically equivalent and they form chemical bonds with different vibrational resonances because the vibrating masses are different. You can see those differences in spectroscopic line shapes including in refraction (which is related to absorption by the Kramers-Kronig relation). My guess is that those effects would swamp out any relativistic ones by a lot, and because you can't tune the isotopic mass you wouldn't be able to isolate a gravitational effect.

In fact it appears that the isotopic dependence for silicon isotopes is known,

https://arxiv.org/abs/1105.0822

and the refractive index of Si 30 is lower than Si 28 in the IR, because of the different absorption lines.


Perfectly applicable paper, thanks! I skimmed it, but I didn't see an explanation for the lower refraction index for the heavier isotope. It had the lower RI at all wavelengths they looked at. I think it's useful to ignore the "phonon resonance" wiggle in the middle of their tested range on page 4, around 600nm, if looking for some other effect. (it's crazy that an electron that weighs like 10^-31 kg could actually jiggle a crystalline lattice to the point you can see it at all, but I suppose there's a lot of electrons, and Tacoma Narrows bridge reminds me of how powerful resonance effects can be).


The off-resonance curves look exactly like what you expect from the Kramers-Kronig relations

https://en.wikipedia.org/wiki/Kramers%E2%80%93Kronig_relatio...

which have very far-reaching effects in terms of refraction (the real part of the permittivity decays as 1/ω).

In terms of resonances, it's important to note that the nuclei themselves are not neutral either. In the IR, the time dependence of the field is slow enough that the nuclei can follow it resonantly. Those fields are much below electronic resonances, so the electrons aren't even contributing much beyond making the crystal stable!


Now I'm curious: does the refraction that happens within our eyes have anything to do with the processing and the consequent experience of sight, in a way that utilizes quantum mechanics we don't understand?


In theory it could, but in practice there isn't much about the optical portion of the eye that we do not understand; the boundary of that understanding is well behind the optic nerve. What sort of an effect are you getting at?


If I understand you correctly, Ive long thought about this as well!

Note you should clarify that by "slowing of light" you mean a lowering of the absolute value of the speed of light.

While Einstein assumed it was constant, he said speed and time were changing. It's actually simpler to say the opposite: speed and time change because the speed of light (c) changes.


It certainly feels wrong for spacetime to be quantized, because it’s fundamentally nonlinear and geometric in a way that other things aren’t, and quantized theories like string theory tend not to be inherently background independent, which seems really wrong.

At the same time, we know that a basic semiclassical + Everettian model of the world does not correctly model reality: if we run a quantum-linked Cavendish experiment, we don’t observe the gravitational effects of superposition. I.e., each branch of a quantum superposition behaves as if it is in its own spacetime. We can even run a Bell experiment linked to gravitational detectors and observe that the results are consistent with gravity being quantized.


> So if gravity is quantized, that means space-time is also quantized. But that doesn’t work,

But... How could space-time not be quantized? That would imply the existence of infinities in the structure of the universe. It is like the ultraviolet catastrophe but in space-time.


I think this entire comment thread is using the wrong definition of the word "quantized". There are two relevant meanings of this word

1. Something split up into discrete chunks (like integers vs real numbers)

2. Something which acts quantum mechanically (i.e. with superposition and entanglement and all that stuff)

In this context "space-time is also quantized" means space-time is acting like a quantum system.

Whether space-time is discrete or continuous is a wholly separate question. There is no particular evidence that it is discrete, but if you make the discretization scale small enough it theoretically could be. Most things work more nicely if it is continuous, though.


People talk about how space-time can't be quantized in the sense of the universe being made up of tiny little cells, like Minecraft, because there is some evidence against that hypothesis.

An alternate hypothesis, that a particle is "virtual" and its position/momentum "vector" contains a finite amount of information, is actually both extremely plausible and explains odd paradoxes like the Heisenberg uncertainty principle. That would result in "quantized" spacetime for the same reason floating point numbers are imprecise.
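The floating-point analogy can be made concrete: the gap between adjacent representable doubles (the "ulp") grows with magnitude, so positional "resolution" would depend on how far from the origin you are:

```python
import math

# For IEEE-754 doubles, the spacing between adjacent representable
# numbers scales with magnitude: a finite-information "position" would
# be quantized on a scale that depends on where you are.
print(math.ulp(1.0))   # 2**-52, about 2.2e-16
print(math.ulp(1e16))  # 2.0 -- not even whole integers are representable here
```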


I was told that this is exactly what the Bekenstein bound ultimately means. It seems reasonable to me, but then why is that not conclusive?

Also if anyone who knows wants to explain why the Bekenstein bound is even a thing, I'd love to hear that too.


Exceeding the Bekenstein bound would mean having less-than-equilibrium free energy. See https://arxiv.org/abs/1802.07184
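For a sense of scale, here's the bound evaluated numerically (my own arithmetic, using the common form I ≤ 2πRE/(ħc ln 2) for the information content in bits of a system of radius R and energy E):

```python
import math

# Bekenstein bound in bits: I <= 2*pi*R*E / (hbar * c * ln 2),
# worked for the textbook example of a 1 kg system of 1 m radius.
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 299_792_458         # speed of light, m/s

R = 1.0       # radius, m
E = 1.0 * c**2  # rest energy of 1 kg, J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(bits)  # ~2.6e43 bits
```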


Hmm so that would be a system being out of equilibrium but in a "negative" direction, which is nonsense since any amount of non-equilibrium is a positive amount of free energy.

So, I don't understand any of the math in that paper but is there any easy/intuitive way to explain why a higher information density would require negative free energy?

I guess I need to understand the relationship between information density and free energy first. Hmm.


> So, I don't understand any of the math in that paper but is there any easy/intuitive way to explain why a higher information density would require negative free energy?

Not really, no. If you don't have the equivalent of a good undergraduate education in physics very little about QFT is going to be accessible to you.


I guess I'll have to wait for the PBS SpaceTime video then...


If space is not quantized, but everything in it is, there is no reason for a catastrophe, but the behavior is still different from that of a quantized space.

Just like the ultraviolet catastrophe was solved by quantizing the photons, not the energy levels.

(What I don't know is if there is some space-space interaction that can't be quantized at the "interaction" level instead of the "space" one.)


For the layman, what exactly is going to theoretically "explode" if spacetime is continuous?


The article discusses this question actually, about the apparent incompatibility between classical and quantum systems. Basically, there's a fundamental inconsistency where you can detect a particle's position gravitationally as it passes through a slit in the double slit experiment, which destroys the quantum properties of "passing through both slits" that leads to interference patterns.


Oh no I get that, but two comments above inciampati was suggesting that we will end up in a situation where some variable will have to have an infinite value. Like, famously, with the ultraviolet crisis: https://en.wikipedia.org/wiki/Ultraviolet_catastrophe

Detecting (something about) the particle through gravity would at best remove the interference, not create an \infty somewhere in the model, implying that the model is self-inconsistent.


> Oh no I get that, but two comments above inciampati was suggesting that we will end up in a situation where some variable will have to have an infinite value

The black hole information paradox from that article presumably fits. The conclusion from GR is that no information can escape, which is ultimately incompatible with QM, and that conclusion ultimately depends on the infinite density of the singularity.

I think the more charitable reading is that we'll find situations where either no sensible calculation can be done, or the sensible calculations we do churn out nonsense. Divergence in the UV catastrophe was an example of that, and Baez covered more here:

Struggles with the Continuum, https://arxiv.org/abs/1609.01421


Problems 1-3 there are caused by point particles, not by space. They mention the ultraviolet catastrophe was caused by continuous space, but it was actually caused by non-quantized radiation: the resolution wasn't classical electrodynamics with discrete space, it was quantum electrodynamics with continuous space.


I'm not sure what "problems 1-3" are referring to here, but all of these problems fundamentally trace their roots to issues with the foundational assumption of continuity.


Per numbered titles:

1 Newtonian gravity

2 Quantum mechanics of charged particles

3 Classical electrodynamics of point particles

They explicitly mention they use point particles. The problem in 1 (Newtonian gravity) is an especially egregious example. They model planets(!) as material points and see how they come infinitely close to each other. The author does mention these problems are pure math unrelated to reality.


Zeno would win. And though he is thousands of years dead at this point, I'll be damned if I'm going to let that happen.


But as far as we know there is just such a singularity inside every black hole. (We don't know there really is, but unlike the ultraviolet catastrophe we have no evidence there isn't.)


So, it's like a bubble in a pressured bottle of water.


Doesn't the Planck distance imply quantization of space? At least in relative terms.


No, Planck units have no intrinsic physical significance. They're just convenient to work in because they set the numeric values of several different physical constants to 1. The Planck distance also happens to be roughly the length scale at which we expect quantum-gravitational effects to become significant, but roughly here means within a few orders of magnitude.
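For anyone curious, the arithmetic is simple: the Planck length is just the one combination of ħ, G, and c with dimensions of length (a quick sketch using CODATA values; nothing here refers to any measurement of space).

```python
import math

# hbar * G / c^3 is the unique combination of these three constants with
# dimensions of length^2 -- a unit conversion, not an observed granularity.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m/s (exact by definition)

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.3e} m")  # ~1.6e-35 m
```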


Oh, really? I totally misunderstood then. I thought I knew physics a little better than that! Thanks.

EDIT: Hmm, so doesn't the fact that this particular value makes the math simplified not also imply some kind of meaning?


> Hmm, so doesn't the fact that this particular value makes the math simplified not also imply some kind of meaning?

No. There are plenty of other sets of constants you can choose to set to 1, inducing other length scales. Only dimensionless constants have physical meaning in isolation; dimensioned quantities are meaningful only with respect to each other.


Thanks for the explanation. I had to google "dimensionless constant" lol. Maybe it is all just math at the foundation after all!


Fine structure constant, famous alpha ~1/137. It also means that you could multiply or divide what we know as Plank length and it still would be a "valid" unit to be a minimum length entity


Planck.


Right, sorry.


Wouldn't random noise work instead?


But space is quantised, that's what Planck's Constant is.

The oddity we see with things at relativistic energies is simply because - as with all discretised representations of continuous-time systems - things get a bit fucky close to Nyquist, and the maths breaks down.


That's not what Planck's Constant is. Planck's Constant connects energy to length (in space or time). It's just a way of shifting units around.

You're thinking of the Planck Length, which is the result of combining the Planck Constant with the speed of light. It also happens to be the wavelength of a photon so short that its energy makes it a black hole, connecting it to gravity.

It's often presented as if it were the fundamental unit of length, but that's a guess at best. There's no specific reason to believe it, other than the fact that it happens to be a way to connect length, charge, and gravity. That's evocative, but hardly proof. There's neither evidence for it nor a well-supported theory behind it.


[flagged]


I'm afraid I don't get the joke, but that's on me. I apologize.


I think it didn't land because so many people are in the comments seriously arguing almost exactly what you were saying as a joke. Tough environment for parody to thrive in, alas...


I'm currently spending a lot of time in La-La-Laplacian Land, so maybe my sense of humour is currently a bit warped. A bit like my filter responses, really.


HN as a forum is no better than a pub or bar in a wealthy city. You cannot spout off an extremely common layman misconception and expect people to recognize that as a joke. Most of us are laymen in this area.


There is zero evidence that space is quantized, and this would contradict the Standard Model, as well as General Relativity. The Planck constant gives the relation between the energy of a photon (or more generally a harmonic oscillator) and its frequency. It has nothing to do with the structure of space.


How exactly does quantized space-time contradict the Standard Model and General Relativity?


It would break Lorentz symmetry, for one.


[flagged]


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's against what this site is for, so we have to ban accounts that keep doing it, and we've already had to ask you this more than once:

https://news.ycombinator.com/item?id=36449159 (June 2023)

https://news.ycombinator.com/item?id=34194236 (Dec 2022)

https://news.ycombinator.com/item?id=33501516 (Nov 2022)

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


You’re thinking of the Planck scale, which is just a mathematical abstraction, not some physical thing we’ve observed.


> But when they tried to quantize gravity, they ran into unnatural infinities that had to be sidestepped with clumsy mathematical tricks.

Maybe they run into unnatural infinities because all of our formalisms in physics are still fundamentally continuous rather than discrete. Uncountable infinities are baked right into the foundations of how we reason about these systems, so infinities will naturally result. Physics has repeatedly had to tame infinities by elaborate tricks, or by eliminating them entirely [1].

Some people are increasingly looking to discrete formalisms, and I think this is a promising way forward, both for mathematics and physics.

[1] Struggles with the Continuum, https://arxiv.org/abs/1609.01421


I can't help but suggest "Information, physics, quantum: The search for links" [1] by the one and only John Wheeler. This philosophical paper is bold, bolder than most physicists ever would be.

> Abstract: This report reviews what quantum physics and information theory have to tell us about the age-old question, How come existence? No escape is evident from four conclusions: (1) The world cannot be a giant machine, ruled by any preestablished continuum physical law. (2) There is no such thing at the microscopic level as space or time or spacetime continuum. (3) The familiar probability function or functional, and wave equation or functional wave equation, of standard quantum theory provide mere continuum idealizations and by reason of this circumstance conceal the information-theoretic source from which they derive. (4) No element in the description of physics shows itself as closer to primordial than the elementary quantum phenomenon, that is, the elementary device-intermediated act of posing a yes-no physical question and eliciting an answer or, in brief, the elementary act of observer-participancy. Otherwise stated, every physical quantity, every it, derives its ultimate significance from bits, binary yes-or-no indications, a conclusion which we epitomize in the phrase, it from bit.

[1] https://philarchive.org/archive/WHEIPQ


Digital physics seems so obviously the correct approach from a philosophical perspective, it's a shame it blows up the math.

Real numbers are the most ironically named thing ever, and infinity is a thought experiment - not a real thing. Any model of the universe based on such constructs should be heavily suspect outside the range of established observations, and taking predicted limit behavior seriously is just foolish.


Speaking as a physicist I think its very far from obvious that digital physics (or anything else) is the correct approach from a philosophical perspective.

Real numbers are so ubiquitous in physics because space-time (and other quantities) look really continuous.


Rational numbers can produce "continuous" values to a precision way beyond what any equipment you could get your hands on could differentiate without physically ludicrous postulates.
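As a trivial sketch of that density claim (Python's exact rationals, purely illustrative):

```python
from fractions import Fraction

# Rationals are dense: between any two distinct rationals lies another,
# so a rational-valued model can be made finer than any conceivable
# instrument resolution without ever invoking the full reals.
a = Fraction(0)
b = Fraction(1, 10**40)  # a gap far below any physical measurement precision
mid = (a + b) / 2
assert a < mid < b
print(mid)
```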


That is certainly true. In fact you don't even need rational numbers; it is entirely possible that there are a finite number of positions in the universe, for example.

Nevertheless physics seems to work very nicely when expressed in the language of calculus. Everything from Schrödinger's equation to the Einstein field equations, and from classical mechanics to the standard model of particle physics, is expressed in the language of calculus. This all looks like a wild and strange coincidence if fundamentally we are living in a realm like the rationals, where calculus doesn't really make sense.


Unless the resolution of the rational model is fine enough to be indistinguishable from the continuous solution.


That would be very fine indeed, since the CMB doesn't seem to be pixelated, for example, and we don't observe numeric instability anywhere.


I wouldn't even assume that discrete theories must have a resolution that can be detected.


Any amount of quantization should have very noticeable indirect effects somewhere. Try implementing game physics and you'll see it.
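A toy version of that game-physics drift (an illustrative sketch, not a claim about any real engine): snapping each position update to a coarse grid accumulates a large, visible error.

```python
# Snap each per-frame displacement to a position quantum of 0.1 units.
# Over many frames the quantized trajectory drifts far from the exact one --
# the kind of indirect artifact quantization tends to produce.
step = 0.1
velocity = 0.333  # units per frame, deliberately not a multiple of step
pos_quantized = 0.0
for _ in range(1000):
    pos_quantized += round(velocity / step) * step  # snap the update
pos_exact = velocity * 1000
print(pos_quantized, pos_exact)  # roughly 10% drift after 1000 frames
```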


Agreed that digital physics is far from obvious, but the use of real numbers as our default model of the continuum is at least in part historical accident. We could have for instance easily ended up with locale-theoretic foundations instead, though I doubt the finitist crowd would find that any more satisfying.


I think naive approaches to digital physics have been inadequate, but newer works have made good progress on important questions. Arguably one of the biggest tools in the physicists' toolbox are symmetries, and there's now a good account for those:

A Noether Theorem for discrete Covariant Mechanics, https://arxiv.org/abs/1902.08997


> Physics has repeatedly had to tame infinities by elaborate tricks, or by eliminating them entirely

Or, in the case of black holes, by actually interpreting infinity as a real place in the universe.


If I were a betting man I'd say black holes have no singularity, but rather a core of extremely dense exotic matter (probably formed from top/bottom quarks) which we haven't detected because it decays quickly under less extreme circumstances.


I think it’s true that once an event horizon forms no physical force (known or hypothetical) can stop a singularity forming.

There may be some form of very dense matter that stops large stars from collapsing to the point where an event horizon forms in the first place, but that doesn’t seem to apply to super massive black holes.

For super massive black holes the event horizon grows too fast.


Singularities are a deficiency in GR, they don't really exist.


It takes infinite time for an event horizon to form (as seen by a distant observer); before that, it's just dense. It's time dilation that stops the formation.


I have always been fascinated by the problem of quantum gravity but, well, it somehow happened "too late", after I got also very deeply into "computing" (mostly HPC, GPGPU, rendering). Does anybody know if there is a way for somebody with my background to actually help/contribute in advancing this field?


Go work for a university in the physics department, they're always looking for help writing software ... of course it only works if you're willing to work cheaply ;)


I have a personal theory that we have not been able to crack quantum gravity and similar because it's too much for human brains in the same way that we have a job visualising 5d space and dogs don't understand relativity. In which case AI going beyond human brain limits might be the way to crack it. Go build it!


I always wonder if we could draw some parallels between a universe and a simulation.

Time -> reason why universe is expanding

Lots of quantum mechanic problems -> something that actually in theory could be simulated - but "lots of stuff is happening at the same time" (tons of fields and particles flying around, so things are "unpredictable")

Particles are not particles but manifolds

Time slows down around heavy objects like black holes - the same way servers slow down when they have this massive battle of thousands of ships in Eve online -> it is difficult to run a simulation

So the question would be, can we escape one layer up -> to the virtual machine what is simulating our reality? Same way some program can escalate privilege and see what is inside the system. What would be a good way to escalate privileges and break free from a virtual machine? And by this, I mean the theoretical machine that runs our reality?


Article is 404 right now, for me at least.


> Error 404. This page doesn't exist. At least not in this universe.

I like the error message though.


Lose the V1 at the end of the URL.


I wonder if the recent news about the background gravitational hum of the universe has any bearing on this:

https://www.sciencealert.com/breaking-news-physicists-have-d...

It essentially means that gravity is always in flux everywhere, at all frequencies of gravity waves, so there is noise in gravity. This is in addition to the background EM radiation in the universe. Gravitational and EM fields are infinite are they not? Maybe this is the source of gravitational randomness that Oppenheim is looking for. Although he seems to be positing that gravity itself has some inherent randomness.


Yeah the "gravitational hum" would not seem to be the same as gravity itself having some inherent randomness. On the flipside, in a very real sense if the cosmological history of our universe is such that we exist in a universe with an always present indeterministic hum, then for our intents and purposes this is the same as gravity itself being random... unless we can devise experiments to study gravity in other universes with different cosmological initial conditions.


The hum is merely the change in the shape of spacetime by all the gravitational events like black hole creation, mergers, and novae. In that sense it's not random, certainly not in the sense of a quantum measurement, say of position or momentum.


Do we actually know that the fields are infinite? I thought it was just the easiest way to model what we know and can observe.


I think the question would be, what happens at the edge of the universe? If the universe is finite, though expanding, then the waves can not be infinite unless they reflect at the edge, although one definition of the size of the universe might be that it extends as far as gravity and EM waves have traveled since the big bang. In which case I think the answer would be that if waves define fields then fields are not currently infinite nor will they ever be infinite, but they will always define the edge of the universe and keep traveling forever and over infinite time they will keep traveling infinitely. Which means they are essentially infinite, but not actually currently infinite, but that difference might not be much of a difference, even mathematically. It would suggest that EM radiation and gravity become more dilute in any one place over time though, as the universe gets bigger and the waves spread out more.

There is also the speed of light problem. If EM radiation and gravity can only travel at the speed of light, then when new fields are created they do not propagate to infinity instantly, it takes time. So there is a kind of localness to EM/gravity but given the age of the universe a huge number of waves have propagated very far, which may be close enough to infinite we can't tell the difference.

Matter can be created and destroyed so there can be new gravity fields created? Although I believe current thinking is that energy also exhibits gravity so maybe all the gravity that will ever exist is already here. That would be a difference in the nature of gravity versus EM waves since new EM radiation can be created while perhaps new gravity can not. Is a wave a perturbance that travels at the speed of light in a field that propagates instantly? Or is a field the propagation of a wave that travels at the speed of light? The idea of infinite fields suggests instant propagation, but gravity waves and EM waves certainly do not seem to propagate instantly. Maybe the idea of fields makes no sense and there are only waves?


Page is 404 at the moment.

Here's an archive.org link that works [1].

[1] https://web.archive.org/web/20230710160259/https://www.quant...


Then it seems like gravity isn't one field, but a conglomeration of fields yet to be discovered.


High integrity move for quanta magazine to allow space for this opinion.


Out of curiosity, what's the quantum angle limit? If position is quantized and all fundamental particles are isotropic, there must be a minimum "angle of turn" for all objects, and by extension, a "maximum angular accuracy" for something like, say, orienting the face of a macroscopic object to face a particular direction. How many of those fit in a circle?


I don't think position is likely to be discrete, but,

while I also don't really think that angles are discrete, angular momentum *is* discrete, coming in multiples of hbar, the reduced Planck constant.


Between space quantization (Planck length) and the size of the visible universe, the minimum angle is fantastically small.
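The arithmetic behind that claim, treating the Planck length as a hypothetical resolution (which, as noted elsewhere in the thread, it is not known to be):

```python
import math

# Smallest resolvable angle if positions were limited to Planck-length
# resolution at the edge of the observable universe (~4.4e26 m radius).
l_planck = 1.616e-35   # m
r_universe = 4.4e26    # m, approximate comoving radius of observable universe
min_angle = l_planck / r_universe
print(f"minimum angle ~ {min_angle:.1e} rad")
print(f"~{2 * math.pi / min_angle:.1e} such steps per full circle")
```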


The Planck length does not mean anything like space quantization. It's just a length that pops out when you combine several physical constants. The values of those constants mean that the Planck scale is likely to be roughly where gravitational and quantum effects are both relevant but other than that it has no physical significance.

I have no idea where this idea that space is fundamentally discrete, and divided up into chunks of Planck-length voxels or something, comes from, but there is absolutely no evidence for it.


>I have no idea where this idea that space is fundamental discrete, and divided up into chunks of Planck length voxels or something comes from

Like everything else in quantum physics, this comes from various people who don't even know algebra listening to people who work all day in algebraic fields and number theory try to explain concepts that don't make sense without those mathematical foundations and actual experience in physics papers.

No, a quantum computer isn't "trying all combinations at once". No, the Planck units aren't fundamental measurements of the universe. No, quantum entanglement doesn't allow you to transmit information faster than light.

etc.

I'm plenty guilty of this, I'm no physicist and I graduated after linear algebra, which I didn't do amazing in. The examples I reference are just the ones I was lucky enough to learn from people who managed to demonstrate and describe the underlying math enough to convey meaning. Plenty of educational and ostensibly knowledgeable physics and science youtube channels for example still run with those misconceptions.


Physicist bets humans can't find some way to write something down.

This isn't saying "physicist bets Moon will explode unleashing a kaiju."

Physicist cannot visualize what has yet to be visualized. Blah blah blah. The primates are so full of themselves.


What if gravity is the creator’s v2? She finally managed to figure out an elegant solution without all the weird quantum edge cases but never got around to applying the pattern everywhere.


> J. Oppenheim

I wonder what influenced him to become a physicist.


I'm just waiting for a person with the surname Oppenheimest now.


Fields aren't quantised, that's the problem, that's why gravity can't be quantized. It's a field!

Immediately, dozens of people are about to hit the reply button and say something like: "But all of modern physics is based on quantization!" or something to that effect.

The issue is that there's two ways to think of quantization, and for almost all practical experiments, there's no way to distinguish between the two.

Essentially, you can think of either the fields being quantized, or the interactions with fields being quantized. Unfortunately almost all experiments measure only interactions with fields, so there are few practical ways to distinguish between the two. The mathematics is largely equivalent as well, so physicists "picked one" of the two options and forgot about the other, equally valid option.

This is similar to the Veritasium video titled "Why No One Has Measured The Speed Of Light".[1] From inside the universe, it's basically impossible to measure the one-way speed of light, all experiments measure the two-way speed of light. So... we just assume that it's the same speed both ways. Right now, that works well enough. If it stops working, then we need to revisit that assumption instead of devising ever more complex mathematics to explain away our faulty assumption.

One difference between the "fields are quantized or not" issue and the "one-way speed of light" issue is that the former is testable. It's just that most experiments don't happen to test it.

Any experiment that uses atomic orbitals is conflating the quantization of atomic orbital levels with field quantization. There are experiments that don't involve orbitals, such as free electron lasers or radio waves. All such experiments show no quantization of fields.

Unfortunately, those experimental results are cheerfully ignored and hand-waved away. But stop and think about it: how exactly would you model a five-kilometre-long radio-wave quantum as a point? How could something like that be instantaneously absorbed? How would the "rest of the wave" know that the tiny detector had made it vanish? It's madness, clearly, but that hasn't stopped thousands of physicists from using this flawed model of quanta buzzing around in free space, because for short wavelengths like UV light you can make the mathematics work without paradoxes.

The channel Huygens Optics has some great videos[2] on the topic.

Another aspect of quantization is that for any wave in a continuum, you can model its behaviour in several ways mathematically that all arrive at the same numerical result, but "paint a different picture" in the imagination. For example, rendering something like light waves or sound waves bouncing around in a room can be done in two distinct ways: Either using local simulations with little oscillators at each volumetric point interacting only with their neighbours, OR as points moving around the space, bouncing around like particles, and carrying properties around with them such as intensity, wavelength, and phase.

The former is the "wave pool" approach, the latter is the "Monte Carlo" approach.

All of modern computer raytracing graphics uses the latter, not the former. Why? Because it requires less memory. The former scales as x^3 in the side-length 'x' of the volume; the latter requires only x^2 memory to accumulate the rays on the surfaces of the simulated volume, ignoring the intermediate states in the middle.
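A back-of-envelope illustration of that scaling difference, assuming an n×n×n simulation region:

```python
# Full volumetric ("wave pool") grid vs. surface-only ("Monte Carlo" ray
# accumulation) storage for a cubic region of side n cells.
for n in [100, 1000]:
    volume_cells = n**3        # every point in the volume holds state
    surface_cells = 6 * n**2   # only the six bounding faces accumulate rays
    print(n, volume_cells, surface_cells, volume_cells // surface_cells)
```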

Richard Feynman's QED is famously successful, and in large part it is practical because it uses the more efficient Monte Carlo simulation approach, treating fields as little particles bouncing around. This has entrenched the "fields are made up of little quanta" in the minds of entire generations of physicists.

It's just a mathematical trick! An efficient way to do integration! It's not the One True Path, an insight into the truth of the universe.

We all need to take a step back and revisit our assumptions, and try to get away from thinking of integration tricks as having explanatory power in and of themselves.

[1] https://www.youtube.com/watch?v=pTn6Ewhb27k

[2] https://youtu.be/ExhSqq1jysg?t=283


> Essentially, you can think of either the fields being quantized, or the interactions with fields being quantized. Unfortunately almost all experiments measure only interactions with fields, so there's few practical ways to distinguish between the two. The mathematics is largely equivalent as well, so physicists "picked one" of the two options, and forgot about the other, equally valid option.

That's basically what Quantum Field Theory is though. Fundamentally everything is treated as a field, and we focus on the nature of interactions within that field, which results in particle-like behavior.


> Unfortunately, those experimental results are cheerfully ignored and hand-waved away. But stop and think about it: how exactly would you model a five kilometre long radio wave quanta as a point? How could something like that be instantaneously absorbed?

Well, there are many acausal things in the quantum world, right? So it would work the same as any other "spooky action at a distance". Somehow it turns out fine because it can't be used to transmit information.

As for the rest of the comment I'm not well-read enough to have an opinion.


The photoelectric effect shows quantization and doesn't depend on quantization of orbitals.

Quanta are absorbed in a finite time, not instantaneously.


does this mean there's an ultraviolet catastrophe for gravitational waves?


IANAP, but...

> if these hybrid theories are true, there must be some minimal amount of gravitational noise.

... this already sounds suspiciously Heisenbergian, reminiscent of quantum foam, the Casimir effect and all that jazz.


Isn't this similar to the view of Penrose?


My understanding is that Penrose does believe that gravity is fundamentally quantum in nature. But his proposal is that gravity is connected to the collapse of the wavefunction. In his view, it is the exchange of a graviton that precipitates the wavefunction collapse. But this is still a fundamentally quantum theory because it posits that the gravitational field is quantized (and hence gravitons exist).


The one question that matters: is it testable


And now for something not completely different: an introductory lecture in Perturbative Quantum Gravity (PQG) by Prof. John Donoghue. The lecture is one in the Basics of Quantum Gravity online school sponsored by https://isqg.org/ the International Society for Quantum Gravity. PQG stands in contrast to the approach in the fine article at the top ("TFA") and to semiclassical gravity.

By coincidence a retweet this morning re-surfaced a tweet from a couple of weeks ago by @BahramShakerin linking to these three 2023 lectures in Perturbative Quantum Gravity given a few weeks ago at École Polytechnique Fédérale de Lausanne. This material may be useful to anyone genuinely interested in constructing GR as a "second quantization" QFT, in contrast to the Oppenheim classical gravity - second-quantized quantum matter (CQ) approach whose surface was scratched in the quantamagazine link at the top.

As there are many comments which raised questions or made ... uh ... assertions about things like gravitons, there is probably some interest here, thus this comment. Additionally, problems in semiclassical gravity and in perturbative quantum gravity motivated the work discussed in TFA.

Lecture outline and links to materials:

https://blogs.umass.edu/grqft/

https://indico.cern.ch/event/1279592/ (pqg PDF links at lower right; links to the lecture notes of related lectures are a little above that).

Lecture notes

https://arxiv.org/abs/1702.00319

Three videos of zoom lectures

https://www.youtube.com/watch?v=ik0C-xTnK-k

https://www.youtube.com/watch?v=lYc6oicfo3U

https://www.youtube.com/watch?v=UOsRyYTwOgs

and finally

Professor Donoghue's page:

https://blogs.umass.edu/donoghue/

Extra credit, 't Hooft's PQG lecture notes: https://webspace.science.uu.nl/~hooft101/lectures/erice02.pd...

The twitter thread is:

https://twitter.com/BahramShakerin/status/166874044067626189...

PS: QG2023 (by ISQG) runs yesterday (10 July) until 14 July, see https://www.youtube.com/@isqg423 for live streams and starting here https://indico.imapp.ru.nl/event/106/ for info about the conference.


The "Maybe Bigfoot is Fuzzy" interpretation of gravity?


Fun fact: Bigfoot has no gravity because he doesn't exist in this slice of the multiverse.


Putting it here, as it seems relevant.

Veritasium: Parallel World Probably Exist: Here's Why

https://www.youtube.com/watch?v=kTXTPe3wahc


The word "probably" there is completely meaningless.

There are many interpretations of QM and there is no compelling reason to choose one over the others. In particular the many worlds interpretation makes no testable predictions, it's just a story. There's absolutely nothing "elegant" about it. There's no math, no experiment, no explanation. A story like you read in some religious text or science fiction novel. It has nothing more going for it than that.

Practically, the most "likely" situation, if you want to define likely as what most researchers think and how they behave, is that most people are fans of "shut up and calculate". QM is what it is. That's it. We don't need crazy interpretations, particularly ones that don't contribute anything to our understanding.


The "popular science" takes on QM have generally been an enormous failure, focused on irrelevant fluff like this. I mean, I get it, it's a subject that you can't seriously understand without a degree in physics or chemistry. I had plenty of trouble with the math even at the undergrad level.

But surely there's more interesting stories to tell of eg particle accelerator research, rather than fantasies.


More importantly, if you don't have 4 years worth of college math, quantum mechanics is EXTREMELY BORING from a layman's perspective. It's a bunch of extremely dense and specific math with very loaded concepts and terms and has no meaning to anyone who doesn't have a strong understanding of classical physics.

"If you make a blackbody radiator nearly absolute zero, it should emit insanely high energy radiation based on our current empirical equations for radiated energy. This can only be fixed if energy can only be radiated in discrete sized packets, including a minimum size of packet"

"Whats a black body radiator?"

People don't understand quantum physics, its ramifications, or why it's interesting to physicists, because most people can't even be assed to do simple mental math at a grocery store to get themselves good deals, before going back to their algebra teacher to complain that they will never use math in real life
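Since the quoted example compresses a lot, here is a minimal numerical sketch of the ultraviolet catastrophe, comparing the standard Rayleigh-Jeans and Planck formulas (the temperature and frequencies are arbitrary illustrative choices):

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    """Classical spectral radiance: grows without bound as nu**2 (the UV catastrophe)."""
    return 2.0 * nu**2 * kB * T / c**2

def planck(nu, T):
    """Planck's law: quantized energy packets tame the high-frequency divergence."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

T = 5000.0  # kelvin
for nu in (1e12, 1e14, 1e16):  # Hz
    print(f"nu={nu:.0e}  classical={rayleigh_jeans(nu, T):.3e}  planck={planck(nu, T):.3e}")
```

At low frequency the two formulas agree; at high frequency the classical one keeps growing while Planck's quantized version falls to essentially zero.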


Shut up and calculate is putting your head in the sand and avoiding that physics should be telling us what the world is. You’re wrong about MWI in that it’s a more elegant interpretation because it adds nothing extra to the wave equation and treats the universe as fundamentally quantum with no arbitrary dividing lines for classically scaled objects.


> You’re wrong about MWI in that it’s a more elegant interpretation because it adds nothing extra to the wave equation and treats the universe as fundamentally quantum with no arbitrary dividing lines for classically scaled objects.

That's how it's often presented, but this is wrong. In fact, it does add something to the theory, and that's a measure of how many "worlds" there are after a quantum measurement, which helps translate the wave function values into testable probabilities (the Born rule).

In the CI, after a measurement, the wave function collapses into a single value, leaving the single world in a single classical state, with probability equal to the squared modulus of the wave function amplitude of that state (the Born rule).

In the MWI, after a measurement, different versions of the observer observe different states, and the number of versions of the observer observing each state corresponds to the squared modulus of the wave function amplitude of that state. Then, via simple frequentist probability, we can say this count corresponds to the actual probability that any one version of the observer will notice one particular state, even though in the actual multiverse all of the states actually happen.

As you can see, the two interpretations require the same amount of extra postulates above and beyond the wave function itself. Also, the MWI has to somehow define a formal notion of an observer/a classical world, which runs into questions of scale just as much as the measurement postulate of CI.
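For reference, the Born rule both interpretations are trying to recover: for a pre-measurement superposition, the observed probabilities are the squared moduli of the amplitudes:

```latex
|\psi\rangle = \sum_i c_i\,|X_i\rangle
\quad\Longrightarrow\quad
P(X_i) = |c_i|^2, \qquad \sum_i |c_i|^2 = 1 .
```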


> and that's a measure of how many "worlds" there are after a quantum measurement, which helps translate the wave function values into testable probabilities (the Born rule).

All interpretations need to induce a measure over observations (not "worlds") to produce meaningful results. Without that all you have is an abstract mathematical object.

> As you can see, the two interpretations require the same amount of extra postulates above and beyond the wave function itself.

CI isn't really a single thing. Some people use it to mean "shut up and calculate", which requires no postulates by virtue of making no meaningful claims. Some people use it to mean various sorts of subjective probability anti-realism, which is similarly not really competing for the same territory as MWI. And some people use it to mean objective collapse, which requires actual modifications to the formalism.


> All interpretations need to induce a measure over observations (not "worlds") to produce meaningful results. Without that all you have is an abstract mathematical object.

Agreed, but the MWI in particular does so by applying frequentist probabilities over all versions of an observer, the so-called worlds. The argument goes that there is an apparent non-deterministic process from the point of view of every individual observer, and that we can compute its probability based on how many observers would see a particular state versus the total number of observers.

For some reason, many MWI adherents want to claim that this is not an additional postulate, that MWI only needs the Schrödinger equation, but it clearly is a postulate in addition to that equation, just as much as the collapse idea in other interpretations.

You are right about the CI though, it's not really a useful term.


> For some reason, many MWI adherents want to claim that this is not an additional postulate, that MWI only needs the Schrödinger equation, but it clearly is a postulate in addition to that equation, just as much as the collapse idea in other interpretations.

I think you're misinterpreting them (us). The claim is that MWI requires one additional postulate, whereas collapse interpretations need at least two: they both need a way to make distributions into probability distributions over observations, but collapse additionally requires some way of dodging Wigner's Friend type scenarios: an objective classical transition, extra state beyond the wavefunction, outright antirealism, etc.


It's no more an extra postulate than tables and chairs are additional postulates to physics just because they're not part of fundamental physics: tables and chairs emerge from the underlying physics. The same goes for measurement, observers and probability in MWI. Or to put it another way, the baggage of classical physics being quantized misleads us into thinking measurement and probability need to be fundamental postulates.


> That's how it's often presented, but this is wrong. In fact, it does add something to the theory, and that's a measure of how many "worlds" there are after a quantum measurement, which helps translate the wave function values into testable probabilities (the Born rule).

The distribution of worlds/branches is determined by the wave function. A more likely outcome means there are many more worlds with that outcome. You can calculate the percentage of worlds that have that outcome.

> Also, the MWI has to somehow define a formal notion of an observer/a classical world, which runs into questions of scale just as much as the measurement postulate of CI.

Measurement, observer and classical shouldn't be part of a physical theory. The answer as to why things appear that way to us is decoherence.


> A more likely outcome means there are many more worlds with that outcome. You can calculate the percentage of worlds that have that outcome.

First of all, this is a new postulate of QM; you can't derive it from the Schrödinger equation. It is perfectly equivalent to the measurement postulate.

> Measurement, observer and classical shouldn't be part of a physical theory. The answer as to why things appear that way to us is decoherence.

This contradicts the other part, where you were talking about a notion of worlds that can be counted. If they can be counted, they have to be defined as classical worlds. Decoherence only explains why worlds can't interact with each other, it doesn't help define what they are without appealing to measurements. Even the notion of "the environment" is somewhat ill defined if we go down to the philosophical level.


Measurements are interactions that result in entanglement. Atoms radioactively decaying in your body are interactions. The environment is the entire universe as it becomes entangled with a quantum event.


Entanglement is not enough to give you the properties of measurements. While it is often presented as "when a spin-0 particle decays into two particles, one must have spin +1 and the other spin -1", that is not what the math actually says.

What the math actually says is that the joint state of the pair is a superposition of "first particle +1, second particle -1" and "first particle -1, second particle +1", with amplitudes whose squared moduli sum to 1. For example, the amplitudes could be sqrt(1/3) and sqrt(2/3); neither particle on its own is then in a pure state at all. And yet, we only ever observe definite pure states (with some probability) from our own perspectives. So, there must be some additional law that favors these states.
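Written out (the amplitudes α and β are illustrative), the spin-conserving entangled state of the pair is:

```latex
|\psi\rangle = \alpha\,|{+1}\rangle_A\,|{-1}\rangle_B
             + \beta\,|{-1}\rangle_A\,|{+1}\rangle_B,
\qquad |\alpha|^2 + |\beta|^2 = 1 .
```

Neither particle alone is in a pure state here; tracing out its partner leaves each one in a mixed state.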


Decoherence from entanglement with the environment destroys the coherence of the superposition, so we only see the pure state, as I understand Sean Carroll's argument. At this point in the video below he goes over the spin states.

https://youtu.be/LGtimjuA5gA?t=2545


> the MWI has to somehow define a formal notion of an observer/a classical world

Yes: in MWI, those things don't exist. The world is quantum all the way up and all the way down; observers are simply (other) quantum systems that get to interact with the quantum system under consideration. An observation, then, is simply an interaction between two quantum systems, and it follows all the usual rules. So instead of the wave-function collapse leaving you with the observed system in pure state X and the observer in pure state Y, you end up with a huge superposition of "the observed system is in pure state Xi, the observer is in pure state Yi" states. Those substates, in a sense, are the multiple worlds.
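Schematically, this is the von Neumann measurement scheme: an observer that starts in a ready state |Y_0⟩ ends up entangled with the system,

```latex
\Big(\sum_i c_i\,|X_i\rangle\Big)\otimes|Y_0\rangle
\;\longrightarrow\;
\sum_i c_i\,|X_i\rangle\otimes|Y_i\rangle .
```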


You're missing the point I was highlighting. Quantum mechanics makes very specific, very precise predictions about the probabilities of an observer seeing any of those states, which we have confirmed are correct to extraordinary precision.

The problem is explaining the relationship between these observed probabilities and the wave function amplitudes. If we say that all possible quantum states are realized in the universal wave function, we then need to explain why different states have different probabilities to an observer. The only way to do that in the deterministic world of MWI is to add a new postulate: one that says that for any state Xi, the number N of observers in state Yi is proportional to |psi(Xi)|^2, so that we can compute regular frequentist probabilities of an observer seeing a particular state over the total number of observers.

This is perfectly reasonable, but it is just as much an extra postulate as the measurement postulate.


> Shut up and calculate is putting your head in the sand and avoiding that physics should be telling us what the world is.

It's how physics is done. PhD students don't sit in physics departments getting ideas from their interpretation of QM. The interpretations people have are so meaningless and useless they never appear in physics publications.

You don't like it, but it's what physics is.

> You’re wrong about MWI in that it’s a more elegant interpretation because it adds nothing extra to the wave equation and treats the universe as fundamentally quantum with no arbitrary dividing lines for classically scaled objects.

That's the thing about most QM interpretations, and in particular the many worlds interpretation, and why they aren't science, just stories: you're right, they add nothing. So they are untestable!

What does untestable mean? It means that they make no predictions about our world at all. They are stories. Their impact on physics or the universe we live in is the same as that of Winnie-the-Pooh. They're a waste of time.

> avoiding that physics should be telling us what the world is.

What can I say to "should"? I can only report what physics is in the real world. That's like saying, biologists "should" be working on building Jurassic Park because that's what you feel is the goal of the field. It's not. That's not what they do.


> It's how physics is done. PhD students don't sit in physics departments getting ideas from their interpretation of QM. The interpretations people have are so meaningless and useless they never appear in physics publications.

They do if they're going into foundations of physics. There's a sociological problem where the Copenhagen interpretation won out in the past and resulted in the shut-up-and-calculate orthodoxy in teaching physics. But there are physicists like Sean Carroll who think that's a historical mistake based on failing to think about QM correctly.

https://www.preposterousuniverse.com/podcast/2023/06/26/241-...

> That's the thing about most QM interpretations, and in particular the many worlds interpretation, and why they aren't science just stories: you're right, they add nothing. So they are untestable!

Some make testable predictions like the collapse theories. For the others, it's more about the proper way to understanding what the mathematical formalism and experimental results are telling us about the world, and how we can use that to advance future physics.

> What can I say to "should"? I can only report what physics is in the real world.

And there have always been physicists like Einstein, Everett, Bohm and Bell who pushed back.


> They do if they're going into foundations of physics

People don't do PhDs on this topic because it's not productive. Famous/bored/tenured/rich people pitch in arguments for fun. Anyway, the vast majority of work in foundations of physics has nothing to do with these interpretations https://link.springer.com/journal/10701/volumes-and-issues This is a niche inside a niche in a mostly irrelevant field.

No advance in physics has ever been made based on QM interpretations. No Nobel has ever been won. No phenomena have ever been discovered this way. It's just not what people work on and it's not how physics is done.

You're determined that your view of the world is right, and that of actual working scientists is somehow blind orthodoxy that's missing the point. There's nothing I can say to fix that. It's like arguing with an antivaxer. You think you know more than the people doing the work. Bye stranger! But... it's worth considering that here and elsewhere in life you probably don't know better.


I'm just repeating arguments that physicists like David Deutsch, Sean Carroll, David Wallace and Sabine Hossenfelder have made, because they sound like good arguments to me. Better than ignoring the issues you get with shut up and calculate. Maybe you should take the issue up with them? Perhaps you think throwing out the term antivaxer makes you smarter than Einstein, Schrödinger and Bell?


This is like watching Star Trek and reading a book on the science of Star Trek where people make wild conjectures and coming away thinking that "Oh wow, scientists are working hard on warp drives!".


> In particular the many worlds interpretation makes no testable predictions, it's just a story.

Yes, that's what "interpretation" means. If it made predictions that distinguished it from other interpretations, it would be a competing theory to quantum mechanics - and so far all such attempts have failed.


I watched it a long time ago. I just put it here because it presents the idea visually, not because I support or even understand it :)


This. So much this. And also https://xkcd.com/1240/


>> Jonathan Oppenheim, who runs a program exploring post-quantum alternatives at University College London, suspects that’s because gravity simply can’t be squeezed into a quantum box.

So... he is still part of the Orthodoxy. He is challenging ideas, but as an employee of University College London he is hardly any sort of outsider to the physics community.


This sort of objection only arises because titles say baity things like "challenging orthodoxy". There's no information there—not in the bait and not in the objections the bait triggers.

The solution is to have a more substantive title. Fortunately the article supplies one of those in its HTML doc title, so we've switched to that (plus the s/the/a/ debaiting trick).

Let's focus on the article content now please.


You have to be trained in physics to theorize about it. Thinking otherwise is the realm of the crackpot. https://www.youtube.com/watch?v=11lPhMSulSU


I’m a quantum physicist and you wouldn’t believe how many people try to sell me on their theory that solves everything, which is literally just an idea in their head and can’t even produce any quantitative predictions. The level of self-delusion these people display is astounding, and we need to mock it more openly imo


The newage people are the worst at this.

Beware, you may want to gouge your own eyes out: https://www.watkinsmagazine.com/quantum-life-of-healing-crys...


There is this thing that I like to call 'the cult of stupid', where people are somehow proud of their lack of knowledge and are not afraid of putting it out there. They tend to laugh when asked pointed questions because in their world 'truth' has no value; it's all about belief. One family member of mine wears healing crystals because they keep her cancer free. It's really sad, and it irks me that, given the excellent educational opportunities we have, so many people choose willingly not to avail themselves of those opportunities. But they do have opinions on all that stuff that you need to work at to learn.


If I have a crackpot theory and I'd like a real physicist to look it over, and am willing to pay for their time, where would I find one? Asking for a friend



Agreed. Those who confuse “astronomy” and “astrology” are confused.


What do you intend to achieve by mocking these people?


To make them feel bad (I am salty)


I love Dr. Collier's videos. Her method of storytelling/video essay is extremely unconventional but very effective.


Me too bestie, I haven't taken physics since undergrad but I appreciate that it's a layer of understanding above popular science (that never covers any of the math) while remaining accessible.


Genuine question here, when was the last time a total outsider made a significant contribution to physics?


https://en.wikipedia.org/wiki/STEVE

https://en.wikipedia.org/wiki/Subauroral_ion_drift

Not exactly the discovery of a new quark, but it triggered real-world research into the cause of the phenomenon. That qualifies as a contribution imho.


Ask yourself; why are they outsiders in the first place?

It's like asking why hasn't anyone who is not a player in major league baseball ever won a world series?


Those are disanalogous in the case of theoretical physics where you don’t need access to any expensive equipment. There’s nothing in principle preventing someone from learning math and physics on their own or from a traditional university and then coming up with new theories alone. And in fact this happens all the time and most of those people get called crackpots.


A better analogy is asking why someone who is bad at baseball isn't on an MLB team


When was the last time an insider made one?


Somewhat off topic, and somewhat related: I don't think there actually is such a particle as a graviton.

General relativity says that you can't tell the difference between being unaccelerated, and being in free fall. But in one case you have no gravitons coming in, and in the other you have gravitons. I can change whether gravitons are there or not by a (general) relativistic transformation.

But that's not possible. Either the gravitons are there, or they aren't. There's not two sets of reality of what particles exist for two different coordinate systems.

Therefore gravitons don't exist. The bending of spacetime is what's really going on, and there is no quantum version of the gravitational field. (Except possibly that the bending of spacetime could be quantized, but that's not what we mean by a graviton.)

I am very open to being shown to be wrong here. Can anyone do so?


> But that's not possible. Either the gravitons are there, or they aren't. There's not two sets of reality of what particles exist for two different coordinate systems.

No, this is completely false. Particle number is coordinate system and frame dependent.

Quantum particles aren't little billiard balls bouncing around, and it's really best to not think of them as 'particles' at all. They're just a calculational tool to describe excitations in quantum fields.


Can you provide an example of a Lorentz boost that changes the number of particles?


It's pretty general. Just draw the worldlines for some particles bouncing off each other, and then perform a Lorentz boost, which means choosing a new tilted spatial surface to intersect the worldlines. After a boost, the spatial surface can intersect fewer or more worldlines, and worldlines that were particles in one frame can become antiparticles in another.

Consider an electron absorbing a photon at spacetime point x, and then emitting a photon at spacetime point y, where (y0 - x0) > 0.

An observer in another Lorentz-boosted frame (coordinates x' and y', rapidity v) would then see that (y'0 - x'0) = cosh(v)(y0 - x0) - sinh(v)(y1 - x1)

For large enough v, then you can have y'0 < x'0, so the observer in the boosted frame would see the events in a different order. One observer sees a negatively charged electron moving from x to y, and the other observer sees a positively charged positron moving from y to x.
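A quick numerical check of that formula (units c = 1; `eta` is the rapidity, and the event separations are arbitrary illustrative values):

```python
import math

def boosted_dt(dt, dx, eta):
    """Time separation of two events in a frame boosted with rapidity eta (c = 1)."""
    return math.cosh(eta) * dt - math.sinh(eta) * dx

# Spacelike-separated events (|dx| > |dt|): a large enough boost reverses their order.
print(boosted_dt(1.0, 2.0, 0.0))  # positive: original order
print(boosted_dt(1.0, 2.0, 1.0))  # negative: order reversed in the boosted frame

# Timelike-separated events (|dt| > |dx|): no boost can reverse the order,
# since cosh(eta) > sinh(eta) for every eta.
print(boosted_dt(2.0, 1.0, 5.0))  # still positive
```

This is the point made in the sibling reply: order reversal under a valid boost is only possible for spacelike separations.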


If the events in frame 1 are causally connected, then to see the events in frame 2 in reverse order takes a boost of velocity greater than c, which is not a valid boost.

Sure, you can do all kinds of things with a boost like that. It's not physically realizable, though.

Can you show me an actual experiment to the contrary?


Compton scattering with a space-like separation between the absorption and emission points is a well-known phenomenon in quantum field theory. Classically these events would be causally disconnected, but in a QFT the propagator is non-zero and instead has an exponential suppression in the space-like interval.

This is a phenomenon with experimental consequences and uses: https://arxiv.org/abs/1301.3819 https://www.nature.com/articles/s41567-019-0774-3
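For a field of mass m, the textbook result is that the Feynman propagator does not vanish at spacelike separation but falls off exponentially:

```latex
|\Delta_F(x-y)| \sim e^{-m r}
\quad\text{for large } r = \sqrt{-(x-y)^2}\ \text{(spacelike)} .
```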


Interesting. I did not know that.

Still, we were originally talking about gravitons. "Exponential suppression in the space-like interval" means that it cannot apply to gravitons (or at least, it cannot apply to gravitons on astronomical scales).

So, backtracking the argument a ways: Do you have any examples of the numbers of particles being different in different frames or coordinate systems that works at all distance scales?


What makes you think this can't apply to gravitons, or more generally, what makes you think that gravitons are relevant over large distance scales?

> Do you have any examples of the numbers of particles being different in different frames or coordinate systems that works at all distance scales?

Sure, it’s called the Unruh effect. Any accelerated frame (or observer in a gravitational field) will see a different particle vacuum than observers in an inertial frame.

This means that if you start in empty space and then accelerate, the space will suddenly not look so empty and will be at a higher temperature with more fluctuating particles.
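For reference, the Unruh temperature seen by an observer with proper acceleration a is

```latex
T_U = \frac{\hbar a}{2\pi c k_B},
```

which is tiny for everyday accelerations: roughly 4 × 10^-20 K for a = 1 g.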


That example, I think, is tangential. The better example is free fall in a gravity well vs. being accelerated upward, as in an elevator in space far, far away from gravitational sources.


They asked for an example with Lorentz boosts, so only inertial frames.


> There's not two sets of reality of what particles exist for two different coordinate systems.

Look up Unruh radiation for an example of such particles. Gravitons wouldn't be the first.

https://en.m.wikipedia.org/wiki/Unruh_effect


The question of how can the Equivalence Principle work in the presence of gravitons comes up surprisingly often!

You will enjoy this discussion https://physics.stackexchange.com/questions/589074/why-can-t...

They go over several reasons why gravitons don't break the equivalence principle. In particular the answer about virtual particles is very important.


Saying one theory must be wrong because the other must be right is begging the question. Where general relativity and quantum theory disagree, we don’t know which one is correct (or maybe neither is).

Highlighting a point of disagreement between the two doesn’t, by itself, resolve the disagreement.


> General relativity says that you can't tell the difference between being unaccelerated, and being in free fall.

Just to be clear, GR only says this for a very specific case, and a very specific observer. In fact, one of the easier exercises when first learning how to work with metrics is to show how to measure the difference.


In relativity, gravity is not a force like electromagnetism. Instead, spacetime is curved, and as a result the object itself is travelling in a straight, unaccelerated line; it only looks like it is accelerating to an observer outside of the curved space.

So relativity has no gravitons and instead curves space.

In quantum physics the opposite is true: space is flat, and gravity is like other forces, the result of particle exchange.

This is one way that the two systems are incompatible.

Again, happy to be corrected...


This is incorrect. A classical limit of the graviton picture fully reproduces general relativity. The incompatibility lies in the non-renormalizability of attempts to quantize the Einstein-Hilbert action, meaning that all the quantum loop corrections have unfixed coupling constants which can't be derived from the classical theory (unlike electromagnetism).


"General relativity says that you can't tell the difference between being unaccelerated, and being in free fall"

I think this is a misunderstanding. The equivalence principle says you can't tell the difference between standing in a gravity well and accelerating in empty space, with the example of a guy in an elevator typical here. Both cases feel like acceleration.

My question: does an accelerating elevator due to motors and pulleys curve spacetime too?


Surely gravity must be quantized at the Planck scale. The question is whether that remote scale is the only relevant one, and whether there might not be interesting phenomena governing the interplay of the quantum universe with gravity and geometry at larger scales and lower energies.

The footprint of fundamental forces across scales is definitely not uniform. Strong and weak forces are confined to the quantum regime: no long range manifestations. Electromagnetism differs in that it has both a quantum regime and a classical field limit. Gravity may exhibit its own quirky behavior across scales.

The enormous intellectual effort to unify all fundamental forces into a grand scheme may have been a case of premature optimisation.

The most exciting outcome would be if we could break with endless paper universes and devise experiments probing the interplay of gravity with the quantum.


If you believe in the Everett Interpretation, then he is wrong.

The Everett Interpretation, aka Many Worlds, holds that both observer and observed are quantum mechanical systems. From this it follows that the act of observation does not produce a "collapse", but it does separate the observer into multiple observers that can't interact with each other again.

So Schrödinger's cat is in a superposition of alive and dead before the box is opened, and after it is opened the observer is in a superposition of one who saw the cat alive, and the other who saw the cat dead. It feels bizarre, but there are no contradictions. And it is what quantum mechanics predicts.

For those who believe in this interpretation, there is a simple test of quantum gravity. Set up 2 Cavendish experiments to measure gravity. Based on whether there is a click in a Geiger counter, choose which one to put a cannon ball next to. Measure both.

This has been done, and we only see the gravity from the cannon ball that we placed, and not the alternate location it might have been placed at.

But believers in other interpretations of quantum mechanics will disagree that this experiment tests anything at all.


I’m a “believer” in the Everett Interpretation (on the grounds that it’s the simplest interpretation) but I can’t see how such an experiment could prove anything. This experiment seems to assume we can keep the whole apparatus including the Geiger counter and the mechanism to move the cannon ball from becoming entangled with the environment (decohering). This seems very far outside the realm of current technology.


It shows that when there is a superposition of quantum mechanical states, you also get a superposition of two different space-time structures.

The reason why the Everett Interpretation is needed for the experiment to be valid is that, according to it, decoherence results in a superposition of quantum mechanical states, not a collapse. If you believe in another interpretation, there has been a collapse, and so the experiment demonstrates nothing.


Once decoherence has occurred, I don’t think it’s meaningful to say that the system exists in a superposition of states. The multiple decohered branches of the wave function can no longer interfere with each other, and so there’s no way for them to affect each other, which is why we call them separate worlds. Within each decohered branch there’s an experimental apparatus and a human that sees one specific outcome. This was Everett’s main insight: each observer in each branch experiences what looks like a collapse but is really just their brain becoming entangled with one or another of the measurement outcomes.


According to quantum mechanics, they can no longer interfere with each other.

But we are bringing gravity into the mix. Gravity works differently than the other forces. Do space-time structures also exist in a superposition of states? Or does QM work against an essentially classical gravitational space-time background?

Without a theory combining quantum mechanics and gravity, the answer is not obvious. The fact that the superpositions do NOT interfere gravitationally means that either there was no quantum mechanical superposition (collapse actually happened) or that gravity can also exist in superimposed states. If you believe in the Everett interpretation, then there must have been a quantum mechanical superposition, and this tells us something non-trivial about a quantum theory of gravity.


> This has been done, and we only see the gravity from the cannon ball that we placed, and not the alternate location it might have been placed at.

Doesn't this presume that your brain deciding where to place the cannon ball was an uncertain quantum event?

Doesn't it also presume quantum gravity does not exist?


Your brain is not deciding where the cannon ball is; it is predicting where it is and then making that prediction a certainty. That is the collapse of the wave function: turning a probability into a certainty.


No:

> Based on whether there is a click in a Geiger counter, choose which one to put a cannon ball next to.


In Everettian Mechanics, decoherence occurs long before the box is opened. The photons in the box interact with the cat. That entanglement with the environment which causes the branching of the wave function occurs then.

The observer that opens the box is already either in a branch where the cat is asleep (to borrow from Sean Carroll) or awake.

The interesting thing to come to understand is that probability doesn't actually seem to exist as a real feature in the universe. Rather, probability is a measure of what observers believe they will or are seeing. Probability, in Everettian Mechanics, is about perception of the universe, not a feature of the universe itself.


The cat is all entangled with the other stuff in the box, but by assumption the stuff inside the box is not entangled with the stuff outside the box.


But the observer is in all “probabilities” no?


There are observers in all probabilities of the experiment, but each observer only observes their own world, because decoherence suppresses interference with the other worlds. "Worlds" here being the entanglement of the entire experimental setup spreading to everything else.


> If you believe in the Everett Interpretation, then he is wrong.

If.

> And it is what quantum mechanics predicts.

It is an interpretation of what the equations predict. It is very much not the only possible interpretation, so it is not "what quantum mechanics predicts".

> But believers in other interpretations of quantum mechanics will disagree that this experiment tests anything at all.

Within other interpretations of quantum mechanics, this tests nothing at all.


Where would energy come from for creating each universe?


I think this question stems from a fundamental misunderstanding of the Everett interpretation...the idea of "many worlds" (a phrase I do not favor specifically because of this very misunderstanding!) evokes this imagery of an entirely new universe BURSTING forth dramatically upon the collapse of the wave function, and as an entirely new universe is "created," the amount of extant energy is literally doubled (or multiplied by many times more). Indeed, where does this energy come from?

What the Everett interpretation suggests is that in the same way you are comfortable with the function f(x)=y having (infinitely) many values (you don't HAVE TO choose an x, ie the function doesn't collapse), the wavefunction simply is what it is and doesn't collapse either. What we observe as the collapse of a function is simply an artifact of being a conscious being that can only observe one value of the wavefunction at any particular moment.

None of these unique values (observables) of the wave function can interact with each other (in the same way the value of x=2 doesn't "interact" with the value of x=8 for f(x)=y), so energy cannot be gathered or duplicated at any particular point of the wave function, so this energy creation/duplication is no issue.


The question is a misunderstanding. In the pure wave (Everett) interpretation, there are no universes being created when the wave function branches. Rather, different regions of the wave function become separated (decohered) from each other. https://physics.stackexchange.com/questions/41588/many-world...


Sean Carroll, following Everett, puts it in the most concise form: (i) systems are described by wave functions, (ii) wave functions obey the Schrödinger equation [1]. "Many-worlds", universes, observations, observers, and so on become just entia multiplicanda [2], superfluities.

[1] https://www.youtube.com/watch?v=nOgalPdfHxM, one set of rules in quantum mechanics at 36:00; at 1:13:14 where does the energy come from? energy of the set of all universes is conserved.

[2] https://en.wikipedia.org/wiki/Occam%27s_razor
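Spelled out, those two postulates amount to a single dynamical law; branching, decoherence, and apparent "worlds" are all downstream of it:

```latex
% (i) a system is described by a wave function |psi(t)>
% (ii) it evolves only by the Schrodinger equation, i.e. unitarily:
i\hbar \,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle
  = \hat{H}\,\lvert\psi(t)\rangle,
\qquad
\lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,\lvert\psi(0)\rangle .
```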


Maybe the implementation uses copy-on-write to minimise the amount of work required? ;-)


Why would it necessarily require any energy?

Energy is what we have to expend to effect change within a universe. There may be no requirement for energy to split into a branching multiverse.


"Where would energy come from for creating each universe?"

Why does it have to come from somewhere? What if it just is?


Among other problems. It's just bollocks on the level of "but you didn't say it couldn't be this, nanananana."


I think because of the abuse of "many worlds" in science fiction media (especially as of late) as a convenient plot device, people develop this idea that it's an unserious proposal wrt the foundations of physics. As far as I have read, it strikes me as perhaps the most parsimonious interpretation of QM out there.

Genuinely curious, what do you mean by "but you didn't say it couldn't be this, nanananana...?" The Everett interpretation isn't some fantastical notion spun up by a scifi writer...it is simply the consequence of removing collapse of the wavefunction as an objective event from the picture. And if it turns out our observations wouldn't be changed by removing this feature, then perhaps it was an extraneous feature in the first place!


Assuming many worlds, and many here means enormous amounts, far exceeding the number of particles in the universe, is anything but parsimonious. There happens to be a model that fits some data, but that's it. It's a grotesque assumption to avoid a conflict in a man-made theory. It's a funny thought, but no more than that.

There's also nothing special about observing. Our consciousness isn't super-natural, so the idea is in desperate need of some other underpinning.

And probabilities: if this is one of many, many worlds, the distribution of events as we can observe them is heavily skewed. The next observations should follow a radically different pattern, unless you also assume that each split influences the probabilities of future events.


> Assuming many worlds, and many here means enormous amounts, far exceeding the number of particles in the universe, is anything but parsimonious.

"Many worlds" is a misnomer. MWI is just wavefunction realism + unitary evolution. There's only one world, and really only one dynamical object: the wavefunction. It evolves according to some unitary operator, and that's the whole story. No splitting, no collapse, no objective classical transition, just quantum mechanics taken at face value.

> There's also nothing special about observing.

Yes, that's the MWI position.


Is it possible to observe anything at all in MWI? If yes, what does it mean to "observe something" in MWI? If not, is there any physical content in MWI?


Observations in MWI are just ordinary physical interactions. Specifically they're perturbative-regime interactions between the thing being observed and large thermalized systems (e.g. humans). From the perspective of the thermal bath, you get exponential suppression of everything but the eigenstates of the interaction Hamiltonian, which is why our observations "look classical".
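That exponential suppression can be sketched with a toy model (my own illustration, not from the comment, with made-up amplitudes and overlap): a qubit entangles with n environment qubits whose "record" states overlap by a factor c each, so the coherence term of the reduced density matrix scales as c**n.

```python
import numpy as np

# Toy decoherence sketch (illustrative assumptions: amplitudes a, b and
# per-qubit overlap c are invented). A system qubit a|0> + b|1> becomes
# entangled with n environment qubits; each imperfectly records the
# system state, with overlap c = <e0|e1> per qubit. Tracing out the
# environment leaves a reduced density matrix whose off-diagonal
# ("coherence") term is a*conj(b)*c**n: suppressed exponentially in the
# number of environmental degrees of freedom, which is why big thermal
# systems make things "look classical" so fast.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)  # system amplitudes
c = 0.9                                # overlap per environment qubit

for n in (1, 10, 100):
    coherence = a * np.conj(b) * c**n
    print(n, abs(coherence))  # shrinks exponentially as n grows
```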


Will other large thermalised systems (e.g. rocks) also experience observations that "look classical"?

Another MWI proponent here is talking about how the "look classical" thing is "simply an artifact of being a conscious being that can only observe one value".

Is there something special about "large thermalised systems" (and/or humans)? How large have they to be to allow for "our observations"? Where is the boundary between the thing being observed and the observing thing?

MWI seems to still face most of the difficult questions - if not all.


> Will other large thermalised systems (e.g. rocks) also experience observations that "look classical"?

The inner life of rocks is somewhat beyond our reach, I'm afraid. But they'll induce decoherence in the same way as a human, yes.

> Is there something special about "large thermalised systems" (and/or humans)?

Aside from being large and thermalized? No.

> How large have they to be to allow for "our observations"?

It's a continuum. The more internal degrees of freedom you have, and the more thoroughly they're mixed, the faster you'll decohere things.

> Where is the boundary between the thing being observed and the observing thing?

At the level of fundamental physics, there isn't one. "Observation" is an approximate and thermodynamic notion.


> But they'll induce decoherence in the same way as a human, yes.

From the perspective (?) of the (non-human) thermal bath, will that decoherence result in a single (diagonal, mixture) state or in a particular state of those N separate states that would "look classical"?

Decoherence doesn't make things "look classical" by itself - at least until you define what "looking" is.


Depends on whether you're talking about the "perspective" of the whole joint |Rock>|Interaction Eigenstate 1> + |Rock>|Interaction Eigenstate 2> ... system or just one of its components.

> Decoherence doesn't make things "look classical" by itself - at least until you define what "looking" is.

Of course; but that's true of every scientific theory. Decoherence solves the preferred basis problem, not the hard problem of consciousness.


> Decoherence solves the preferred basis problem, not the hard problem of consciousness.

Ok, I guess I misunderstood the scope of "There's also nothing special about observing."

It's not clear to me if you (MWI) would say that rocks had defined positions when nobody was observing them - or whether the question of things having definite positions (and the very existence of those things) wouldn't even make sense in the absence of consciousness.


> Ok, I guess I misunderstood the scope of "There's also nothing special about observing."

There's nothing special about the physical processes constituting a scientific experiment. They're unitary evolution like everything else. Whether there's anything special about conscious experience (for my money: obviously yes) is outside the scope of physics.

> or whether the question of things having definite positions (and the very existence of those things) wouldn't even make sense in the absence of consciousness.

If you want to be perfectly precise, the MWI does not contain discrete things at all, any more than the Earth objectively has discrete continents and seas. But the most accurate map is not always the most useful one.


If the MWI doesn’t contain the Earth, the seas or the molecules of water it may not be providing a very useful map until we add a lot of things to it.


Correct, it is an enormous amount...Under Everett, the "number of worlds" practically exists on a continuous spectrum, so basically an infinitude of worlds. But this doesn't violate parsimony in terms of building a model of reality. I don't have the greatest comprehension of the history of science, but I imagine the behavior of water and gas was not initially explained as being the collective behavior of moles (6.02x10^23) and moles of individual molecules. The parsimony of thermodynamics is measured by the simplicity of the underlying equations that describe their behavior, not the staggering number of atoms involved.

As for observation, I didn't suggest there was anything special about observing. In fact, this is part of what makes Everett among the most parsimonious interpretations: While some (not all) other interpretations are tasked with explaining the nature of observation (what counts as observation, how quickly does collapse propagate, etc), under Everett, there is no notion of observation at all. My point about consciousness is that, being conscious, we are forced to have a point of view which depends on where we exist in/on the wave function. I wasn't suggesting that consciousness has any actual effect whatsoever on the wavefunction.

I admit that the nature of probability is among the most difficult parts of Everett to wrestle with, though I'm not sure I understand why subsequent observations should follow a radically different pattern? It's still the same wavefunction with the same distributions as before...Schrodinger's equation is the same wherever you are on the wavefunction.


Many worlds just falls out of superposition and entanglement. Those happen in quantum systems, and we are made of quantum systems. To avoid that you have to stipulate something additional that avoids putting macroscopic objects like devices, cats and brains into superpositions.


> it is simply the consequence of removing collapse of the wavefunction as an objective event from the picture. And if it turns out our observations wouldn't be changed by removing this feature, then perhaps it was an extraneous feature in the first place!

> What we observe as the collapse of a function is simply an artifact of being a conscious being that can only observe one value of the of the wavefunction at any particular moment.

If we simply assume that our observations are as if a wave function happened we can indeed have our Schrödinger cake and eat it too.


Where do the eggs come from when you cut a cake?


This is a good summary. I like to put it the following way as an alternative: instead of imagining that you are the researcher, imagine that you are the cat.


Why would anyone believe such a thing?


Because it’s very straightforward, has fewer assumptions and weird features than most other interpretations, and it has seen the most advancements in quantum fundamentals research.


Complete layman here -- does that theory say that every time a quantum effect does or does not occur, in fact both possibilities happen but in different worlds?

The entire universe instantly splits up in two for every quantum event anywhere in it?

Edit: I read more comments and no, it's not literally that.


Yes, it does mean that


These are just mathematical tools we use to make predictions. Assuming they represent objective reality and somehow prove an infinite number of universes doesn’t seem very scientific to me.


That's way too instrumentalist. The math also describes how atoms work and many other things. Modern physical understanding is based on those descriptions. Cosmology and nuclear physics would be useless without an understanding.

How do the predictions work if they aren't in some way modeling the way reality is? Why does the technology based on them work? Instrumentalism gives no answers to those questions. Science is about understanding the world, and using that to make predictions.


They work because they use human logic, which has evolved to understand the universe just enough to “work,” and where that logic doesn’t work we use probabilities to ensure we’re only calculating the odds and deriving from there. You can devise probabilistic systems of equations to describe stock prices and account for when you could be wrong, but that does not mean you are describing some objective set of equations the market emanates from.

It working doesn’t mean it in any way reflects some objective truth (if such a thing even exists.) It just breaks things down into small enough parts that you can then model those parts and make predictions. This does not mean those small parts are “real,” just that they are convenient ways to describe the flowing of energy for our purposes.

> atoms work and many other things

Atoms are still just a human created mathematical abstraction we use to describe and make predictions about matter. All there really is is energy and atoms are just a mathematical tool we use to study energy.

This is quite literally the oldest debate in natural science, and it seems like one half of the debate just decided they’re right a few centuries ago and stopped caring about the fact that it’s not something they have ever proved (or can prove).

They’re tools, don’t get me wrong they are very useful ones, but the second you start pretending they’re anything else or that your interpretation of them is how reality really works you become just another religious zealot and that goes against everything science is supposed to be about.


> Atoms are still just a human created mathematical abstraction we use to describe and make predictions about matter. All there really is is energy and atoms are just a mathematical tool we use to study energy.

We can see atoms with electron microscopes. A mathematical tool to study energy doesn't say anything. A tool describing how chemistry arises from atomic structure does.

> This is quite literally the oldest debate in natural science and it seems like one half of the debate just decided they’re right a few centuries ago and stopped caring about the fact that’s not something they have (or can) ever proved.

Not proved, but imagine using this sort of argument against evolution being a true account of life's history on Earth. Of course it won't get everything right, but in general there is simply no other way to explain how life forms changed over time. Similarly, there's no other way to understand microscopic physics than quantum mechanics. Maybe a more complete theory combining gravity and QM would be more true, just like Newtonian physics was incomplete and superseded by relativity.


> This is quite literally the oldest debate in natural science, and it seems like one half of the debate just decided they’re right a few centuries ago and stopped caring about the fact that it’s not something they have ever proved (or can prove).

Yet earlier you asserted one side of the argument without proving it nor even acknowledging that there is a debate


I mean, that's obviously a huge matter of debate, but most of us physicists at least hope that our theories reflect objective reality in some way. Also, there are of course explicitly ontological aspects of quantum mechanics, such as PBR theorem and Bell's inequalities


Because that's what quantum mechanics in its purest form tells us. Avoiding the many worlds requires tagging on extra stuff not in the equations.


Quantum mechanics tells us that, in order to predict the outcome of a measurement, we have to compute a specific probability based on the amplitude of the wavefunction.

We can explain this probability as some kind of collapse, or we can explain it as some measure of the number of observers making the measurement in parallel "worlds". Neither is inherently closer to the math.
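Numerically, that probability rule is just squared amplitudes; a minimal sketch with an arbitrary, made-up two-state system:

```python
import numpy as np

# Born rule sketch: measurement probabilities are the squared
# magnitudes of the state's amplitudes in the measurement basis.
# The state below is an illustrative choice, not anything physical.
psi = np.array([3 / 5, 4j / 5])      # normalized: |3/5|^2 + |4/5|^2 = 1
probs = np.abs(psi) ** 2             # p_i = |<i|psi>|^2
print(probs)                         # [0.36 0.64]
assert np.isclose(probs.sum(), 1.0)  # total probability is 1
```

Both the collapse and many-worlds readings agree on these numbers; they only disagree about what the numbers are numbers *of*.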


> Quantum mechanics tells us that, in order to predict the outcome of a measurement, we have to compute a specific probability based on the amplitude of the wavefunction

Even this is already wading into interpretational waters. The math says nothing about whether a given POVM should be thought of as a measurement or an interaction (or both, or neither).


Sure, it's not part of the math, but it is part of the physics. There is a very clear distinction in the theory between "interactions", governed entirely by the Schrodinger equation (or its relativistic versions, or QFT), and a "measurement", where you have to apply the Born rule not only to predict what the measurement device will show, but also to correctly predict what will happen in a subsequent experiment on the same particle.

Without the Born rule, QM makes nonsensical predictions that don't explain what happens. Decoherence helps explain certain things directly from the math without appealing to the extra Born rule, but not fully, and not quantitatively (not the specific probabilities).


> Sure, it's not part of the math, but it is part of the physics.

"the physics" here is a specific interpretation

> Decoherence helps explain certain things directly from the math without appealing to the extra Born rule, but not fully, and not quantitatively (not the specific probabilities).

You can derive the Born rule using decoherence


> "the physics" here is a specific interpretation

No, "the physics" here is the way we are supposed to tie the math to physical observations. You can't explain all of our experiments if you don't make a distinction between interactions which preserve quantum properties and measurements which lead to classical observations. For a specific example, you can't explain the double-slit experiment with a detector at one slit if you don't make this distinction.

> You can derive the Born rule using decoherence

No, you can't. I have actively looked for such derivations and found none (at least none that are considered credible), and there is literature that explicitly says that this hasn't been convincingly done, e.g. [0]:

> My main conclusion is that there is no way to derive the Born rule without additional assumptions. It is true both in the framework of collapse theories and, more surprisingly, in the framework of the MWI. The main open question is not the validity of various proofs, but what are the most natural assumptions we should add for proving the Born rule.

[0] https://pages.jh.edu/rrynasi1/HealeySeminar/literature/Vaidm...


> the way we are supposed to tie the math to physical observations

Again, that's a matter of interpretation. The different quantum interpretations say different things here.

> No, you can't.

Yes, you can. Look up Quantum Darwinism.

PS. throwing "nuh uhs" back to a professional in the field is not appropriate. Get your respect in order.


> Again, that's a matter of interpretation. The different quantum interpretations say different things here.

In my view, a physical theory has two parts: a mathematical model, and a series of rules relating the model to physical experiments. For example, the theory of Newtonian mechanics isn't just F=m×a; it also includes rules for how to measure the mass of an object, its speed, etc.

Without these extra rules, you just have a mathematical theory, not a physical theory.

> Yes, you can. Look up Quantum Darwinism.

I did, and as far as I understand (e.g. from here [0]), it is an exciting candidate for explaining the Born rule based only on the other equations, but it's not fully accepted and it has its own limitations so far. Still, very interesting, thanks for pointing me to it.

> PS. throwing "nuh uhs" back to a professional in the field is not appropriate. Get your respect in order.

I did not throw a "nuh uh", I explained what I looked for and why I came to my conclusion, including a citation from a professor in the field. I think that's about as far from disrespecting you as I can get, while still disagreeing. Not to mention, I didn't know you were a professional in the field.

[0] https://www.quantamagazine.org/quantum-darwinism-an-idea-to-...


Why should we not just take it as a probability based abstraction for something we don’t have a true understanding of and move on? Anything else seems like weird theology to me.


Sure, it's perfectly valid to say we don't understand it - but the ambition of physics is to actually understand things like that. So if we agree that it is something that we don't know yet, then it remains a target for new theory development or new experiments.

Physics is ultimately not about predicting the outcomes of experiments. It goes farther than that: it is about coming up with a model of the world that humans can understand. The experiments and predictions are just there to validate that the model actually corresponds to the world, but they are not an end in itself.

And all of this speculation about what the Born postulate actually means has in fact led to new experiments and new theories. If we had only ignored it, we probably wouldn't have discovered Bell's theorem, and we probably wouldn't have discovered decoherence.


1) Because physics has had an ontology, what there really is, since like Aristotle. Even if ontology changes, posing one has worked for roughly 2500 years to partially guide science. It really does inform all kinds of experiments, even thought experiments. BUT, you might be right the trade off you propose is more worth it

2) Reichenbach's principle says any correlation must have a cause. Even without an idea of an ontology as in 1), this looser principle is hard to give up as well. It explains the mystery behind Simpson's paradox, for example, and why, without the causal model behind correlations, you could easily take the wrong drug for you specifically.

Again though, these are guiding principles and trade offs I am curious to see how we could progress without them


I really think it’s best to leave such metaphysical speculation out of it (or rather leave that to metaphysics.) Sure it could be right, and reality is definitely weird as hell, but it just seems like theology to me when you start pushing such things as fact when nothing says it is.

Don’t get me wrong I have weird beliefs about the universe too but I would always stress they’re my weird beliefs and not something we’ve proved. The many worlds interpretation is pretty mainstream at this point and it leads the masses to believe it’s either true or that scientists are batshit crazy. Not a good look imo.


As opposed to tagging on a literally incalculable number of extra universes for every possible quantum interaction.


The basic misunderstanding is that "things" exist as particles. The Planck limit, IMHO, proves that everything is a wave, and the Planck length is the smallest wavelength, or resolution, at which any particle (certainty) can exist.

Fundamentally, all matter is uncertain.

The problem physicists have is that they are trying to align classical physics with quantum physics when it is quantum physics all the way down. You cannot use quantum physics to describe classical physics because classical physics is only our minds collapsing the quantum universe so we can, well, exist.

To me, gravity is a side effect of quantifying. A byproduct of our brains collapsing wave functions.

A wave is a probability of a particle. Our minds create certainty out of a probability; that is, our minds collapse the wave function. We need to be certain of gravity, since it is a risk to our survival if we are not.

Space-time itself is a collapse of the quantum field, and it is through this collapse that gravity is "observed" or "felt". Without particles, no gravity; without wave collapse, no particles. Therefore wave collapse creates gravity.


I posted a response to this. It went through, but the board said I was posting too quickly. But dang about making it visible.


I wish any of you with the downvotes would at least have a conversation with me...


There is absolutely no evidence that the Planck length is anything like the smallest possible length or wavelength. It's "just" a length scale that pops out when you combine several physical constants. That combination of constants means that at the Planck scale quantum effects and gravitational effects are likely to both be relevant, which is cool, but other than that it doesn't really have any fundamental meaning.


I’m giving you a hypothesis about the reason why the Planck length exists as a constant.

Below Planck length there is no length, time, mass, or temperature.

All of those things are given “reality” by our mental process.


It's not a constant, any more than the meter or the foot. It's just part of a unit system that happens to be convenient for carrying out certain calculations.


I have questions about your understanding. Or maybe it is a problem with my communication. There is a Planck constant.

https://en.m.wikipedia.org/wiki/Planck_constant


Planck's constant is not the Planck length.


The Planck length ℓ_P is defined as ℓ_P = √(ℏG/c³) ≈ 1.616255(18) × 10⁻³⁵ m, where c is the speed of light, G is the gravitational constant, and ℏ is the reduced Planck constant.

I know they’re not the same. But they obviously depend on each other, no?
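They do: the Planck length is built from ℏ (along with G and c). A quick numerical check of that dependence, using CODATA constant values (not anything from the thread):

```python
import math

# Planck length from the defining combination of constants.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)
print(l_p)  # ~1.616e-35 m
```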


> But they obviously depend on each other, no?

Sure, but so does twice the Planck length. Also half the Planck length. Also 86.173491 times the Planck length. And so on.


The Planck length and Planck's constant are both named after Max Planck. That doesn't mean they're the same thing. The Max Planck Institutes are also named after Planck, but they are neither lengths nor constants.


That is certainly a hypothesis, but there is no evidence for it.


Hey, I’m no Einstein, but even he had hypotheses that took decades before evidence showed they were true.

I don’t know how someone is supposed to come up with evidence before a hypothesis, but if you have a new understanding of the scientific process Please let me know.


> I don’t know how someone is supposed to come up with evidence before a hypothesis

Sure, this is usually how science works. A good example is the famous Michelson–Morley experiment which was performed in 1887 and was a key piece of evidence directly inspiring Einstein's theory of special relativity in 1905.

The way science usually progresses is that there is some piece of evidence which current theories can't explain or deal with. That prompts the development of new theories, which make new predictions, which are then tested in further experiments.


What a ridiculous thing to say. The Michelson-Morley experiment was an experiment based on the hypothesis that something called the luminiferous aether existed.

So someone had a hypothesis, and they tested it. I have a hypothesis, go out and test it if you wish.


> our minds collapse the wave function

While I know there are many sources available of people saying this exact thing, this is a common misunderstanding of the Copenhagen interpretation of quantum physics. There is nothing special about our minds.


First, thank you for responding.

I’m not saying there’s anything special about our mind, it’s just a function of our mind.

And is it misinterpretation or is it my interpretation that you disagree with?

How can one rationalize the fact that a photon can be a wave and a particle at the same time?

I do not believe, for example, that the moon exists only because I look at it. I think it exists in a probabilistic state, and that probabilistic state turns to certainty when I look at it. But that is only my certainty. How do I share my mind’s interpretation of what the moon looks like with someone else, exactly? I can’t; it’s impossible. The moon both exists and does not exist at the same time. And when I say it does not exist, I mean it does not exist in any sense of certainty. Hence the moon is a wave function. The fact that my brain and your brain can collapse the wave function with an apparent 100% similarity provides no proof that the moon exists in a single state outside of our own minds.


> How can one rationalize the fact that a photon can be a wave and a particle at the same time?

Photons are not particles or waves. They are similar to classical particles in certain respects and similar to classical waves in other respects, but what they're really like is a localized excitation of the electromagnetic field. Because that's what they are.

Physics by analogy simply does not work. If you understand the math, analogies can occasionally be a helpful signpost. If you don't, they're worse than useless.


Depends on the interpretation. Bell thought the most straightforward conclusion was that quantum systems are both quantum waves and particles, where the wave guides the particle (Pilot Wave/Bohmian Mechanics). In MWI, particles are superpositions of every possible quantum value. We can understand that as branching worlds thanks to entanglement and decoherence.

Saying we can't understand the mathematical formalism is a kind of Copenhagen interpretation.


It was Wigner who speculated that consciousness causes collapse. But Niels Bohr had a Kantian approach to science, so he did think it mattered how our minds understood the world, and what we can and can't say about physical formalisms.


I'm curious why you think the commenter is interpreting according to the Copenhagen interpretation?

It sounds more like they are working within a view similar to the Von Neumann–Wigner interpretation.


Yes, and I feel that it will be proved to be right.

But I will disagree with them on one point, that I don’t think that consciousness is a thing that causes wave collapse, rather, the brain has a function that calculates the highest probability of a particle, and provides that to consciousness.



