There's something distinctly unsatisfying about this sort of paper, because the number of potential solutions to the problem they're trying to solve is in principle vast, and you can arrive at any number of them if you laboriously work backwards and fit your equations to the data. Without very rigorous efforts towards empirical validation, in novel ways if required, this sort of thing is just another wholly speculative theory to add to the large and growing pile.
The interesting thing, as far as I'm concerned, is the size and shape of the answer space. How many theoretical solutions can be coaxed to fit the cosmological data? It can't be infinite, but it seems as though it's a rather large number.
I agree with your point in general, but in this case specifically I strongly disagree.
The idea worked out by Oppenheim and his colleagues/students is a fairly concrete theory which aims to marry quantum mechanics and general relativity. They're working on a bunch of approaches to turn their model into testable predictions in different regimes.
They have a bunch of different papers with different ways their model could be falsified. None of them are at the level where you could falsify their model with current technology but they're seriously working on the theory required to make their ideas potentially testable.
I went to a talk a couple of years ago at a conference by Oppenheim and he is very clear that what they're looking for is to make concrete predictions which can be tested.
Edit: you can see here for another paper where they try to push their theory towards producing some testable predictions
To be fair, everything that astrophysics comes up with will be massively underconstrained. For all intents and purposes we work with a single datapoint and try to infer dynamics.
Exactly. A few "data points", i.e. unexplained effects, and then Occam's razor, parsimony, etc.
I really like the parsimony of this paper. If a very small detail of space-time, i.e. stochastic behavior at the Planck scale, results in an emergent effect that explains something on the cosmological scale, that is explaining a lot with a little.
Stochastic might not even mean random or non-deterministic (as I understand it - and as the paper suggests, it's non-quantum); it could just be chaotic. Which would be very parsimonious.
So many seemingly different effects we observe are the result of very simple things, under stochastic, statistical, entropy, thermodynamic type situations.
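As a toy illustration of that last point (just a generic sketch, nothing to do with the paper's actual model): pile up enough independent ±1 micro-steps and a clean macroscopic regularity, diffusive sqrt(N) spreading, falls out of pure randomness.

    import random, statistics

    def walk(n_steps):
        # one "particle": the sum of n independent +/-1 micro-steps
        return sum(random.choice((-1, 1)) for _ in range(n_steps))

    random.seed(0)
    for n in (100, 400, 1600):
        spread = statistics.stdev(walk(n) for _ in range(2000))
        # the spread grows like sqrt(n): a simple, predictable macroscopic
        # law emerging from nothing but microscopic randomness
        print(n, round(spread, 1), round(n ** 0.5, 1))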
Correct me if I am missing something, but it seems like the speculative bit was Milgrom noticing that coincidence in 1983, and now they are trying to explore the consequences of assuming that it's not just a coincidence... hopefully with testable predictions coming out of this ?
Coincidentally (or not) those also seem to be the words spoken during computer programming right before you figure out something you never understood before.
I'm far from an expert, and I'm still struggling my way through this paper, but from what I can see they are only introducing one new free parameter, the same number as ΛCDM.
I think you're missing the point: you can say this about any theory. At least ones that attempt to deal with something significant and difficult.
It's therefore not a very perceptive critique to say that the problem with this specific theory is that it is one of many possible ones. It's more meaningful to compare wide-aperture theories like this to their actual alternatives, rather than distrust them on principle because the space of potential hypotheses in which they carve out a specific model is so large.
Funnily, your observation could be said to be an instance of itself: in that it is an overly general appraisal unrooted in pertinent specifics, therefore mostly divorced from explanatory power - akin to what you're misjudging this theory to be.
I don't know if this theory will pan out, but it seems more interesting than you say. While different to the paper, I have the following related thoughts to offer: I've often thought gravity makes sense, not as a fundamental force, but as an emergent metric, rooted in randomness (perhaps the informational content of matter, for which mass is, normally but not always, useful proxy?), similar to how temperature is merely an emergent property that we can measure that arises from (and reflects) the microstates/ensembles of gigantic numbers of particle interactions, owing to their energy content.
I wonder if a "time" or "age" factor could be needed in revised calculations of gravity to account for the plausible way in which information increases over time (if we assume, without trying to understand, that the history is recorded somewhere, somehow), and if that's why these very old and very distant galaxies bork our normal (non dark matter fudged) calculations of gravity, because they are so old that the time accrual of information is an effect which begins to come into play.
> I wonder if a "time" or "age" factor could be needed in revised calculations of gravity ...
There's a cosmological theory out there that does this, dubbed the Timescape cosmology, and you can read all about it on the theorist's university research webpage:
http://www2.phys.canterbury.ac.nz/~dlw24/
Clarkson-Bassett-Lu test of the Friedmann equation is proposed as a test of this model, using data from the recently launched Euclid satellite:
> I think you're missing the point: you can say this about any theory. At least ones that attempt to deal with something significant and difficult.
Hardly. In fact, just the opposite is true: You can say it about virtually no theories outside a niche within modern physics.
The great theories of 20th century physics were empirically validated almost immediately. Even their thought experiments were subjected to intense scrutiny and debate. Now those theories are fodder for engineers who tinker with material devices like MRI machines.
In other disciplines, e.g. biology or chemistry, it's effectively forbidden to work backwards from data and come up with a "solution" that fits but can't be validated in any way. Chemical space is vast; if you have a complex molecule, there are any number of ways retrosynthesis will work; finding one such way, without empirically validating it, is of near-zero value.
> I've often thought gravity makes sense, not as a fundamental force, but as an emergent metric, rooted in randomness (perhaps the informational content of matter, for which mass is, normally but not always, useful proxy?)
The randomness of what now?
How do you reconcile this with the frame-dragging effect? Namely, in that it shows that the distribution and movement of mass (not just the presence of mass/entropy) can influence the curvature of spacetime and thereby gravitational effects.
> I wonder if a "time" or "age" factor could be needed in revised calculations of gravity to account for the plausible way in which information increases over time
Information is degraded over time; the universe will eventually contain zero useful information content ("heat death") and will then be modellable as a homogeneous space subject to statistical fluctuation.
> How do you reconcile this with the frame-dragging effect? Namely, in that it shows that the distribution and movement of mass (not just the presence of mass/entropy) can influence the curvature of spacetime and thereby gravitational effects.
If gravity is somehow caused by something like information flow, then a rotating body has an associated rotating flow of information related to the rotating atoms/microscopic degrees of freedom observable at distance. This might have an effect making it distinct from a non-rotating body.
I believe the gradient of a flow field is a tensor, giving gravity tensorial effects as expected.
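For what it's worth, the last sentence is standard fluid kinematics rather than anything gravity-specific: the gradient of a velocity field v_i(x) is a rank-2 tensor, and it splits into a symmetric strain-rate part and an antisymmetric rotation part,

    \partial_j v_i = S_{ij} + \Omega_{ij}, \qquad
    S_{ij} = \tfrac{1}{2}(\partial_j v_i + \partial_i v_j), \qquad
    \Omega_{ij} = \tfrac{1}{2}(\partial_j v_i - \partial_i v_j)

so if one did take an "information flow" picture seriously, a rotating source would contribute through the antisymmetric part, which is at least the right kind of object to compare against frame-dragging. (That's only a dimensional-analysis-level observation, not a derivation.)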
> Hardly. In fact, just the opposite is true: You can say it about virtually no theories outside a niche within modern physics.
Again I think you're missing the point. There's many ways in which you can construct theories on any topic. They're all just models: which can be seen by how these models often evolve over time, and yet were at each time seen as fairly valid. This is true in all the sciences, and is inherent to science. You make like that's a criticism of modern physics only, but I think it's more an effect of the point below about measurement.
> In other disciplines, e.g. biology or chemistry, it's effectively forbidden to work backwards from data and come up with a "solution" that fits but can't be validated in any way.
in any way: I think this is a mischaracterization. Fitting with the data is a validation in some way. You seek orthogonal validation via new testable predictions, which is fair enough, but it's also fair to consider the measurement problem again (mentioned below), and the process of development wherein advances may be made before testing is figured out, if the domain is of sufficient complexity.
> Chemical space is vast; if you have a complex molecule, there are any number of ways retrosynthesis will work; finding one such way, without empirically validating it, is of near-zero value.
That's the thing. These synths look like they work, on paper. The groups and charges move around right, but they don't actually work, which contradicts your point that you can't devise a large number of plausible alternatives in chem. The same is true of bio and metabolic pathways.
In practice, often we don't know how to test something at the time we sketch it out. I guess that's the consequence of being at the frontier: there's much unknown. So the analogy between these disciplines and physics fits, but perhaps not in the way you intended.
But on another level, the analogy doesn't work, as finding synths is more an eng problem. You don't have to understand why a synth works, for it to work...which is often the case. And the theories about why that was allowed by enthalpy or catalysis or whatever are often evolved over time. Whereas theory building is more focused on having a why that might bring insight.
I think your main problem here is the false dichotomy you see with "work backwards from data to solve the constraints" and "this doesn't validate it in any way". In fact, in the world of theories, fitting the data, is a pretty fucking great validation hahaha! :) But I do get the sense you are expressing about the futility, which I think is real, but just not the whole picture.
> The great theories of 20th century physics were empirically validated almost immediately. Even their thought experiments were subjected to intense scrutiny and debate. Now those theories are fodder for engineers who tinker with material devices like MRI machines.
almost immediately: is inaccurate. Theories can make many predictions, and some of GR and SR were only validated recently.
I guess you can say that our measuring ability has not caught up to our theorizing or mathematical ability. Which is regrettable, but not a condemnation of the theories, as you seem to think.
I get the impression you're advancing the view that modern physics is in the business of producing time-wasting untestable theories. Which is a fair enough take. I just think there's more nuance there, and you risk maligning a good theory with this broad stroke.
> How do you reconcile this with the frame-dragging effect? Namely, in that it shows that the distribution and movement of mass (not just the presence of mass/entropy) can influence the curvature of spacetime and thereby gravitational effects.
I don't see that it contradicts it, so it may not be the best counter example. The movement or spin of the mass that induces frame-dragging can be considered information as well.
> Information is degraded over time; the universe will eventually contain zero useful information content ("heat death") and will then be modellable as a homogeneous space subject to statistical fluctuation.
I know that's the model, but what if it's inaccurate/only a part of the picture? What if the information is elsewhere and we don't know how to measure it? There's precedent for that: there's information in quantum states that was not known to exist or be measurable before quantum theory.
> I think your main problem here is the false dichotomy you see with "work backwards from data to solve the constraints" and "this doesn't validate it in any way". In fact, in the world of theories, fitting the data, is a pretty fucking great validation hahaha! :) But I do get the sense you are expressing about the futility, which I think is real, but just not the whole picture.
I think your main problem is that you only read half of the comment you responded to.
It has two parts:
> There's something distinctly unsatisfying about this sort of paper, because the number of potential solutions to the problem they're trying to solve is in principle vast, and you can arrive at any number of them if you laboriously work backwards and fit your equations to the data.
Which you've adequately responded to. But it also has:
> Without very rigorous efforts towards empirical validation, in novel ways if required, this sort of thing is just another wholly speculative theory to add to the large and growing pile.
Which you haven't.
Taken together, the parts are obviously talking about a class of theory that is fit to data AND does not bother checking the theory against other data or making predictions in reality.
Your examples of theories that evolve don't seem relevant, as they're evolving because people made efforts to test the theory out and account for unexpected results (or, at the very least, look at datasets that weren't the ones that the theory was specifically designed to account for).
> Without some specific idea of "elsewhere", that idea is in the "not even wrong" category.
Which idea?
Also, based on what is to-be-specified idea in the not even wrong category? [By which I guess you mean you think it's so wrong that you cannot even dignify telling me why it's wrong? haha! :) Novel sly-arrogant/humble-brag appeal-to-authority, I'll give you that! Because your understanding is so above everyone else's you couldn't explain it or we wouldn't get it? Haha! :)]
Also, finally, pray tell what do you mean by "elsewhere" ? Hahahaha! :)
Which idea? You seem to have heavily edited your post after I replied; I remember it as being much shorter. I specifically meant
> I know that's the model, but what if it's inaccurate/only a part of the picture? What if the information is elsewhere and we don't know how to measure it?
"What if the information is elsewhere"... well, there's a lot of possible elsewheres. Without some kind of specifics, that's "not even wrong", not because I'm so much smarter than you or so much more of an expert, but because there's nothing there to let anyone be able to determine whether it's right or wrong.
By "elsewhere", I mean the word that you used. What did you mean by that word? Without a specific, you've got something that sounds like stoner physics: "Dude, what if, like, the information is, like, still there, man? What if it just, like, went elsewhere, man?" That's not something that we intelligently interact with. You introduced the word; what did you mean?
But... What if dark matter is the "elsewhere"? (No, I don't have any idea how that would work. Nor do I necessarily think that this is a sane idea. But it's a candidate for "elsewhere" that kind of seems to fit with the course of the discussion.)
Yes, I edited it but I hadn’t seen your reply at that time. I didn’t edit to confuse you! Just to add more details in rebuttal of the points of the other commenter above. Well, I’ll read your answer and get back to you in a bit. Thanks for replying! I wondered if you were an alt of adepts and got mad at me! Haha! :)
Yeah I get your point now. I did a big edit to add more points and forgot I had used that word. So to address your points:
> but because there's nothing there to let anyone be able to determine whether it's right or wrong.
Oh, I see what you're saying. Well I think it's more like...in this theory, we can leave the 'where is the information' question reasonably open right now. We don't have to pin it down. I'm OK with that.
It's the other things that are important to me for now. Information, time. It seems you didn't get those?
I think there's enough there if you read the thread. If you just want to take one thing out of context as you're too lazy to read back, then blame me when it's not easy for you to explain, you're just picking on people. Blaming them for your own shit, that's not good.
> stoner physics
That's not very nice, tho. Like, why do you have to denigrate it? I get that it sounds that way to you, but that's not how it is. You find there what you bring to it, and you could just as easily engage intelligently with the elsewhere if you respected me by thinking: Okay, it's an unknown for now, I respect that.
Or even better, Wow that's interesting. Let me think about that, maybe I can propose some ideas. That would have been constructive. It seems your idea of "intelligently engage" is to have everything handed to you, and if it's not, take out your own laziness to think, by abusing other people. That's not good, AnimalMuppet.
It's you who didn't engage intelligently with this. That's not my fault, that's on you. Why did you have to take the conversation in this direction? You could have simply made a good contribution.
With someone who has already disrespected my views, that I shared openly and vulnerably, how do you think I'm going to feel exposing more of my ideas to someone who is already ridiculing them? You're not a very nice person, are you?
AnimalMuppet. I'm guessing from the capitalization, you're not a guy. You capitalize to make yourself louder because you don't feel heard here, in this "culture", but you come here because you think you should be heard. But you can't get over that pain, so if you see someone or something you think is a little bit weak, you want to unjustifiably take out your pain on them, to compensate for how you're not getting the respect here, in this "culture", that you think you deserve.
I feel sorry for you that you didn't have a great experience here, but don't take it out on random people like me. I don't have anything to do with you. In fact you shouldn't be taking it out on anyone, you should just be dealin' with that, yourself, rather than being a bitch. Simple, don't you think? Haha! :)
> Again I think you're missing the point. There's many ways in which you can construct theories on any topic. They're all just models: which can be seen by how these models often evolve over time, and yet were at each time seen as fairly valid. This is true in all the sciences, and is inherent to science. You make like that's a criticism of modern physics only, but I think it's more an effect of the point below about measurement.
In science, as opposed to theology, the models in themselves are of no use until they're validated.
Hence the motto of the Royal Society: Nullius in verba. One expects more than words -- one expects experimental showings, or the reasonable expectation of an experimental showing in the very near future. 20th century physics, in the main, had this. It is the cornerstone of all other disciplines.
> in any way: I think this is a mischaracterization. Fitting with the data is a validation in some way.
"In some way" is doing a lot of work there. How many potential fittings from cosmological data are there? 1000? 10^10? 10^500? Do you know?
> That's the thing. These synths look like they work, on paper. The groups and charges move around right, but they don't actually work, which contradicts your point that you can't devise a large number of plausible alternatives in chem.
What are you talking about? You can come up with any number of theoretical retrosyntheses that do work, but are unwieldy, impractical, or can't be validated for any number of reasons -- lack of reagents or intermediates, etc.
You can derive any number of plausible processes. Nobody does that, though, because one is expected to do more -- to come up with something that runs, and ideally to run it and report how it works, with yield rates and so forth.
Similarly, I don't think that the paper in OP has constructed something that runs. It is mere backwards-fitting to cosmological data. The more interesting question, as I've noted, is how many such things are possible.
That sounds very unconvincing. Afaik several galaxies have been found with more than the normal amount of dark matter, and some with less. How would these fit into ANY modified gravity theory?
This type of theory seems to only handle the common case, but the universe is full of edge cases...
If the base of this theory is some form of chaos you can always fit the distribution of the chaos to explain outliers with the probability they actually occur at.
Can you ? How is this different from, say, anything involving pressure and temperature, which, under current theories, are statistical phenomena arising from quantum behaviour, fundamentally based on our lack of information about what is going on (aka entropy) ?
I think technically you can. Although it's just moving the question to "why is the chaos distributed like that?".
The current explanation for the existence of galaxies is that they are quantum fluctuations that grew large. So apparently fluctuations can explain everything if you do enough hand-waving in between. I don't think any particular quality of galaxies as we see them today can be traced to a specific quantum property. But that doesn't stop us from believing in that explanation.
I don't think entropy can be defined as your lack of knowledge, at least straight out. You have to be careful as under strict Bayesianism knowledge can't ever decrease, implying that entropy is monotonically non-increasing, and that's clearly wrong.
Bayesianism is not wrong (it more or less can't be), the main problem is exactly what you mean by "knowing" and "entropy". Obviously, an embedded agent knowing from within physics is limited, but that's not a problem with theoretical Bayesianism, and it's not a problem if you're dealing with cards.
Wrong when applied in the general context of fundamental physics ?? (See also how entropy can often be approximated as an objective property of a system, with its density being intensive, but in the general case that's wrong.)
Speaking of, I'm somewhat puzzled, you said that "under strict Bayesianism knowledge can't ever decrease", but how does it then deal with something as simple as card shuffling ??
Depending on how you want to do it, the hypotheses that get probabilities can be represented as computer programs that predict the input the agent receives. If done correctly (perfectly if you want it to be absolute) the probability-weighted sum of the calibration metrics won't ever go down.
I... am not sure that I understand what you mean ?
That after shuffling you know less about the stack of cards (that used to be at least partially revealed) is a fact that our model must follow or fail at being relevant.
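A toy version of exactly that point (my sketch, assuming the usual "entropy = the observer's missing information" reading): track an observer's probability distribution over a tiny deck; an unseen fair shuffle pushes that distribution to uniform, so its Shannon entropy jumps from 0 to log2(4!) bits, i.e. the observer's knowledge has decreased.

    import math
    from itertools import permutations

    deck = (1, 2, 3, 4)  # a tiny deck so every ordering can be enumerated

    def entropy(dist):
        # Shannon entropy in bits of {ordering: probability}
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Before: the observer has just seen the deck, so one ordering is certain.
    known = {perm: (1.0 if perm == deck else 0.0) for perm in permutations(deck)}

    # After an unseen fair shuffle: all 4! orderings are equally likely to the observer.
    shuffled = {perm: 1 / math.factorial(len(deck)) for perm in permutations(deck)}

    print(entropy(known))     # 0.0 bits: full knowledge
    print(entropy(shuffled))  # ~4.585 bits = log2(24): knowledge has gone down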
It's been hard to find out more information about this, but I did find some :
Same thing : "And the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat)."
Though I also did find another interesting thing :
"in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense"
But that's pretty much the case in statistical physics ! A macrostate is actually NOT a state (=microstate), but a probability distribution !
"For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range."
So it looks like statistical physics is already at least part-way between frequentist and Bayesian ??
Also, this one sounds potentially interesting, but sadly, paywalled...
The Guardian headline stands in contradiction to this statement from the paper’s abstract: “We caution that a greater understanding of this effect is needed before conclusions can be drawn”.
The byline specifically says: "Exclusive: Paper by UCL professor says ‘wobbly’ space-time could instead explain expansion of universe and galactic rotation"
"Could instead explain" is not a definitive statement.
Their headline is "Controversial new theory of gravity rules out need for dark matter"
"Controversial new" tells me that this isn't settled science, nor is the Guardian trying to say otherwise. "Rules out" by itself, out of context, could imply proven science, but not within the context of the headline and byline.
Yes and also in gravitational lensing. The most famous example of this is the Bullet Cluster [1], which displays a pattern of lensing that doesn’t line up with the distribution of light (visible matter) but does line up with where we might expect dark matter to end up after a collision.
From the preprint link at the top's page 6, after Eqn (23): "While this study demonstrates that galactic rotation curves can undergo modification due to stochastic fluctuations, a phenomenon attributed to dark matter, it is important to acknowledge the existence of separate, independent evidence supporting ΛCDM. In particular, in the CMB power spectrum, in gravitational lensing, in the necessity of dark matter for structure formation, and in a varied collection of other methods used to estimate the mass in galaxies.
These now form an important set of tools with which to test [our] theory".
Other lines of evidence like the one at your link just increase the ability to test their theory (which is meant to solve some quantum gravity problems at very small length scales) at very large length scales.
They are not motivated by the desire to prove MOND is correct or that cold dark matter doesn't exist. Rather they can show that in certain restricted circumstances their theory allows for nearly-MOND-like orbits. So their theory survives a small but important hurdle imposed by our observations of nature (we observe MOND-like orbits).
Elsewhere in the comments someone asked the good question of (highly paraphrasing) whether this success is wiped out if a distribution of dark matter is added to their SdS-like universe as the generator of some of the observations normally taken as proof of ΛCDM. That question is closely related to the final sentence in the quote above, and answers are hard to guess at.
Inflation in what era? There has to have been some inflation at some point, right, otherwise how could the very hot period that's considered to be right after the start of the universe be low entropy? Sure, maybe you could say there's no big bang and time runs the other way on the other side of that low entropy state, but there has to be some sort of low entropy state.
It could be that it is actually not low entropy, and there are some other processes that explain a reduction in entropy at some scale. That is, maybe the second law of thermodynamics is not that universal.
That would be really weird. I'd be more willing to buy galaxies just coming into existence somehow in the gaps as the universe expands, as that at least drives the arrow of time consistently in the right direction.
The arrow of time can survive local entropy reduction, but probably not what you're talking about.
>"explain galactic rotation curves without needing to evoke dark matter."
I wonder if this makes it incompatible with dark matter?
If stochastic spacetime explains the rotation curves exactly without dark matter, then dark matter can't exist at the same time.
The evidence for dark matter is found in galaxy rotation curves, gravitational lensing, the cosmic microwave background, structure formation, the Bullet Cluster and other galaxy cluster collisions, baryon acoustic oscillations, Lyman-alpha forest absorption lines, etc.
There is plenty of evidence for dark matter and its existence is widely accepted. Explaining just one piece of evidence can be evidence against the new explanation if it's incompatible.
> Can dark matter and [the author's theory of gravitation] fit together...?
Nobody knows yet. The authors aren't seeking to overthrow the concordance cosmology; this paper is essentially trying to see if their theory is quickly killed by being unable to model a stable spherically symmetric galaxy in an expanding universe. Their theory predicts significant differences from the Schwarzschild-de Sitter (SdS) metric in General Relativity, with SdS serving as a proxy for an isolated galaxy. The paper in part investigates whether those differences can be "hidden" by removing the cold dark matter which otherwise would be the standard source of outer orbit anomalies.
They don't go deeper than that, and at this stage just can't: "To make it tractable analytically, we have restricted ourselves to spherically symmetric and static spacetimes". They restrict themselves in other ways too.
> The evidence for dark matter
From just after Eqn (23) in the preprint:
"While this study demonstrates that galactic rotation curves can undergo modification due to stochastic fluctuations, a phenomenon attributed to dark matter, it is important to acknowledge the existence of separate, independent evidence supporting ΛCDM. In particular, in the CMB power spectrum, in gravitational lensing, in the necessity of dark matter for structure formation, and in a varied collection of other methods used to estimate the mass in galaxies"
It doesn't explain the rotation curves of galaxies at all, and doesn't try to.
It takes a simple proxy for an isolated galaxy and sees if there are physically reasonable orbits around it. MOND-like orbits are physically reasonable (we observe them). The authors' model allows for almost MOND-like orbits.
The explanation for why these MOND-like orbits exist will vary. In the standard cosmology, it's because of a distribution of cold dark matter around the central mass. In MOND, it's because the strength of Newtonian gravitation falls off at great range from the central mass. In the theory discussed above it's because the central mass induces fluctuations in the gravitational field that grow in relevance with distance from the central mass, and in a particular range of radial distances those fluctuations are most likely to drop the orbiting body onto a closer orbit.
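For reference, the standard MOND bookkeeping behind that low-acceleration regime (textbook MOND phenomenology, not something taken from the paper): in the deep-MOND limit the effective acceleration is the geometric mean of the Newtonian acceleration and Milgrom's acceleration scale a_0, and equating it to the centripetal acceleration of a circular orbit gives

    a = \sqrt{a_N\, a_0}, \qquad a_N = \frac{G M}{r^2}, \qquad
    \frac{v^2}{r} = \sqrt{\frac{G M a_0}{r^2}} \;\Rightarrow\; v^4 = G M a_0

so the orbital speed stops depending on radius (a flat rotation curve) and scales as the fourth root of the enclosed baryonic mass. Any of the three explanations above has to land on roughly this behaviour in the regime where it is observed.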
There's a certain irony that a hundred years ago physics was split between bitterly opposed camps declaring light to be continuous or quantized, until it was finally found to be a duality of both, specifically converging around the inflection point of interactions with persistence (i.e. no quantum eraser).
So it's a bit bizarre to watch modern physics repeating the process with spacetime and the commitment to finding a 'solution' to quantum gravity, particularly when including notions of quantized particles like gravitons or dark matter that can't be directly interacted with in persistent ways (in which case, why are they expected to be quantized and not continuous?).
I know most physicists proudly state that they only concern themselves with 'how' and not 'why' - but it's refreshing to see work that clearly had the wherewithal to ask themselves 'why' regarding a number of broad assumptions about the fundamentality of quantization as a default mode and I look forward to more 'postquantum' work in the future.
I think there's a bit of language problem here. In the way physicists use the word in 2024, making gravity "quantum" means something very different to just having discrete "quanta" particles.
It means a theory with stuff like superpositions of classical states, interference, and entanglement. It also means having a theory which can consistently interact with other quantum stuff.
The standard thought experiment goes something like this. Take a particle with some mass (maybe an electron, maybe a bowling ball); general relativity tells us what its mass does to space-time around it. Now take that particle and put it in a superposition of being located at point A and point B (you can do this if the particle is quantum). What the heck is it doing to spacetime now? GR doesn't tell us, because GR can't deal with quantum particles. Naively it seems like space-time should also be in a superposition, but we tried to make theories like that and they didn't really work.
Yes, I'm familiar with the thought experiments. My favorite to date was Zych, et al Bell’s theorem for temporal order (2019).
But it still seems a bit like putting the cart ahead of the horse given the amount of resources dedicated to the search when the smallest gravitational field measurements to date (or at least as of 2021) were on a 92mg object.
And while in theory the largest mass to be put in a quantum superposition to date was 16mg, the degree of separation in the vibrational superposition is still well below an order of magnitude gap from a difference detectable with current gravitational measurement.
For a field where behavior changes in relationship to measurement and the persistence of information, there's not currently any measurable discrepancy between GR and QM. It's literally only in gedankenexperiments there's a conflict.
Maybe the difficulty in preserving coherence for larger objects relates to their inadvertently leaving measurable information in spacetime? Perhaps we'll find there's a fundamental limit on gravitational measurement that happens to be right around the limit at which we can maintain coherence.
It just seems like the kind of quest that's premature when we can't measure a conflict existing outside of our imaginations, and where the mystery will likely resolve almost immediately after we can actually measure the conflict in the first place (should we ever be able to do so).
So yes, if we end up with an entire planet in a quantum superposition near two different spaceships shooting at each other, things might get weird. But at least currently that 'if' seems to be doing a lot of heavy lifting.
I don't think it's premature when we have stuff like the model of Oppenheim et al., which appears to predict things we can observe in astronomical data.
This paper [1] also discusses some experimental tests of their model which are not too technologically insane (you don't have to put a planet in superposition). Parts of the parameter space of their model are already ruled out by stuff we already know and more can be excluded with a combination of deriving better theoretical bounds (their analysis is quite coarse to simplify things) and better experiments.
I don't think it's very meaningful to talk about any "measurable discrepancy between GR and QM" in any case - we simply don't have any predictions in many cases where both QM and GR are very relevant. General relativity doesn't tell you what happens when you put a particle in superposition - it's not that it tells you something and we expect some differences from that in reality.
It seems like this theory is proposing some entropic term on top of classical general relativity whereas Verlinde came up with a completely entropic theory of gravity. They both claim to explain dark matter.
I can imagine that wobbliness comes from chaotically passing strong gravity waves of very high wavelengths coming from outside of the observable universe and from before times of Big Bang.
There's no specific reason that forces us to believe that the Big Bang was the actual beginning of everything. It might have been a "local" highly energetic event happening in a much larger structure.
For example, try to imagine what the collision of two dense clusters of trillions of black holes, each traveling at nearly light speed, would look like from the point of view of the dust swirling in between and around them. I can't imagine it would be very different from what we observe in our universe.
Maybe? I don't know. I imagine we don't have any good ways to observe gravitational waves, even fairly strong ones with periods of millennia or even centuries, which are still very short times on a galactic scale.
And apart from that, I think that we can only look at CMB and conclude that it's a bit wobbly. Must be quantum fluctuations, sure, why not, but is it only the quantum fluctuations? Or maybe the spacetime between us and CMB source is a bit wobbly too?
Another thing is the large scale structure of our universe. Visualizations look like foam. Plenty of chaos that wobbliness can safely compose into.
I think the work of those scientists is very important because it might allow us to pinpoint how strong the distortions of spacetime would need to be to explain what we see, and maybe we could narrow down the range of frequencies, which might give us ideas for how to look for them.
Simplistic and grandiose assumptions make our current best model of the universe a bit restrictive. To the point that we are starting to find direct counterexamples for our theories derived from it. Mature galaxies way too old. Megastructures way too large. CMB fluctuations not exactly fitting the best theoretical models. Irreconcilable differences between the Hubble constant measured from the CMB and that measured from galaxies.
I think accepting a bit of chaos beyond what we currently believe is an inevitable way out.
> I can imagine that wobbliness comes from chaotically passing strong gravity waves of very high wavelengths coming from outside of the observable universe and from before times of Big Bang.
Could gravitational waves from outside the observable universe actually reach us? I thought the same limitations apply as with any other information.
Why wouldn't they be able to reach us if they started traveling in our direction before the event of the Big Bang?
Even though the spacetime expands, if they started their journey early enough they might have been arbitrarily close to us at the moment spacetime within our observable universe started expanding.
For example, the front of some gravitational wave originating from outside could, near the time of the Big Bang, have been as close to us as the matter that eventually formed the Andromeda galaxy. As waves travel at the speed of light, it would have passed us a very long time ago.
The limitations you speak of are calculated with the assumption that both all the matter in the whole universe and the spacetime itself started at the moment of the Big Bang.
If there was matter outside our bubble of observable universe, and time before the Big Bang, those limitations are not valid for anything originating there.
Is that the "postquantum" bit? i.e. not that spacetime is quantized as such, but that it's not a uniform continuous medium at all scales, that classical mechanics would assume.
The authors asked themselves how their quantum gravity theory differs from General Relativity, and whether the successes of General Relativity in astrophysical settings would be fatal if their theory has strong differences, and that's the basis for this paper. The tl;dr is that their theory predicts different trajectories outside of large central masses, but that might not conflict with evidence from galactic-dynamics astronomy.
This is the second paper released in the past few days by the University College London Oppenheim group. It's a preliminary investigation of the longer length scale features of their classical stochastic theory. The central question is how its version of Schwarzschild-de Sitter (SdS) differs from standard General Relativity.
The first paper, and I think the more interesting one, is about the short length scale aspects of their asymptotically free theory, in which the gravitational interaction weakens as the distance between interacting sources decreases. The asymptotic freedom means the theory is amenable to renormalization, unlike perturbative quantum gravity and a number of other approaches. That paper is at <https://arxiv.org/abs/2402.17844>. Note that they do not know how to make the gravitational part quantum mechanical without introducing problems (i.e., it is haunted by "bad" ghosts in the sense of <https://en.wikipedia.org/wiki/Ghost_(physics)>); their classical and stochastic gravitational sector is ghost-free (a point also made at the end of Appendix A in the large-scale paper), and it is reasonable for them to believe that could be good enough that it's worth continuing to investigate what the theory predicts and how its parameters are set.
The second paper was motivated by the first: "The theory was not developed to explain dark matter, but rather, to reconcile quantum theory with gravity. However, it was [noted] that diffusion in the metric could result in stronger gravitational fields when one might otherwise expect none to be present, and that this raised the possibility that gravitational diffusion may explain galactic rotation curves".
That MOND-like effects might arise in their approach to the problem of small-scale quantum gravity is at least interesting. It was not the starting point.
Moreover, they did not start with the idea of modifying General Relativity to get rid of the need for (some or all) cold dark matter. As they say: "While this study demonstrates that galactic rotation curves can undergo modification due to stochastic fluctuations, a phenomenon attributed to dark matter, it is important to acknowledge the existence of separate, independent evidence supporting ΛCDM. In particular, in the CMB power spectrum, in gravitational lensing, in the necessity of dark matter for structure formation, and in a varied collection of other methods used to estimate the mass in galaxies."
> Now explain the Bullet Cluster
This paper does not seek to do so. "To make it tractable analytically, we have restricted ourselves to spherically symmetric and static spacetimes, with metrics of the form of Eqs. (17)." Eqn (17) sets out an adapted Schwarzschild-de Sitter spacetime, and the paper leans on an argument that Birkhoff's theorem applies (in particular that their model spacetime is stable against certain perturbations, notably those concentric upon the source mass). There is further detail in Appendix B.
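For readers without the preprint open, the unmodified GR Schwarzschild-de Sitter line element that their Eq. (17) adapts is the standard

    ds^2 = -f(r)\, c^2 dt^2 + f(r)^{-1}\, dr^2 + r^2\, d\Omega^2, \qquad
    f(r) = 1 - \frac{2 G M}{c^2 r} - \frac{\Lambda r^2}{3}

i.e. a single point mass M plus a cosmological constant Λ, with the Schwarzschild term dominating near the mass and the de Sitter term at large r. (Their adapted, stochastic version differs; see their Eq. (17) for the exact form.)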
Studying this restricted model, the de Sitter expansion of the spacetime and MOND-like anomalous Kepler orbits at some remove from the Schwarzschild central mass are in their theory driven by entropic forces generated by the fluctuations in the gravitational field of the central mass (and they do a good job in Appendix D explaining this).
In GR's Schwarzschild-de Sitter the free-fall trajectories of test particles around the central mass are totally determined by the mass; the gravitational field doesn't fluctuate. The (Boltzmann) gravitational entropy of the region outside the central mass is everywhere very high.
In GR-SdS we can consider adaptations where with M=const. we turn the pointlike central mass into a spherically symmetric shell, or a concentric set of such shells, or even a ball of fluid, or a ball of dust, or a ball of stars and other galactic matter. None of these symmetry-preserving adaptations changes the free-fall trajectories of test particles outside the outer surface, or the gravitational entropy at any outside point.
In the author's theory, the spacetime is stochastic. It fluctuates. Close to the central mass fluctuations are unnoticeably small; the gravitational entropy is very low. Far from the central mass the gravitational entropy is very high, and gravitational fluctuations are noticeable. A sort of thermodynamics leads to a diffusive flow outwards from the central mass, from the low entropy near there to the high entropy at increasing radial distance. This diffusion is carefully constructed so that the outwards flow is only really appreciable at large-scale distances. The effect is that large-radius orbits are statistically pulled inwards by something describable as stronger gravity at larger radiuses (see around Eqn (21)). This is an "entropic force", very roughly analogous to squashing a sponge ball in your hand then releasing the pressure and watching the sponge ball expand, where the material of the sponge represents the gravitational field.
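To unpack "entropic force" with the standard textbook example (a generic illustration of the concept, not the paper's actual mechanism): at temperature T a purely entropic free energy gives a force F = T ∂S/∂x, and for an ideal polymer chain of N segments of length b stretched to end-to-end extension x,

    F = T\,\frac{\partial S}{\partial x} = -\frac{3 k_B T}{N b^2}\, x

a spring-like restoring pull that exists only because there are vastly more crumpled configurations than stretched ones; the sponge-ball picture above is the same idea, with the gravitational field playing the role of the sponge.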
Their stochastic fluctuations are still generated by the spherically-symmetric central mass. These fluctuations break the spherical symmetry of the outside metric. Consequently they have to do some work to make the outside metric look appropriately Schwarzschild-like in their "diffusion regime", and to keep that stable against the stochastic perturbations.
The authors contend that with reasonable choices of parameters, and restricted to static spherical symmetry of the central mass (and no additional dynamics), this effect comes close to duplicating MOND's low-acceleration regime.
They don't go into anything like a backreaction upon the Schwarzschild metric by large fluctuations.
(They do have an idea about how to get the de Sitter trajectories though, but that doesn't fit very naturally into this comment, which is already long.)
> Bullet cluster
The authors know full well that the metric for a gravitationally bound cluster of galaxies isn't well-represented by their choice of SdS-like metric. A galaxy cluster is too lumpy for the Schwarzschild part.
Two gravitationally bound galaxy clusters having passed through each other (trailing collided gas and dust, and tidally stripped stars and other matter) is even less like Schwarzschild. This is because SdS solutions of the Einstein Field Equations do not linearly superpose. So their metric is a poor description of any sort of "close call" interaction between galaxies or galaxy clusters, even if the individual components are "close enough" to Schwarzschild from the perspective of an observer at sufficiently large (as in cosmologically large) distances. They do not (and within this initial paper should not really be expected to) offer a more suitable metric. I'm sure they'd love to look into things like that though.
The fact that useful solutions of General Relativity do not superpose linearly is a problem when asking how astrophysical predictions differ in most theories that preserve the equivalence principle (this one does; it's a metric theory of gravitation). As the replacements for the Einstein Field Equations lose symmetries (sphericity, staticity) they tend to become analytically intractable, and non-numerical approximations become unreliable.
The authors -- imho in a strikingly principled way -- call attention to various difficulties in using this work to describe astrophysical systems, particularly from the middle of the fifth page of the PDF.
They are not obviously worse off than the Verlinde programme of emergent-entropic gravity, where the gravitational field is generated by entropic forces rather than vice-versa.
Mainstream theories are so good that they dwarfed competition. Lack of competition caused lack of progress. It was easy to collect hundreds of downvotes on HN just by talking about alternative theories or about physical representation of abstract theories. Now such comments will collect mere dozens of downvotes. It also should be obvious that a new successful theory will account for more things, so it will be much more complex than previous theories.
Yeah, um... if you're going to claim something like that, you need more than the claim. What evidence do you see for your position? What evidence can you give us so that we can evaluate your position?