I stopped working on black hole information loss (backreaction.blogspot.com)
461 points by nsoonhui on April 23, 2022 | 356 comments



Sabine’s niche seems to be a confident cynicism or learned skepticism and is a much appreciated addition to the space.

I like Sabine’s personality and the fact she stands in defiance of the status quo of scientific outreach, namely my two least favorite tropes: “science is fun” and “let me explain something using terrible metaphors because I fail to understand the math myself”.

Sabine, if you’re reading this… please create more technical content.

Maybe a companion video to your more general takes.

One that shows the numerical side of things. Your experience and personality already lend itself to this effort.

You say “the math is insufficient”? Then show us the math! Show us how it’s insufficient using numerical examples.

Scientific outreach has a real “draw the rest of the owl” problem and I think Sabine is perfectly poised to fill the gap.

There’s a George Carlin quote I’ll paraphrase ~“never talk down to your audience, they’ll catch up eventually.”


Completely agreed. I was skeptical about her at first. My first exposure to her was the panel interview hosted by PBS Space Time "Theories of Everything" https://youtu.be/N_aN8NnoeO0

At first I thought she was biased because she was so skeptical about almost every point during the discussion, but her logic and arguments were all so solid and well thought out, they were impossible to ignore.

I checked out her channel, and she was the first YouTuber that I found who actually explained the syntax of Quantum Mechanics equations and what Kets are. https://youtu.be/ctXDXABJRtg
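For anyone who hasn't seen the notation before, the core of it is small. Roughly (this is standard Dirac notation, not anything specific to her video):

    % a ket |psi> is a state vector in a complex Hilbert space H;
    % a bra <phi| is its dual vector; their pairing is a complex amplitude
    |\psi\rangle \in \mathcal{H}, \qquad \langle\phi|\psi\rangle \in \mathbb{C}, \qquad P = |\langle\phi|\psi\rangle|^2

where P is the probability of measuring the state |psi> to be in the state |phi>.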

More detailed content would be greatly welcomed from her.


She seems to be after some hidden truth that "scientists" are not telling us. I'd be more appreciative of her insights if they didn't imply we've been lied to. She should consider that: 1. Being wrong is ok; it doesn't mean someone's lying. 2. Maybe she is wrong. 3. Interpretations of quantum mechanics are fun and there's nothing to be angry at. So I'm a bit skeptical about her skepticism; it doesn't sound constructive to me.


The tone is off-putting for me as well. It comes across as arrogant, mixed with what I'll call "YouTube language": the TRUTH about X, what they don't want YOU to KNOW, along with excessively expressive thumbnails.

This "YouTube language" is something a lot of content creators suffer from, unfortunately.


I hate those titles, too, but let's be honest: they're a necessary charade. If you want to earn any money from YouTube, or even just get views, you need to please the algorithm. Sure, you can work without them, but it's like driving with the handbrake on.

The content of the videos is what matters. The advertising is annoying, but necessary.


Derek of Veritasium made a video laying out why "clickbait-y" titles like that are necessary, and I feel like it makes a pretty good case.

https://www.youtube.com/watch?v=S2xHZPH5Sng

The tl;dw is that the most effective titles come right up against the edge of what an average target audience member would consider unacceptable, on the axes of withheld information (to create a curiosity gap) and misleading sensationalism (to maximise the perceived importance of the topic). And you have to play the game of having effective titles, because otherwise no one clicks on your video and it stops being recommended to people who would find it valuable.


I felt that this change reduced the quality of Veritasium for me. I know it's just the titles, but it feels more bait-and-switch.

I understand why he did it, but wish he didn't. If I were a leader at YouTube I'd be pretty concerned about this: long term, it drives your platform in the direction of clickbait. Yeah, there are benefits to that, but it could also turn into a competitive weakness for YouTube. I think they should improve the recommender to make this unnecessary.


I saw that video. I don't need an explanation of why it's being done, it's pretty obvious and it works.

What I'm saying is that we can do better than 50 year olds making child-like expressions with clickbaity titles.


When he did this is basically when I decided to no longer watch his videos, and I don't regret this decision.


I think we can aspire to better than that. I do at least.


People click those titles and don't click other titles. No one cares that it's not sufficiently uppity to satisfy the intellectual elite.


Evidently, at least two people care.


Really? Titles that are factually reflective of content are “uppity”, and not liking obviously intentional click-bait titles makes you “intellectual elite”? Wow. Welcome to the idiocracy.


Don't shoot the messenger


Alternatively, you could accept what is at root a rather small thing, and adjust your own reaction. That’s what I did when those kinds of titles started to become common.


I deal with it, of course. I'm a heavy Youtube user. The net result is positive.

But it's extremely distasteful and I'm not willing to lower my standards because of this stupidity.


It’s not even about your or my standards. The overall tone of media sets the standard. There’s already a rising tide of anti-intellectualism that seeks to actively disparage intellectual thought as “uppity”, as described in a comment above. This is not healthy for society. If you wonder how it is that, throughout history, waves of advancement in knowledge seem to be followed by periods where things go backwards, I find it easy to imagine that the popular anti-intellectual sentiment of those times looked very similar: distrust of “elites” and disparagement of the philosophies of those “elites”, throwing out the baby with the bath water.

EDITED to add that what attaches itself to such sentiment (or stokes it), are cynical political actors happy to gain power irrespective of the cost to society.


>you need to please the algorithm

Don't blame the algorithm. Blame people themselves. It's not YouTube's fault that people find those kinds of titles more interesting.


Google put huge amounts of effort into punishing the search rankings of sites that attempted to manipulate their ranking in shady ways. YouTube could put in some amount of effort towards the same end.

Of course, that would probably reduce engagement...


"Making people want to click the video" is not shady in the same sense as SEO. A comparable trick might be jamming a bunch of keywords into the description.

Your real complaint is that people like stupid videos, but that's not YouTube's problem or fault.


The concern with SEO is that the goal is to drive traffic to a site without meaningfully improving the content. Doesn't the same argument apply here? If the video doesn't change, and it's just the title and thumbnail changing in an effort to get more clicks, how is that materially different from SEO?


Because it's the viewer deciding to choose the video, not some algorithm deciding what they get to choose from.


> Google put huge amounts of effort into punishing the search rankings of sites that attempted to manipulate their ranking in shady ways.

From the search results I get, it appears that they abandoned that effort long ago.


I read the article. I've read other articles of hers as well. And I don't read her that way at all.

What she primarily is, is someone who insists on experimental data as an anchor for physical theories. She's not totally against theory that isn't backed by experiment (despite her last paragraph or two), but given multiple competing theories that cannot be experimentally verified, she is unwilling to accept any one of them as "the truth". I don't think she's wrong in that.


Sorry, I didn’t read the article. I plead guilty. I have this feeling because of how she comes across in her YouTube content, which might not be representative of her work.


Well, the article was the transcript of one of her YouTube posts. But the article doesn't tell me tone of voice, facial expression, and the like. So... maybe I have to say "Sorry, I didn't watch her on YouTube."


She makes very specific points on what can and can’t be reasonably treated as a scientific problem. Her main concern appears to be that much physics research and funding has deviated towards mathematical philosophy. Work which may be correct, but is untestable and ultimately unscientific.


Re point 2: she made a video on neural networks that I think many people here will be able to appreciate.

https://youtu.be/fxiHM11w-rk (about 2 minutes in)


Here is my first exposure to her. It's an in-depth discussion of her own book. She does strike me as very cynical.

https://www.econtalk.org/sabine-hossenfelder-on-physics-real...


Doesn't Feynman Lectures book III explain kets fairly well?


Feynman was a YouTuber?


Have you read 'Surely You're Joking Mr Feynman'? Dude was a youtuber physicist in a world where youtube didn't exist.


Somebody put his lectures up on YouTube a long time ago.

If "person who's content is available on YouTube" qualifies one as a YouTuber, then yes


I think "YouTuber" is generally taken to mean somebody who is building a particular YouTube personality/brand.

But even if we want to be very general and just parse it using default English, the -er suffix would mean someone who is doing something. I believe it would be more accurate to say Feynman was YouTubed.


Feynman most probably would be a youtuber if he were alive.

Even posthumously he’s been one of the most popular educators on youtube.


How did you go from "Feynman Lectures Book III" to "Youtube"? billfruit was asking about a book.


Let me answer for OP. Because all these lectures are actual recorded lectures that have been uploaded to YouTube.


> Scientific outreach has a real “draw the rest of the owl” problem

It seems like this is just the “you need to study the problem and its dependencies for many years to understand the physics and mathematical details” problem. Which does not actually seem like much of a problem to be solved, just a reality.

Your example in the other comment depends on linear algebra, so ~first year undergraduate of most technical fields. IMO that’s fundamentally more accessible than the black hole information paradox, with its dependencies on general relativity and quantum field theory.

I’m sure more technical videos and posts could be made about this, but how small would the target audience be?


> It seems like this is just the “you need to study the problem and its dependencies for many years to understand the physics and mathematical details” problem. Which does not actually seem like much of a problem to be solved, just a reality.

Which is exactly the "draw the rest of the owl" problem. Something that takes substantial time, skill and knowledge to achieve is demonstrated as "three easy steps."


I wonder if we’re in agreement?

The key word is ‘numerical’.

Maybe you’ve yet to see or do numerical analysis of both general relativity and quantum field theory, but they are in fact both linear algebra.

Which as you point out is “fundamentally more accessible”.


As a physicist who used to work on unifying QFT and gravity, no, QFT is not accessible at all, to anyone. General relativity is kind of accessible to a non-physics student if they otherwise have a strong math background, which is like 0.001% of the population, optimistically.


All these topics are so advanced that even for someone trained in one it would take years to become proficient in the other.

Source: I'm a Physics PhD


Fun fact: When I was in a highly respected grad school in the 1980s (multiple Nobel Prize winners in the department), the QFT course was presented in such an obscure way, I stopped attending lectures, and I doubt anyone passed. The next year under "QFT" they taught Ashcroft -- i.e. a Solid State Physics course. That way requirements were met.


I took QFT in college and then grad school in two of the leading institutions in HEP. I can’t say the instruction was very enlightening in either place. Whichever path you take (not sure what you had in the 80s, I’m thinking of Peskin & Schroeder / Schwartz / Weinberg, maybe even supposedly easier textbooks like Nutshell) it’s just a very dense subject.

Now, GR or at least the core of it is a lot easier as long as you’re mathematically prepared for differential geometry.

Btw, I took particle physics from the Peskin in Peskin & Schroeder when I was an undergrad. Everyone in the class (like four of us by the third week?) was super duper lost by midterm.


Yeah, it's like this for anything. String theory is easy if you're mathematically prepared by being a Fields Medalist.


Having attended many IAS talks, I don’t think string theory is easy for the likes of Ed Witten.


0.001% is already about 800_000 people :-)


80,000. And imagine, QFT is even harder than percentages!
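For the record, the arithmetic is just

    8\,000\,000\,000 \times 0.001\% = 8\times10^{9} \times 10^{-5} = 80\,000

since a percent is 10^-2, so 0.001% is 10^-5.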


I wonder how many people on earth can compute 8_000_000_000 * 0.001% 'cos obviously I'm not in that part :-)


General relativity is not linear algebra. Just glancing at the underlying equations, you can see they are nonlinear. [https://en.wikipedia.org/wiki/Einstein_tensor]

The same is true for quantum field theory. I'm curious why you're so confident that they are "in fact both linear algebra"
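To make the nonlinearity concrete, the field equations are

    G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu},
    \qquad R_{\mu\nu} \sim \partial\Gamma + \Gamma\Gamma, \quad \Gamma \sim g^{-1}\, \partial g

so, schematically, the left-hand side involves products of derivatives of the unknown metric g. It's quadratic and worse in the unknown, nothing like a linear system Ax = b.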


they involve matrices


They also involve addition, but that doesn't make them arithmetic. Knowing linear algebra gets you perhaps 10% of the way to having the math background for a first course in GR.


General relativity is linear algebra? Has anyone called up Tullio Levi-Civita and told him that his services will no longer be necessary?


I’ve seen this sort of reactionary response in mathematics and music as well.

When talking to a music educator complaining about struggling students just starting out, I’ll say “I found it extremely helpful to paint the C major scale on my guitar when learning music theory.”

And they respond “Well what about chromatic 12 pitch atonal music?! What would Anton Webern say?!”


As someone who has studied GR (at the grad school level) and music, no, that's a terrible analogy.


In GR terms, Levi-Civita is more like "This is a major scale and you need to know it" than serialism.

Some subjects are just plain hard and only accessible to smart persistent people.

GR and QFT definitely qualify - but they seem to be warm up exercises compared to whatever Quantum Gravity will eventually become.


Eh, no. The problem with your analogy is that the “not linear algebra” bits aren’t esoteric, rarely practiced or used edge cases: they’re the core of the mathematical model.


I guess you are mistaking the “locally linear” nature of tensors for the whole of differential geometry. GR is certainly not “linear algebra”.


> Maybe you’ve yet to see or do numerical analysis of both general relativity and quantum field theory

I call you out. I don't think you've done either one, after reading your comments.


No, just because general relativity and quantum field theory use linear algebra does not mean they are linear algebra.


Numerical relativity is not just or even mostly linear algebra. Even in the weak-field limit, where linearized GR is qualitatively correct, you still have to actually solve the resulting PDE.
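Concretely: even the linearized theory (write g = η + h and keep first order in h) gives, in Lorenz gauge, a wave equation,

    \Box\, \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^4} T_{\mu\nu}

which is a PDE in space and time. Linear algebra only enters through whatever discretization scheme you use to solve it numerically.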


They're both what now?



A tensor is a differential structure with pointwise linear properties. But the important thing is not the pointwise behavior but the differential nature of the “family of tensors in non-Euclidean space”.


This is too strong. There are important fields whose points are not tensors (e.g. spinor fields) and treating them like tensor fields risks introducing all sorts of subtle confusions.


Of course you are right but I was just trying to explain to OP that certainly General Relativity is not just “linear algebra”.


That's very weak


I’ll add to this to give an example of someone who I feel is doing a great job of satisfying this very request in the field of mathematics: Timothy Gowers.

His yt channel is an excellent resource for theoretical understanding through numerical methods.

Here’s my favorite series where he confirms/elucidates a “strange” theoretical fact with an explorative numerical style: https://m.youtube.com/watch?v=byjhpzEoXFs

Simply superb content.


> Sabine’s niche seems to be a confident cynicism or learned skepticism and is a much appreciated addition to the space.

While I also appreciate a healthy dose of skepticism, and in weaker moments I can also get very cynical, this is not a very constructive attitude to science or life in general. If we were all doing science like that, we'd not have gotten anywhere.

As mentioned elsewhere in this discussion, a "solution" will not only consist of a theory that "solves" the problem in the sense that Sabine described, but also propose experiments that can actually be executed within a time frame (and budget) in order to validate the theory. Just because her colleagues (and Sabine herself) have failed at that so far should not be a reason to give up. With that attitude, there wouldn't be much science left today.


> While I also appreciate a healthy dose of skepticism, and in weaker moments I can also get very cynical, this is not a very constructive attitude to science or life in general. If we were all doing science like that, we'd not have gotten anywhere.

What I read in this article is pointing out that these black hole information loss papers are not doing science.

There are no falsifiable predictions, because we can't actually run a meaningful experiment on black holes, at least in regard to information loss. As someone else in the thread said, thinking about how things might work without a way to tell if your idea is right is philosophy. Last I heard, physics was supposed to be science and not philosophy.


Lots of results in theoretical physics couldn't be verified in experiments until decades later. It doesn't mean that it's not science if you can't make an experiment at the moment.

"Not falsifiable" commonly refers to no experiment being possible in principle. That's quite different from what you mean.


I'm not sure whether her cynicism is helpful or not. But, style aside, it seems like it's mainly a function of whether her subfield as a whole skews optimistic?

It seems like her impression is that (parts of) the high energy theory community are too optimistic to an extent where they unintentionally or intentionally deceive the public (who are ultimately funding science through taxes).

If that's correct, perhaps it can be beneficial for science as a whole to provide a counterbalance?


>> It seems like her impression is that (parts of) the high energy theory community are too optimistic to an extent where they unintentionally or intentionally deceive the public (who are ultimately funding science through taxes).

This is a fair take. I really appreciate it. Sometimes I'm too cynical about the motivations of the folks being cynical :)


Yeah, cynicism is actually terrible for science, and as a result many scientists aren't overly cynical. I think it's something that happens as folks mature: they tend to look for the positives in papers, rather than the negatives.

This has certainly been my experience in cryptography: as a student you often start off proposing schemes which get broken by your advisor/collaborators (not out of malice, but just because broken schemes are broken), and so you learn to react to novel ideas with "how is this broken?". However, as you mature, you realize that all new correct ideas arise from the ashes of many broken attempts, and so your reaction slowly changes to "how can I fix this?", leading to an overall more positive outlook on both your work as well as others' work.


You are partially correct: absolute cynicism would mean that you would see the possibility that any hypothesis can be wrong before testing it, so you'd just not do any experiment at all. So you do need to be optimistic.

But that's not what most scientists do today. They are cynics masquerading (even to themselves) as optimists. They have preprogrammed themselves to never even think of a question that has a good chance of failing; modern academia has collectively programmed them all to only ask questions that never have a real chance of being false to begin with. So just softball niche questions, or, in the case of the video's topic, reformulating the question in a way so that the answer doesn't fundamentally solve the real problem. Both because the real problem might be unsolvable, and also because if you solve the problem then you have fired yourself from your job.

Now you might think I and Sabine and others are just shitting on scientists doing the work, but many of us are only doing so after wasting decades with this establishment and giving up. Perhaps you can see that for yourself earlier and save yourself a lifetime.


Dang man, what?????? Tell us more of this tale!


Duck Tales, woah oh!


> a reason to give up

I don't think Sabine is telling us to "give up". Let me give an example. Suppose there is a problem that is solvable by building a super collider whose diameter is the diameter of the moon. Should we be rushing to fund it now? Or should we wait for lunar construction techniques to bring the costs down first?

Even a lot of the physics myths around doing the hard thing to seed other tech are flat out wrong. There's an oft-repeated claim that SSC tech resulted in MRI superconducting magnets; but that tech actually descended from NMR research done by the petrochem (and, if you can believe it, dairy) industries, predating the SSC by 1.5 to 2 decades.


I'll just add that I appreciate her cynicism because there is a staggering amount of bullshit in the high energy theory community. Embracing creativity is absolutely paramount, but people need to grow up and accept their own theory failures when they occur.


This seems to be all of academia. Maybe high energy physics is the worst offender, but from what I saw in grad school, you need to be half charlatan to make it as a tenured professor; otherwise your grants don't get funded and your papers don't get published.


> Sabine’s niche seems to be a confident cynicism or learned skepticism and is a much appreciated addition to the space.

I think Sabine has settled into a niche with a few other physicists of "If you don't even have a hope of testing your theory, what's the point? Let's direct those resources into things that can be tested sometime in the next century."

Theoretical physics seems to have a few arenas where it has just completely left the realm of reality. It's not even like "Well, maybe we can test these in a few years when technology gets better" but more like "We would need 15 orders of magnitude more energy than exists in the universe to test this."


It isn’t talking down to your audience to relate that mathematical logic to them in understandable ways. A lot of people are curious about physics or astronomy but don’t have the time or energy to learn the high-level math involved.


If you think numerical examples would help, you probably need to read a quantum field theory book and understand that this is really complicated stuff.

Physicists for the most part aren't talking down spiritually; they're just talking to the equivalent of a child.


There's a wonderful textbook, I think by Griffiths, that starts from the premise that if you learn the math of quantum mechanics first (i.e. how to actually set up and solve relevant systems of equations) then it's much easier to learn the meaning and implications after. It gives you a lot of problems up front to give you a rock-solid grounding in numerical examples, and only in part 2 does he delve into what it all means.



His book on elementary particles perhaps?

I think there's a General Relativity book that does the same thing, only with the Schwarzschild solution, but I'm not sure who wrote it.


Yes indeed, very early on it shows the equivalence of summing over position/momentum states. Depending on the desired answer one is much easier to calculate.
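If I remember the notation right, the equivalence is just two resolutions of the identity:

    \langle\phi|\psi\rangle
      = \int dx\, \langle\phi|x\rangle \langle x|\psi\rangle
      = \int dp\, \langle\phi|p\rangle \langle p|\psi\rangle

Same number either way; you pick whichever basis makes the integral tractable.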


I made sure to say “outreach” instead of “education”.

I think it’s a paradox to say “you need to be fully educated on a subject to engage with an explanation intended for lay people”.


> “let me explain something using terrible metaphors because I fail to understand the math myself”.

Some of these channels have millions of viewers, and the one I think you are referring to has 11M. Remember when the Riemann zeta function summation of -1/12 was all the rage, or the Banach-Tarski paradox? Google either of those and there are a dozen YouTube videos for each one. These are post-doctoral topics that become name-dropping buzzwords. I would never claim to understand these, yet people who watched the video do.
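For what it's worth, the honest one-line version of the -1/12 business is

    \zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1), \qquad \zeta(-1) = -\tfrac{1}{12} \text{ by analytic continuation}

The continuation assigns -1/12 to the zeta function at s = -1; it does not make the divergent sum 1 + 2 + 3 + ... literally equal -1/12, which is exactly the nuance the buzzword versions drop.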

Which seems like a lot of what the internet has given us: information, not knowledge. To quote a speaker I saw at Siggraph in ~1996, "Information is not power, knowledge is power." This hit home. We've seen it from people reading WebMD in the 2000s and then trying to teach their doctors, to the crazy scientific ignorance about mask mandates today. All because of internet edutainment or, in the worst case, misinformation.

Anyway, yes, her article got cynical fast, but I think that's because she bounded it with realism: plotted on the arc of technology evolution, it would take thousands of generations (even at an exponential pace) to create tech that can take actual measurements. That seems like good science to me.


I agree that most videos on YouTube simplify the problem too much or are just wrong. My recommendations anyway:

* Mathologer: "Ramanujan: Making sense of 1+2+3+... = -1/12 and Co." https://www.youtube.com/watch?v=jcKRGpMiVTw

* Vsauce: "The Banach–Tarski Paradox" https://www.youtube.com/watch?v=s86-Z-CbaHA

Each one is like half an hour long, but you may need to pause them and rewatch them a few times to understand the details.


Yes, I've watched both of those. Mathologer is in the biz of math, Vsauce is in the biz of ka-ching.


I don't think the Vsauce channel is egregiously monetized. He sometimes does not upload for months or even years, and does not make the most clickbaity videos out there. He could've easily been much bigger than Veritasium, in terms of monthly views, if he wanted to, by just making the same type of videos he used to (the broad mishmash videos that I really, really liked, honestly).

The fact that he didn't, to me, indicates he is obviously not in the business of making money.


Yeah, that's a good point. But I always found Vsauce and Veritasium to be clickbaity, with the former targeted at a more educated audience. But that's my opinion.


I am a fan of her work and videos, but saying we will never make experimental progress on the black hole paradox for TEN THOUSAND years seems myopic at best. Does she not realize how long that is? Unless we nuke ourselves out of it, I see us as an interstellar civilization easily within 100-200 years. That's being conservative. I'd be surprised if we don't have probes looking around the nearest black holes, or even creating them experimentally, in double- or triple-digit years from today.


The closest good candidates for stellar black holes are thousands of light-years away from Earth. There is one possible candidate at about 1.1 kly, but there are good reasons to think that it (QV Telescopii) will turn out to be something else, as seems to be the case for LB-1. V616 Monocerotis at ~3.3 kly <https://en.wikipedia.org/wiki/A0620-00> is the closest thing that is virtually certain to be a black hole.

Given measurements of the Local Standard of Rest and star surveys of the LSR neighbourhood, it is unlikely that there are any stellar-mass black holes at all within about 300 light-years of Earth (and X-ray astronomy and microlensing/MACHO-hunting surveys each gives even more pessimistic nearby black hole numbers), so the prospects of getting a signal from an Earth probe dispatched to the vicinity of a black hole within a millennium are, pardon the pun, faint. There could be only some thousands up to a few (~10) million in the Milky Way, a galaxy of hundreds of billions of stars at this point in its evolution. There may be as many as ten billion stellar black holes in and around Milky-Andromeda five or so billion years in our future.

I note you also say "creating ones experimentally", but that is in the realm of science fiction at this stage, barely hard s.f. even for microscopic masses, and it gets decidedly soft as one climbs up the mass scale from nanograms.


Fair, is it still theoretically possible that primordial black holes of mediocre sizes are floating around?

Also of course experimental black hole creation is in the realm of science fiction but so was going to the moon 150 years back. Question is if there’s a fundamental theoretical block , similar to FTL , in achieving it?


Primordial black holes may be floating around, but they are very unlikely to be any closer to Earth than stellar black holes, given the kinematic evidence. (There could be faint hope for an isolated one of a fairly small handful within the Milky Way. FWIW, I don't think there are small PBHs. There might be gargantuan ones in the universe; there definitely isn't one of those hiding in our galaxy, though maybe in our galaxy cluster near the centre of mass, wherever that is (millions of light years away, at least). There are some huge central black holes in some local galaxies that may have been seeded as primordial black holes; the mechanism of growth of such ginormous black holes is not fully understood yet.) So they aren't really going to help you reach your thousand-year deadline, absent FTL travel.

Some researchers have proposed semi-seriously that there could be a small black hole lurking at the outer margins of our solar system. That obviously could be reached within a thousand years in principle (depends on how far out it is, although it must be considerably beyond Neptune's orbit), but we are perhaps decades out from a detection of it, if it even exists. The idea is, however, one of many reasons to build telescopes that are mainly to be aimed at objects in our star's immediate neighbourhood, to do very sensitive mapping of the metric the solar system sources (or, in Newtonian terms, our sun's gravitational field and the disturbances planets, asteroids, and other objects impose on it), and to do "nanolensing" observations on objects at a variety of distances. The Vera C. Rubin Observatory's Legacy Survey of Space and Time is a step in that direction, and will eliminate many possible hiding spaces for a nearby black hole.

There is an upper limit on proper acceleration proportional to mc^3/h-bar for massive subatomic particles like electrons, from Caianiello (1984) <https://doi.org/10.1007/BF02748378> (free PDF <https://link.springer.com/content/pdf/10.1007/BF02748378.pdf>) arising from uncertainty relations. We have no hope of producing that kind of acceleration as even with the best possible superconductors an Earth-bound accelerator can manage much less than a millionth of that <https://arxiv.org/abs/quant-ph/0407115>, and we need to be much closer in order to make a black hole out of small charged particles. Large particles are even harder to accelerate extremely. We also have a power-density problem: what fuels such an accelerator? Alternatively, accelerating less extremely means much longer linear accelerators, or nearly equivalently, less curvature in a ring accelerator. I think we're looking at a ring about the diameter of the orbit of Jupiter, or a linear accelerator crossing a large chunk of the inner solar system, but that's just a guess (someone has probably worked out some comparable size for ultrarelativistic heavy ions (lead or gold) though).
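A back-of-envelope check, assuming the bound takes the form A_max ≈ 2mc³/ħ (the exact prefactor depends on the derivation; see the Caianiello paper linked above):

    # rough magnitude of a Caianiello-type maximal proper acceleration
    # for an electron; assumes A_max ~ 2*m*c^3/hbar (prefactor varies)
    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s
    m_e  = 9.1093837015e-31  # kg
    A_max = 2 * m_e * c**3 / hbar
    print(f"A_max ~ {A_max:.1e} m/s^2")   # ~4.7e29 m/s^2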

So the tl;dr is that it's not impossible, and it's easier than trying to throw large stars at one another, but neither is really plausible within your 1000 years. And neither is easier than sending a probe to V616 Mon. and waiting several thousand years. And we might do better by making really good observatories in and around Earth. Technically we could do all three, I guess, but we're more likely to live long enough to see improvements on the results from the <https://eventhorizontelescope.org>.

And of course, what I said about your three-digit timeframe goes double (if you pardon the expression) for your two-digit one.


Thanks for the detailed reply, obviously I’m not from the field and most my knowledge is from YouTube so your refutations help. Will read up further for sure.

FWIW if I live long enough for someone to confirm that Planet X is a black hole I’ll strip naked and dance in Times Square because golly is that such an out of the blue hypothesis.


Is the OMG cosmic ray particle an exception to your acceleration limit? (Sorry tried to calculate the number with wolfram alpha and got lost in the dimensions). If so perhaps there’s a possibility of using cosmic rays to get around us being unable to make such fast particles?


The relevant quantity in ultra high energy cosmic rays (like OMG!) is the GZK limit (initials of a paper's authors), which is an upper limit on the energy-momentum of an intergalactic proton, set by interactions with charged matter and cosmic microwaves along the way, which transfer a UHEC proton's momentum to particles resulting from the interactions. OMG! is one of several curious possible violations of the GZK speed limit. (GZK does not deal with re-acceleration close to our galaxy via coupling to magnetic fields, for instance, nor says anything about a galactic origin for something with cosmic-ray energies; it's all about how very fast protons can be slowed down over cosmological distances.)

The highest energies of UHEC protons are probably not caused by "instant" accelerations as in a supernova (which certainly produces high energy protons that fly across the cosmos). Instead, they're more likely drawn up magnetically around the supermassive black holes powering active galactic nuclei (quasars, blazars and so on) and into jets, where they spend possibly weeks or more accelerating through a natural counterpart to a particle accelerator, which benefits from things like inverse Compton scattering (a hard X-ray from the black hole hits our proton, boosting the proton's momentum and lengthening the X-ray's wavelength (i.e., reducing its momentum)).

But because we don't know OMG!'s origin, we can't say if OMG! tests any acceleration limit originating in uncertainty relations. Also, the Caianiello limit is unhealthy for protons (which OMG! likely was) staying protons: that they are composite structures makes them liable to be blown apart by extreme acceleration. Assuming it's a proton and had a blazar origin, OMG! is (really rough back-of-envelope work) likely to have fallen between 7 and 40 orders of magnitude short of the maximum (proper) acceleration. The intuition here is that OMG! is well short of Planck energy (by some 7 orders of magnitude) and super-high acceleration should take a proton from thermal energies to Planck energies. Of course a proton much closer to Planck energy could lose proportionally more along the way than a less-energetic proton, so who knows? There could be a doctoral dissertation waiting to be written on this!
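Rough numbers behind that, taking OMG! as a ~3.2x10^20 eV proton (a sketch of the envelope math, nothing more):

    # how far below the Planck energy was the OMG! particle?
    import math
    hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11  # SI units
    eV = 1.602176634e-19                   # joules per electronvolt
    E_omg = 3.2e20 * eV                    # ~51 J, the measured OMG! energy
    E_planck = math.sqrt(hbar * c**5 / G)  # ~1.96e9 J, i.e. ~1.22e28 eV
    print(f"{math.log10(E_planck / E_omg):.1f} orders of magnitude")  # ~7.6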

Also, sorta disappointingly, OMG! could have had an electric charge greater than +1 (i.e., some ion with more than one proton) and in that case could have originated as close as our own sun <https://en.wikipedia.org/wiki/Solar_energetic_particles> <https://en.wikipedia.org/wiki/HZE_ions>.


You are amazing and I have learned so much here, thanks!


This line of thinking led me to a paper I had not read before (even though it has a tremendous number of citations and is reference (2) in his famous Black Hole Explosions? (evaporation) letter of 1974[1]), by Hawking (1971) <https://academic.oup.com/mnras/article/152/1/75/2604549> (PDF freely available there).

In this paper he considers the possibility that the universe is almost dominated by tiny (~ micrograms) black holes formed in the very early universe, persisting to the 1970s and leaving tracks in sufficiently large and sufficiently numerous cloud chamber detectors given sufficient time. (This was also decades before optical detectors like at SNOLAB, but incongruously after the Nobel prize for bubble chambers, although it was on the cusp of the activation of <https://en.wikipedia.org/wiki/Gargamelle> and the Big European Bubble Chamber -- he would have been aware of how close they were to activation, and results from smaller predecessors). Fascinatingly, several years after publication, conceptually similar papers were put forward for hunts for dark matter, even though it is impossible to extract from this Hawking 1971 paper that he was thinking of galactic rotation curves.

The paper also pre-dates evidence for the acceleration of the expansion of the universe, and contains the curious sentence: "An upper bound on the number of these objects can be set from the measurements by Sandage (7) of the deceleration of the expansion of the Universe." However, the excellent Allan Sandage paper of 1961 <https://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_que...> does not really let one take sides ("It suffices to say that no uncontested evidence is presently available [to choose between steady-state and expanding universe]") like Hawking 1971 seems to. Nevertheless, Hawking raises an interesting and slightly spooky premonition of dark matter at super-galactic scale, keeping clusters of galaxies looking like clusters over long periods of time, rather than separating into individual isolated galaxies: "This extra density could stabilize clusters of galaxies which, otherwise, appear mostly not to be gravitationally bound." In context Hawking was clearly thinking about a universe that grows into some very long term non-expanding/non-contracting configuration which includes clumps of galaxies.

Hawking 1971 tl;dr is "good luck actually observing these small weakly-interacting things, if they exist". Which is appropriate for 21st century particle dark matter experimentalists too. :^)

Penultimately, some patience with google scholar has found that a small handful of the many articles citing Hawking 1971 explore the idea of primordial black holes as dark matter. (but see below).

And finally, sometimes asking questions like yours (how does one build a very small black hole?) and taking them seriously leads to some very early, vague, ideas about how the universe might work in bulk, which is probably very very similar to what provoked Hawking's letter about 52 years ago.

- --

[1] https://sci-hub.se/https://doi.org/10.1038/248030a0

The relevant part of his evaporation letter: "There might, however, be much smaller black holes which were formed by fluctuations in the early Universe². Any such black hole of mass less than 10^15 g would have evaporated by now." In that context, I think a very low motivation for someone not terribly interested in the history of primordial black hole research to have previously hunted down a 1971 note whose idea was shortly thrown under the bus by its author is fairly understandable. :^)

Hawking 1971 is however an early note on primordial black holes (the only obvious earlier one in the literature is from Zel'dovich and Novikov in 1966), and that seems to be behind many of the citations I briefly sampled. Practically all also cite the more extensive follow-up Carr & Hawking 1974 "Black holes in the early Universe" <https://ui.adsabs.harvard.edu/abs/1974MNRAS.168..399C/abstra...> which is very close in submission date to the publication date of Hawking's "Black hole explosions?" yet does not appear to consider evaporation.


One refutation I've heard (in YouTube videos, lol) of the possibility of a swarm of "small" PBHs floating around is that if one of them passes through a neutron star, the star instantly becomes a black hole; thus PBH densities can't be high enough that they'd hit neutron stars often enough to take more of them offline than the numbers we observe allow.


Neutron stars are rare and hard to hide (they're very bright in X-rays, and stand out against dim and dusty backgrounds, unlike isolated black holes). There are maybe low tens of thousands of neutron stars in the Milky Way (we've counted about 3000 of them).

There are hundreds of billions of regular stars in our galaxy (and in similar galaxies), some much more massive than neutron stars, and all of them have a much larger cross-sectional area than neutron stars. (NSes are tiny -- compare with New York City -- <https://commons.wikimedia.org/wiki/File:Neutron_Star_Manhatt...>) So small black holes are much more likely to collide with ordinary stars, and even Long Island NY and other comparable features on Earth, other solar system bodies, and exoplanets (we've counted five thousand of those now, and a lot of them are considerably closer than the nearest neutron star). But we see no evidence of that.


Again, pardon my amateur source for the discussion: I learned this from a PBS Space Time video where it was contended that a fast-traveling PBH with asteroid-level mass wouldn't actually slow down and would just pass through most things, like planets. But not neutron stars. Also, I suppose, stellar cores? No idea.


Although it's great that it's provoked you into asking questions, it seems like there is a lot that the PBS video must not be placing into your mind, like how the motion of these asteroid-size masses with respect to some local standard of rest in their region of the galaxy (and how commonplace they are) is crucial to the idea.

If they're moving gently along with all the stars in their neighbourhood, collisions between these asteroid-mass black holes and anything appreciable are so unlikely (just as star-star collisions are very rare) that we wouldn't expect to ever see one happen.

Wandering pretty violently in comparison to nearby bright objects, an asteroid-sized black hole is likely to zip by a star and its satellites on a (strongly) hyperbolic orbit, like <https://en.wikipedia.org/wiki/%CA%BBOumuamua>, just dimmer. It's even less likely to hit a neutron star, because of the NS's compactness, and because the geodesics it generates produce greater hyperbolicity near its surface (which is quite near the centre of mass) than near the surface of a regular star. And hitting a sun-like star is already hard for an asteroid: <http://curious.astro.cornell.edu/our-solar-system/comets-met...> (likewise, we have never seen a small black hole flare up near our sun).

It is more likely that -- very occasionally -- an asteroid mass object (by strong equivalence principle, it does not matter if it is rocky, icy, or a black hole) is captured into a weakly hyperbolic orbit, or even an elliptical one, by a ~stellar mass. There are reasons to think an NS might be more efficient at this than a fatter star. But then you get a (possibly wide) binary star-BH that soon becomes hard to distinguish from a binary that formed together and "grew up" as stars together, with one happening to get too fat and collapse. The missing nebula in the captured-BH case would raise questions.

I don't buy the idea of a small black hole that somehow manages to slice through a neutron star doing much to the neutron star. Typically many many solar masses of matter fall back onto neutron stars after they violently form, and almost certainly there are asteroid-sized chunks of heavy atomic nuclei landing on them often. If you had some cosmic pool shark aim a small BH so precisely that it not only enters the (very very very very very small) NS core, but stays there, coming to a stop just so, the NS would collapse into the BH [1], because no matter can rest just outside the event horizon. The inner hard "platform" matter would fall in first, and in turn the freshly unsupported upper layers which had been "standing" on that platform would fall in, and so on down the widening drain. But if you shoot an asteroid-mass through an NS, it's hard to think there'd be much consequence. Sufficiently slowly, there could be a nuclear explosion but probably not one that's catastrophic to the NS's integrity: I think it'd be hard to distinguish from the nuclear explosions from the rain of supernova-remnant material falling back down onto it, or random interstellar matter an isolated NS might encounter, or crustal quakes.

(I could be convinced by an academic paper equipped with an NS equation of state though (see [1] again), in principle, but very likely not a youtube video, much less a second-hand summary of one :-) ).

To a zeroth order approximation, the effect on a planet of a collision with an asteroid-mass black hole is not going to be too different from the effect of a pair of collisions with ordinary asteroids (the "pair" is from the ingress and egress sides of the "tunnel": actual asteroids will tend to hit a surface and stay there, or break up in a thick planetary atmosphere if there is one, etc., while small black holes will sail through). The asteroid-mass black hole will certainly create lots of electromagnetic interaction on contact; you can think of how black holes tend to have bright accretion structures, so you expect a big flashy kaboom as with a rock (but with more X-rays and gammas).

While we can resolve exoplanets, we can't quite see them being hit by asteroids; assuming the Jupiter-type exoplanets are hit with some frequency, we could blame limits in our telescopes. (JWST will help very soon!) Rockier exoplanets are no more likely to be hit by asteroids and lit up brightly than Earth or Mars. Asteroids are just tiny masses compared to these bodies.

- --

[1] fate of a neutron star with an "endoparasitic" small (and possibly primordial) black hole, that "just so" is already inside the NS: https://arxiv.org/abs/2101.12220


Got it. The reference is great; the only final addendum I wanted to mention was whether a neutron star's density would stop a black hole in its tracks, since the paper only talks about the black hole already having been there. (At least to remove my amateur recollection from the reference link: the mention of this possibility happens in the video at 6:18, https://youtu.be/qy8MdewY_TY .) And here's them talking about a BH punching through Earth, which seems roughly in line with what you say as well! https://youtu.be/AK44wAvv2E4


Popular science has leaned heavily on terrible, unhelpful metaphors for decades, long before any YouTube channel existed.

Of course you can't quickly teach the general public the actual math of quantum mechanics or whatever, but I think it's far far better to explain things in simple terms without further complicating the subject with awkward analogies.


Check out the YouTube channel Physics Explained. I don't think you can get much simpler than his videos and still be numeric. They're great videos, not pop-sci at all, and his explanations are excellent. But don't expect something you can watch casually.


This is like having someone try to explain what a database is in layman's terms, and then having a member of the audience who has written one HTML webpage demand code. They simply do not have the basic building blocks needed to comprehend database code: variables, types, etc.

You cannot teach them that in any acceptable length of time. Just showing code and pointing at various parts ("this does x", "this other thing does y") is no more useful than showing abstract art: the subtle syntax and semantics you are so accustomed to interpreting that you do not even notice them (array subscripts, pointer dereferences, etc.), they cannot even realize are there.

This is why science educators don't show the math. They're not talking down because they want to, but for the same reason you can't explain object relational impedance mismatch to a 7 year old.


Her video on the Simulation Hypothesis was very poor https://lech.substack.com/p/sabine-hossenfelders-video-the-s....


The responder appears to consider Nick Bostrom an actual philosopher instead of some sort of AI-worshipping SF writer, which is never the impression I'd gotten.

> Classic Poisoning the Well fallacy. The audience will be predisposed to compare the hypothesis to religion and not take it seriously before any argument is even made.

Why shouldn't you take religion seriously? Pretty rude to religions.

> There are many theories in theoretical physics where it is very difficult or impossible to come up with experiments to prove or disprove them, e.g. Everett’s Many-Worlds Interpretation or the String Theory. One might call these sorts of theories metaphysical, fine, but it doesn’t make them into disproven pseudoscience or religions.

On the other hand, yes it does do that, which is one of her most common points.

> Again, it doesn’t mean much that nobody currently knows how to do it. No practical quantum computers exist yet, so obviously, nobody tried to run anything useful on them and more quantum algorithms will be found that run with lower time complexities than on classical computers.

This doesn't help him, because there aren't any algorithms that are uncomputable on classical computers but computable on quantum computers. She's also correct that quantum computers are special-purpose things and otherwise fairly useless.

Though I don't know why the universe wouldn't be a computable function if you allow the computer to have a true random number generator.


Bostrom is a full professor of philosophy at Oxford. Whatever you think of his arguments (or more likely, whatever mangled summary of them you ended up getting after the internet played a multi-year game of telephone with them first), he's as actual a philosopher as any living person can be.


I don’t have a degree in the field, but one of my parents is a professor of philosophy and the other is a Cambridge MA in it, so I think I’m qualified to disrespect him. (I can at least tell you that being a philosophy professor isn’t the same thing as being a philosopher.)

More importantly, I know that believing in superintelligence means you’ve been reading too much SF and have forgotten that things don’t exist just because you can imagine them existing. Spend any time worrying about that and next thing you know you’ll have moved to a group home in Berkeley and joined an effective altruism cult.


The idea that a superintelligence (by which I mean something more intelligent than humans in all ways that we might measure) not only won't occur (which seems likely to me), but is impossible, seems quite odd to me.

Why would human intelligence be the absolute limit of what is possible?

Seems implausible.

Or maybe you mean something else by superintelligence, like "more intelligent than humans in a self-improving way, and then FOOM, and then it is necessarily more powerful than everything else", in which case, yeah, that might be impossible (the upper bound of intelligence could perhaps be, while still higher than human intelligence, not actually all that much larger).


> Why would human intelligence be the absolute limit of what is possible?

I'm willing to accept that intelligence exists insofar as it's the difference between a human and a gorilla. But is that something you can have "more" of?

That concept implies humans are smarter just because their brains have more IQ, but maybe it's a single fixed function - so the gorilla has 0 intelligences and the human has 1 - then something with 2 isn't a superintelligence, it's more like a conjoined twin and would just find it wants to do two things at once and can't.

But maybe superintelligence means someone who can think more quickly than you. So maybe that's about as scary as Magnus Carlsen or Terry Tao, but outside chess we already have calculators, and people who do math by hand don't seem to fear calculator users as superintelligences despite their superior results. In the real physical world, thinking faster doesn't necessarily get you better results because you still have to be correct, which means guessing and checking, so maybe it's the super-patient person we should be afraid of.

Admittedly neither of those are proofs it's an impossible concept, just some plausible alternatives.

> Or maybe you mean something else by superintelligence, like, "more intelligent than humans in a self-improving way, and then FOOM and then it is necessarily more powerful than everything else"

Yes, I believe this scenario they're worried about is impossible because something will always appear that prevents it, and they've left all possible somethings out of their scenarios because they are only imagining them and not actually testing them.

Such things would be entropy, energy consumption, communications delays, self-interest, "self-improving" turning out impossible due to Goodhart's law, etc. Examples of it not happening are large corporations (which always eventually stop growing, need constantly increasing inputs, and are actually composed of lots of smaller intelligences that don't all have the same goals), and the Mind AIs in Culture books (which have to be held back to even want to think about reality, and if they get too smart immediately stop caring about reality and just think about math forever).


It seems like you are contrasting the "person who thinks faster" against the super-patient person, but, to me, it seems like they go hand in hand.

Thinking faster isn't synonymous with being more likely to take risky mental shortcuts.

Imagine a person who, for every hour that passes for us, experiences an entire year's worth of time (and they can make/review notes on their ideas as quickly, relative to their experience of time, as we can relative to ours, so they aren't limited to keeping a small amount of information in working memory).

Would we not expect such a person to be able to consistently outwit us?

It seems this is enough to demonstrate it is at least a coherent concept, even if something as extreme as what I just described might not be physically possible.

(Of course, talking about subjective experience in this way might be a little bit of a distraction, as perhaps there is no guarantee that a highly intelligent (in the sense of "able to formulate plans and such in the world", etc.) agent has any internal experience. But it is at least easy to imagine. And, hey, maybe it does happen to be the case that any highly intelligent agent would have internal experience, idk.)

But, even if something that extreme (able to do as much planning and reasoning over the course of an hour as one of us could do in a year) is physically impossible (and I wouldn't be too surprised if it is), I would still expect that there are things at least a little bit in that direction from us. Which, yes, might be basically along the lines of [ (you or I) : Terry Tao :: Terry Tao : (hypothetical agent) ] (or perhaps iterating that analogy a few times, idk; again, not sure how far from the limit humans are, it just seems implausible that humans are at the limit).

And, if in the same sense that Terry Tao is more intelligent than I am, there were an agent which was more intelligent than all (individual) humans, and wasn't guaranteed to have human-like goals/values, nor to value human interests highly, then yes, I do think that this could be rather concerning, depending on the size of the difference in intelligence, combined with what the strategic positions are.

> and people who do math by hand don't seem to fear calculator users as superintelligences despite their superior results.

This doesn't really seem like a serious argument to me?

Like, obviously people who don't use calculators, can, just, procure a calculator if they find lacking one has put them at a disadvantage? It seems a silly argument.

Regarding FOOM and such, yes, I'm not particularly worried about it, or in fact about AGI at all?

Also, the example of the Mind AIs in the Culture books is not evidence, because that is a work of fiction? The world is not obligated to follow tropes from stories we tell, and stories tend to differ from reality in ways that make them better stories.

Have you tried arguing both sides of the point you are arguing?


Disappointing since it’s so easy to find problems with it in other ways.


My random Sabine observation is that her accent is very similar to Albert Einstein's.

It took me a while to realize why I thought everything she said sounded so authoritative :)


I find PBS Space Time to be superior to her videos in pretty much every case.

https://youtube.com/c/pbsspacetime


To me it seems more like she failed at creation and found an easier path forward in arson.

In a world where FUD spreads faster than truth and not everyone is an expert in every topic, her work does as much, if not more, harm than good. Particularly in fields she is not an expert in.


If you change assumptions to solve a big problem, you don’t necessarily need to measure the big problem to check the assumptions. General Relativity itself was first confirmed with a relatively simple measurement of star displacement during a solar eclipse.

The “real solution” to the black hole information paradox will be one that solves the paradox AND provides a “small” way to test the change in assumptions that creates the big solution. This is definitely worth looking for IMO.


>provides a “small” way to test the change in assumptions that creates the big solution

Isn't this the problem? Unless we discover some primordial black holes, all black holes we currently know about are way too cold, and will be way too cold for billions of years, to test anything.

GR predicted black holes, but GR wasn't about black holes, and there were plenty of other things you could test.


I think you are missing the point of the parent comment. Changing the assumptions doesn't just change how we predict black holes will behave, it should have impacts on other things that we can measure.


Totally agree. And it may even be that the small thing comes first. That we happen to observe some weird shit in a different domain, and once we shuffle assumptions around to fit observations we get black holes for free.


Here's what I've never understood. From an outside perspective the black hole will evaporate before the thing falls in. Thus a thing can never fall in. From its perspective the hole will emit more and more intense radiation and finally evaporate just before it hits the horizon.

If true, I think you can go even further and say no black hole can completely form; the collapsing matter just gets exponentially closer to being fully black until the effect of the Hawking radiation outweighs the gravitation, but it all evaporates before going fully black. No?

(This latter part assumes there's some Hawking radiation or equivalent from pre-black holes as well).

Now, I assume the pre-Hawking radiation would be unitary, since the only reason Hawking radiation is not unitary is because BHs don't have information, but pre-black holes are not black holes. So doesn't that solve the info paradox? Without resorting to holography and whatnot? Where's the error?



Sure, but the paradoxes listed on those pages go away because the black hole never fully exists. The mass inside any volume is always just short of what would be required to make a black hole. And so of course we can expect to see crazy quantum effects at the boundaries, just like any other massively dense celestial object. However it's hidden behind insane time dilation to anyone outside.

Going a bit further, it could even explain dark energy (granted, I'm way out of my depth here): one outcome of General Relativity is that a large enough object of any density is a black hole (a black hole the size of Saturn's orbit need only be the density of water). Thus if we assume the universe is infinite and self-similar, mass would have to constantly expand in order to avoid black holing.
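(For a rough check of that claim, here's a quick Python sketch; the figure used for Saturn's orbital radius is an assumed round value:)

  # Mean density needed for a sphere of radius R to sit inside its own
  # Schwarzschild radius: set R = 2*G*M/c**2 with M = (4/3)*pi*R**3*rho,
  # which gives rho = 3*c**2 / (8*pi*G*R**2).
  import math

  G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
  c = 2.998e8    # speed of light, m/s
  R = 1.43e12    # roughly Saturn's orbital radius in m (assumed value)

  rho = 3 * c**2 / (8 * math.pi * G * R**2)
  print(f"{rho:.0f} kg/m^3")  # ~79 kg/m^3, i.e. less dense than water

So at that size the required mean density actually comes out below water's 1000 kg/m^3, which points the same way as the claim above.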


Could a black hole exist in the same universe as a magnetic monopole?

https://en.wikipedia.org/wiki/Magnetic_monopole

If energy is related to matter by E=mc², then the electromagnetic field of the monopole would also be diminished by a black hole, if one truly exists. But there might only be a single monopole.

"Standard models of inflation solve the “monopole problem” by arguing that the seed from which our entire visible Universe grew was a quantum fluctuation so small that it contained only one monopole."

https://www.newscientist.com/article/mg14419512-600-do-we-li...

Scientifically I'm way out of my depth, but perhaps there's a perfect universe on the other side of the black hole/white hole/wormhole. Or perhaps the perfect design is already here, and when we find bugs, we should be working to improve the world around us to make it better over time. I know I can't do that alone, but by some miracle, technology seems to be helping.


Some predictions suggest particle-sized black holes are a possible outcome of very energetic collisions.

If they didn't evaporate, they'd be relatively common because the universe is quite an energetic place.

This suggests either that particle-sized black holes don't exist, or that they evaporate very quickly.

This is a good thing. Because some predictions imply they could be created at the energies CERN generates. And if they didn't evaporate they'd end up in or around the earth, with very unfortunate consequences.

None of this solves the information paradox. But if microscopic black holes turned out to be a thing, it would make it slightly - not much, but a little - easier to consider that it might be possible to experiment on them directly.


> with very unfortunate consequences

Less than you might suppose. A micro black hole that the LHC could generate is a tiny tiny hole, and you can only cram a tiny amount of matter down it at once.

An extreme lower bound on the time such a black hole might take to grow large enough to destroy the Earth is on the order of 10,000 years. More likely it would take longer than the lifetime of the Earth.

https://s3.cern.ch/inspire-prod-files-8/836037ce600a97222290...


But can we safely dispose of the micro black hole (say, by launching it into space) and how long do we have before it becomes too big to deal with?


It would make a good science fiction story.

Somebody drops a black hole into the Earth which interacts very weakly with the Earth other than gravitationally so it is falling back and forth through the Earth every 90 minutes or so with the Earth rotating around it so it shows up at different spots.

As the hole takes on mass the height it reaches will diminish and rotationally it will catch up with the Earth, but it takes a while. People rush to move a

https://en.wikipedia.org/wiki/Very_large_floating_structure

into place to catch it.


Question/ELI5 for anyone still reading this thread: What if black holes form a wormhole to elsewhere, possibly to other black holes? Would they have enough energy to sustain a large enough wormhole that they start acting like one- or two-way tunnels for information across great distances, maybe even between galaxies?

Also: The wiki for Hawking radiation led me to the Unruh effect. Can the heat from this effect be used to generate even more acceleration? As in, a spaceship only needs enough energy to accelerate to the point where the Unruh-effect radiation is sufficient to maintain acceleration; after that point it can continue to accelerate without needing to use any fuel?


Taking your "Also" first:

The accelerations required for reasonably measurable Unruh temperatures will rip apart any structure we know of, so at present we only have hope for accelerating charged leptons. However even still we are probably only able to sustain the accelerations for on the order of nanoseconds, which poses seriously difficult engineering challenges to the detector-side of a measurement experiment.

That's the problem with Unruh radiation: acceleration has to be powered by something, and it's hard to separate the power source from the accelerated object. And if your object is accelerating strongly enough, it will pancake/disintegrate/explode. So we must use an outside power source, like in linear particle accelerators, to drive the extreme acceleration (at least many billions of billions of "g") of a particle that can survive it (or a stream of them), and try to work out the detection difficulties. This idea dates to J.S. Bell in 1983, who probably correctly summarized that the temperature changes would be "... too small for real experiments in linear accelerators."
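(For a sense of scale, a minimal sketch assuming the textbook Unruh formula T = hbar*a/(2*pi*c*k_B); the 1 K target is an arbitrary stand-in for "reasonably measurable":)

  # Invert the Unruh temperature T = hbar*a / (2*pi*c*k_B) to find the
  # acceleration needed to reach a given temperature.
  import math

  hbar = 1.0546e-34  # reduced Planck constant, J s
  c = 2.998e8        # speed of light, m/s
  k_B = 1.381e-23    # Boltzmann constant, J/K
  g = 9.81           # standard gravity, m/s^2

  T = 1.0  # target Unruh temperature in kelvin (arbitrary choice)
  a = 2 * math.pi * c * k_B * T / hbar
  print(f"a = {a:.2e} m/s^2 = {a / g:.2e} g")  # ~2.5e20 m/s^2, ~2.5e19 g

Which is indeed "many billions of billions" of g.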

Wormholes in General Relativity are weird. They are not forbidden, but they make a mess of trying to extract good (as in compatible with everyday experience) behaviour from dust, gas, stars, and so forth populating the universe as we know it. Worse, once you allow them, it's hard to suppress them, and we can be pretty confident (by virtue of actually being alive and seeing a reliable relationship between a running engine and the fuel it consumes) that they are not popping up everywhere willy-nilly like they would without some unknown suppression mechanism.

In order to allow them in the first place, one needs some "exotic" matter (exotic in the sense of undiscovered, unknown, and very different from ordinary matter and even any plausible form of dark matter). Introducing that can give you wormholes, but then you have to be very careful about where and when you introduce the exotic stuff. Otherwise, wormholes everywhere, eating through ordinary matter structures. Depositing energy into discharged batteries, draining non-leaking fuel tanks. Or worse, doing that in your cardiovascular system or brain. Also alien brains, stars, clouds of interstellar dust...

(Pervasive wormholes have been embraced by some name-brand theoreticians, but they then rely upon some unknown scale suppression mechanism: there are wormholes all over the place but they are always very very very tiny compared to atoms, so the effects on e.g. AAA-batteries and stars are undetectably small. In this approach you are already riddled with wormholes connecting subatomic bits of you to subatomic bits of e.g. our galaxy's central black hole, but alas, you remain unable to travel faster than light.)

We can of course say, "well General Relativity could be wrong", and explore the consequences of a theory wherein we can build usefully-traversable wormholes without unknown matter. That theory, however, is virtually certain to be wrong on details about the orbits of bodies in our solar system and the difference at Earth-bound receivers from test signals sent across the solar system originating in our space probes like MESSENGER.


I had a similar issue. This is my tl;dr understanding now.

The thing to realize is that the view from outside shows the object falling in and getting dimmer and dimmer and more and more red shifted. You never actually see the object disappear.

From the viewpoint sitting on the object, you fall right through the event horizon and get turned into mush, pretty quickly.

So, black holes can be created as the matter can clearly fall past the event horizon and add to the black hole mass.


>From the viewpoint sitting on the object, you fall right through the event horizon and get turned into mush, pretty quickly.

Do you? From an outside perspective you never fall in, and from an outside perspective the black hole eventually evaporates. From your perspective, as you fall in, the black hole evaporates in front of you.


Outsider perspective is about not being under significant influence of the black hole, not simply being outside the event horizon. As you fall in your perspective no longer follows that of a simple outside perspective.


You mean the black hole is created from your perspective, but not from the outside perspective?


A black hole is black because you can't observe it. Observing things falling past the region you can't observe isn't a requirement for it to be black; it's actually an anti-requirement. The outside perspective still sees a black hole, as in a region of space it can't observe anything from; it just can't observe what's happening to a perspective at or beyond the event horizon.


Shouldn't it be independent of observation and happen even if you look elsewhere? And why can't the falling matter be observed? If it hasn't reached the event horizon, it can be observed given a good enough telescope.


> From its perspective the hole will

Which "its"?


you can always start by performing a mass/energy balance...


One has to be skeptical even about the whole notion of the temperature of a black hole and existence of Hawking radiation.

In a book from 1927, Richard Tolman tried to generalize thermodynamics to General Relativity. One of the most interesting of his results was that in GR thermal equilibrium required a temperature gradient that depended on the gravitational field. Tolman's result is still sometimes discussed, but it is not settled whether he was wrong or right.

The catch is that if his reasoning was correct, then the black hole horizon, from the point of view of an external observer, should have a temperature of 0 K, which in turn implies no Hawking radiation.


I did a search on arxiv and I see people are writing a small number of modern papers on that exact topic. Are you familiar enough with them to summarize?


I studied theoretical physics and read Tolman’s book almost 30 years ago before switching to programming. After that I have been following the topic as a hobby.

As I understand, the math behind a particular generalization of thermodynamics that Tolman had used was sound. But there are other ways to try to merge the notion of temperature with GR that are considered more compatible with current attempts to merge gravity and Quantum Mechanics. We do not know which one is right.

In Tolman's approach one cannot have thermal equilibrium in a stationary, thermally isolated gravitating body if the temperature is constant. Isolated here is important. Think as if a star is surrounded by a membrane that no heat can penetrate, and consider the resulting temperature profile.

Extremely simplifying, to the point of risking a wrong impression: gravity has an inherent energy gradient itself, so to get an isolated stationary system without energy flow, the temperature must show a gradient as well.

Tolman himself did not try to apply that to black holes, but it is straightforward to apply his equations in that case, and that gives an absolute zero temperature at the event horizon. Again extremely simplifying, in Tolman's approach a black hole is a gigantic freezer from the point of view of an external observer. That neatly solves the apparent disappearance of entropy: there is none, as approaching the event horizon, from the point of view of the external observer, means losing temperature. So entropy is lost before the horizon and there is no information paradox.

Edit: for a star in a thermally isolated membrane, Tolman's approach gives that in thermal equilibrium the temperature drops towards the center, as the gravity is stronger there. Although that may sound counterintuitive, the idea is that stronger gravity slows down time, and from the point of view of an external observer that means a colder temperature. And at the black hole horizon time stops completely as seen by the external observer, so the temperature is zero.
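(For reference, the relation in question is usually called the Tolman-Ehrenfest condition; in LaTeX, with the standard static-metric conventions assumed:

  T(x)\,\sqrt{-g_{tt}(x)} = \mathrm{const}

For the Schwarzschild exterior, -g_tt = 1 - r_s/r, so the temperature a distant observer assigns is

  T_\infty = T_{\mathrm{local}}(r)\,\sqrt{1 - \frac{r_s}{r}} \;\to\; 0 \quad \text{as } r \to r_s,

which is the "0 K at the horizon" statement above.)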


A few of my friends in school went into physics very starry eyed and excited. They worked in academia. Then, they worked for space startups and NASA. Now they work in "business intelligence" and the tech ad business. I do enjoy Sabine's writing and perspective. I miss my friends' aspirations and enthusiasm for physics and space.


Sonic black holes have been studied experimentally. They are only weak analogs of actual black holes, so how much light they can shed on real black holes, I don't know, but it does seem to me that as long as they do have insights to reveal, we can't say that work on black hole information loss is purely mathematical. It's also possible that pure mathematical solutions will yield predictions that can be tested w/o a black hole.

So I'm a bit skeptical of her take on this subject, though if work in this space is unrewarding and there's more rewarding work to do elsewhere, then that makes sense. But then, I am not a physicist!

  https://interestingengineering.com/sonic-black-holes-and-the-information-paradox


Doesn’t the black hole just delay the information, not destroy it? Things that “fall in” from our perspective just fade down into a static low frequency frozen image on the horizon, and the remaining trip to the horizon as seen from the outside takes infinite time.

The falling perspective likewise loses timely access to information about the entire universe as the singularity fills their view.

I don’t see a paradox. Just the strange behavior of time at the limit.


That's not the paradoxical part. Matter goes in, sure. But before it goes in, it is something, it has a form and a composition, it is in a state and it contains information of its prior states as well.

The part that is problematic is that matter that enters the black hole is only returned to the universe as anonymous radiation.

The universe is stateful, and while not all processes in the universe are reversible, matter and energy do encode the states that led to their present state and thus the prior states can be inferred (by a hypothetical, powerful enough computer, for example).

The problem with black holes is that the Hawking radiation from a black hole does not encode any information about its prior state.


I think you missed the GP's point, which is that matter is never "returned to the universe" because from any frame of reference outside the horizon, the matter takes infinite time to transit the horizon, so it never actually appears to enter the hole.


How does that remain true after the black hole has evaporated and there is no longer a horizon?


Disclaimer: we are at the hairy edge of my knowledge here, so what I am about to tell you could very well be wrong.

Hawking radiation has never actually been observed. It is just something that pops out of the math if, as Sabine rightfully emphasizes in her video, you make certain assumptions. And one of those assumptions is that you have a fully-fledged black hole, i.e. an object that actually contains mass beyond the event horizon. We are used to thinking of this assumption as having actually been confirmed by observation, but it is not actually true. No one has ever actually observed a black hole, notwithstanding that we've ostensibly taken a picture of one. That image was of the radiation emitted by an accretion disk, not the black hole itself. Black holes themselves are, obviously, impossible to image.

So we don't actually know whether black holes actually exist or not. The only thing we've directly observed is their gravitational effects, and the gravitational effects of an actual black hole are indistinguishable from having all of the mass of the hole actually resident just outside the event horizon. What actually happens at the horizon is beyond the reach of our current theories because there, both gravity and quantum effects are significant, and we do not yet have a consistent theory of quantum gravity. Everything we think we know about black holes is actually the result of taking GR and QM and cramming them together in some ad hoc way by adding simplifying assumptions which may or may not actually be true.

This is the point Sabine was trying to make: the black hole information loss paradox is not a problem with physics, it's a problem with our current theories. We simply don't know how the universe actually behaves in the presence of extreme concentrations of mass/energy. The only thing that the BHILP actually tells us is that either GR or QM -- or both -- are wrong, mere approximations to the actual truth in the same way that Newtonian mechanics turned out to be an approximation to the actual truth (one that happens to work extremely well in weak gravitational fields).

But no one has a clue which one is wrong or how despite 100 years of effort. And one of the reasons for this is that we have no data, and no reasonable prospects for obtaining it. So we may just have to make our peace with not knowing.


> Black holes themselves are, obviously, impossible to image.

What we observe is the "shadow" of the black hole. The expectation is that the flux from the shadow should be consistent with zero. For M87* the observed flux ratio with the ring was ~10:1. See Paper 1:

First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole https://arxiv.org/abs/1906.11238


>Black holes themselves are, obviously, impossible to image.

Unless they produce Hawking radiation. Then by definition they are possible to image. In the original article she mentions that the temperature of known black holes is lower than the CBR, meaning that they are too cold to be seen against the background of cosmic background radiation.

I personally think that they produce no radiation and do not evaporate, but this is just unscientific philosophical opinion.


> Unless they produce Hawking radiation.

Fair point (modulo the practical difficulties of measuring Hawking radiation).


It's fascinating to think that our theories of everything around us are based on what's near enough to us to be measured and experimented on. Black holes are so far away, and so outside our normal observations of mass and energy, that we can't observe them well enough to do experiments.

It's like we've come up with two theories of how the universe works, and the universe is like "you guys are like 90% there, but here's a case where these don't work." It's fascinating to think that there's gotta be some one theory that can account for everything in the universe, both big and small, from black holes to quantum stuff. It's like we've dug ourselves two very deep holes over the years, and maybe we need someone to come along with a new hole that encompasses both, one that sees the whole picture. Anyways, now I'm just speculating.


> there's gotta be some one theory that can account for everything

Actually, there doesn't. The universe is under no obligation to operate according to laws. That the behavior of the universe is so lawful is quite remarkable. It didn't have to be this way. Our universe could be a simulation, and that simulation could have been created by some capricious being who makes all kinds of random shit happen just to fuck with us. That does not appear to be the case, but there is no reason that it could not have been.

Likewise, there is no reason why our brains should have the capacity to be able to figure this out. It may be that the Kolmogorov complexity of the universe is vastly larger than what the human brain is capable of dealing with. Again, this does not appear to be the case. In fact, it appears to be the exact opposite. We can explain 100% of the phenomena within our solar system, and even within most of our galaxy, with theories whose KC is shockingly low, small enough to be grasped by a single human brain. But it didn't have to be that way. And maybe it isn't that way. Maybe we have actually reached the limit of what the human brain is capable of (in terms of figuring out physics). I don't think so, but you can't rule out the possibility on the basis of the evidence we have.


I'm just saying there's gotta be, because of what we've observed so far. Like you said, we can explain 100% of the phenomena in our solar system. It leads me to believe that we can do that in the future for things outside of it.

Also, just because a theory is discontinuous at its boundaries doesn't mean we can't have a theory that is on the other edge of that boundary. The unifying theory is supposed to link both general relativity and quantum mechanics.

Obviously this is just my opinion but I think any system that is sufficiently observable, with enough time can be figured out completely. I don't think it's a matter of if our brains can understand it but if we can have the ability to run experiments on it and enough time.


Wall of text, didn't answer the question.


The question assumes that black holes evaporate, and we don't actually know that they do.

Is that better?


That is a more transparent and direct answer, yes. Maybe if you had led with that sentence, I wouldn't have been inclined to respond. But it also fails to address what I point out below. (As an aside, this answer is also not one that I find satisfactory on the actual topic.)

Dllthomas correctly pointed out that given the two premises that (from the frame of the observer) (1) [matter takes infinite time to transit the horizon] and (2) [due to Hawking radiation, black holes have a finite time span], then (2) resolves (1) when the black hole disappears.

To recap: instead of acknowledging that (2) resolves (1), you proceed to question the existence of Hawking radiation and black hole evaporation, which, compared to consensus, is a radical view and is not warranted. Also, the pivot was done in a way that seems to complicate and obfuscate rather than address the point directly (this is a common debate tactic; however, I'm not sure if you were conscious of the behavior or if it was more subconscious/rationalization (more likely); you may not have even been aware you were doing it).

It seems like you chose to reject consensus instead of simply accepting that (2) resolves (1), maybe because you seem to have a fixation on never conceding a point in a conversation. Sometimes it's ok just to say, "yeah, that's a good point".

(btw, if you're rejecting Hawking radiation, why say anything about black hole theory at all at this point? It could all be wrong, so there's no reason to speculate about it.)


> you seem to have a fixation on not ever conceding a point

That is quite the accusation coming from someone whose entry into the conversation was "Wall of text, didn't answer the question." But let's see...

> Dllthomas correctly pointed out...

dllthomas did not "point out" anything, correctly or otherwise. All he did was ask the following question:

"How does that remain true after the black hole has evaporated and there is no longer a horizon?"

This question assumes that black holes evaporate. That assumption may be incorrect. We do not know whether or not black holes actually evaporate. In fact, we do not even know whether or not black holes actually form at all. And therefore:

> (2) resolves (1) when the black hole disappears

That might be true if (2) were true. Even that is arguable, but it is neither here nor there, because we do not know whether or not (2) is true.

> compared to consensus is a radical view

Yes, of course. The consensus view leads to a paradox, and so we know that the consensus view cannot possibly be correct. We also know that decades of effort have not resolved this paradox, and so it is extremely unlikely that there is a simple straightforward solution that has simply been overlooked. So the correct solution will almost certainly be a radical departure from the current consensus.


I think that's the point. From an outside perspective the black hole will evaporate before the thing falls in. Thus a thing can never fall in. From its perspective the hole will emit more and more intense radiation and finally evaporate just before it hits the horizon.

If true, I think you can go even further and say no black hole can completely form; the collapsing matter just gets exponentially closer to being fully black until the effect of the Hawking radiation outweighs the gravitation, but it all evaporates before going fully black. No?

(This latter part assumes there's some Hawking radiation or equivalent from pre-black holes as well. And I'm not sure whether that would be unitary or not, so it may not resolve the information paradox anyway).

(Edit: I think the pre-Hawking radiation would be unitary, since the only reason Hawking radiation is not unitary is because BHs don't have information, but pre-black holes are not black holes. So doesn't that solve the info paradox? Without resorting to holography and whatnot? Where's the error?)


Maybe, but I think if it were easy to show that black holes (for whatever definition is important here) never actually form, the physicists would have noticed.


When they are paid for research on black holes, it becomes easy to overlook their absence. They can also say they do hypothetical research: if black holes don't form by collapse, maybe they form by other means, e.g. primordial black holes.


100% agree, but I'm still not seeing the flaw in the argument.


What is problematic about that? In what way does it violate the second law of thermodynamics? If anything, it seems like a great example of the natural tendency towards disorder. Also, it’s not possible to infer macroscopic prior states even with an infinitely powerful computer. When you mix water at different temperatures, entropy is irreversibly increased. It’s not possible to tell the initial temperatures just from the final state.


It's a subtle point. Several key theorems of thermodynamics rely on the ability to count unique states. If you could evolve the exact same state in two different ways, the proofs would fail.

That's how thermodynamics derives indistinguishable macro states from distinguishable micro states. Throw that off and thermodynamics stops working.

Since it has worked well so far they're reluctant to throw it out. Something has to go, and since they already know there's something funny going on where black holes meet quantum mechanics, that's the lowest hanging fruit.
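(As a toy illustration of that state-counting, assuming the standard Boltzmann picture S = k_B ln(Omega), with coin flips standing in for microstates; this is a cartoon, not the black hole calculation:)

  # Count the microstates Omega compatible with a macrostate (here: k
  # heads among N coins) and form the Boltzmann entropy S = k_B * ln(Omega).
  import math

  k_B = 1.381e-23  # Boltzmann constant, J/K

  def boltzmann_entropy(N, k):
      omega = math.comb(N, k)  # number of distinct microstates
      return k_B * math.log(omega)

  # The mixed macrostate has enormously more microstates than a nearly
  # ordered one, and the proofs lean on these counts being of distinct,
  # uniquely evolving states.
  print(boltzmann_entropy(100, 50))  # ~9.2e-22 J/K
  print(boltzmann_entropy(100, 1))   # ~6.4e-23 J/K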


So you are saying that if I solved the Schrödinger equation for all the particles that make up my wife, I could know where she wants to go out for dinner?


> In what way does it violate the second law of thermodynamics?

If you read my comment, you will find no mention of the second law of thermodynamics or any violation of said law.

In fact, black holes need to evaporate in this way in order to comply with said law of thermodynamics.

> When you mix water at different temperatures, entropy is irreversibly increased. It's not possible to tell the initial temperatures just from the final state.

It is still water, however. You may not be able to say what temperatures W(a) and W(b) were from W(c), but you could at least say that W(c) may actually be W(ab) i.e. may be the mixture of two bodies of water W(a) and W(b).

Bring the same water to a black hole and you have: W(a) went into a black hole and x came out, where x is some random heat.

If you detect the heat x, what could you say about anything that may have been before?

If W(a), W(b), Chair(a), Xylophone(g), Stone(f), Person(z) or anything went into the black hole, only heat x comes out in the end.


Perhaps I'm missing some advanced theories or contradictions, but it seems to me quite intuitive, or at least reasonable, that there exist some very efficient generators of entropy, i.e. black holes. Even if it hadn't fallen into a black hole, such water would have decayed to heat and fundamental particles eventually.

Why is it a contradiction that it happens much faster in some places in the universe? It may be surprising, but what does it contradict? I'm asking sincerely.

Regarding the water thought experiment: if you recite a poem inside a chamber, it turns into heat. If you are outside and can only measure that the room slightly increases its temperature, you also can't recover the poem.


That's because something may fall into the black hole with larger entropy than the usual "black-hole matter", and it would decay into the lower-entropy particles.

But whether something with higher entropy than "black-hole matter" exists is a more refined question than whether black holes erase the entropy of what has fallen into them. And quantum mechanics has a problem with erasing entropy, even if it's turned into a larger amount.


What is the difference between this paradox and a particle entering a gas container, thermalizing, and then being ejected (evaporating the gas)? Thermal states are described by macro-observables only.


Totally uninformed here:

Have we proved that Hawking radiation is without information (or without enough of it), or is it just 'encrypted' at a level we can't distinguish from noise?


I'm also totally uninformed, but my gut feeling is that physical systems don't encrypt information, at least not in the way we assume when talking about encryption. Also if you go down that route you risk having something that can't be proven: how do you prove that black holes are not using a one time pad to encrypt information, with each black hole using a different and random key?


I mean, Hawking radiation itself is unproven - we can't experimentally verify it because the temperature of the radiation from stellar-mass black holes would be too low. For small black holes, nobody has seen one 'pop'. In theory it's testable, but probably not in our lifetimes.


This seems to address it (from the article):

"[Hawking] radiation is thermal which means it’s random except for its temperature, and the temperature is inversely proportional to the mass of the black hole. This means two things. First, there’s no new information which comes out in the Hawking radiation..."

Its mass is one of the few things we know from the outside.
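(To put numbers on "inversely proportional to the mass", a minimal sketch using the standard Hawking temperature formula; the solar-mass input is just a convenient reference:)

  # Hawking temperature T_H = hbar * c**3 / (8 * pi * G * M * k_B):
  # the heavier the hole, the colder it is.
  import math

  hbar = 1.0546e-34  # reduced Planck constant, J s
  c = 2.998e8        # speed of light, m/s
  G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
  k_B = 1.381e-23    # Boltzmann constant, J/K

  def hawking_temperature(M):
      return hbar * c**3 / (8 * math.pi * G * M * k_B)

  M_sun = 1.989e30  # kg
  print(hawking_temperature(M_sun))  # ~6.2e-8 K

That ~6e-8 K sits far below the ~2.7 K cosmic microwave background, which is why the radiation from known black holes is unobservable in practice.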


Probably for OP this isn't enough: if the radiation were carrying encrypted information, then it would look random without knowing the "key", whatever a "key" could be in the context of a physical system. But I think that talking about encryption here, without a solid preparation in the field of physics, is just trying to apply something we know to a problem we have no idea how to approach.


How do we know it's thermal? How do we know there aren't small fluctuations that are too small for us to detect millions of miles away?


It's important to understand that Hawking radiation is not something we've observed and have noticed seems random.

Instead, Hawking radiation is a prediction of a mathematical model. In that model, Hawking radiation is purely random.

If I remember correctly, Hawking radiation is postulated to arise because of fluctuations in the vacuum giving rise to virtual particle pairs. Normally, these would annihilate almost instantly. But when such an event happens near the event horizon, one of them may fall into the black hole, leaving the other one to "escape", making it appear as if the event horizon is emitting radiation. Since this radiation is caused by random fluctuations in the void outside the event horizon, it can't be correlated with anything past the event horizon, so it can't carry information about that.


AFAIK Hawking radiation has never been detected. It is hypothesized on the basis of current theories of quantum mechanics and gravity, and those assumptions imply a thermal distribution of energy. So, we have a reason to think that Hawking radiation occurs and has this property, while no-one so far has proposed a mechanism that would encode data on it.


Thermal radiation from a classical object looks "random" but still obeys (and in fact inspired) quantum theory. The information about the past of the object is there, it's just unfeasible to recover. I suspect Hawking radiation is the same way.


Why do you suspect that? Do you deny that quantum mechanics is a valid model of the universe?

Or do you believe quantum mechanics is equivalent to relativistic mechanics?

Either would unwrite a century of physics.


I'm proposing that quantum mechanics is right and GR is wrong, particularly as regards the no-hair theorem. I really can't tell how you jumped to the opposite proposals from my comment.


A lot of complexity is hidden behind the term “information”. You should be careful not to just use your existing intuition/definition for this word, it’s extremely specific Quantum Mechanics jargon here.

This is talking about quantum states and how they describe the world. Each state corresponds to physical (quantum) reality, conforming to the laws of physics. So it’s not like you can just twiddle bits to make new representations.

I think this is an area where appealing to the lay reader’s intuition is counterproductive. If you haven’t solved the Schrodinger equation before then you definitely shouldn’t be trying to intuit things about quantum systems; they are just weird and kind of irreducibly complex from the mathematical representation.

Let me attempt to go against my advice above and give you some intuition for why encryption doesn’t parse here. It would be like you have a program with some static types, some classes, and then say “what if we just encrypt the memory location for this object on the heap and run the program”. The program is the thing that is running (laws of physics), the variables on the stack/heap are the state for the current execution, and it has no concept of decryption, so it would just produce garbage and crash. In the same way, the quantum physics description of a system has superposed states that are all valid configurations of the physical system, and no notion of “encryption”. So there is nowhere in the model of physical reality (and therefore unless we are missing some new Physics, nowhere in the reality that is modeled) for this information to “hide”.

Or taking a different tack, “thermal entropy” means it’s just a bunch of gas buzzing around randomly at the same temperature - there is no physical place for structure to be “encrypted”. Where is the “key” in your model of the world? It’s just a cloud of gas. What physical process performed the encryption? That would require a complex structure, yet we are talking about a cloud of particles emitted when one half of a particle-antiparticle pair is captured by the black hole’s event horizon. There is no place in a workable physical model of the world for an entity that performs encryption on the quantum states (whatever that might mean).

All this just points to why you can’t encrypt states in this way, not why the black hole information paradox is a problem. For that you really do need the maths; eg see https://www.cs.umd.edu/class/fall2018/cmsc657/projects/group... for the Physics here; while that requires graduate-level understanding of QM, hopefully the intro will be useful.


> If you haven’t solved the Schrodinger equation before

Heh heh, define "solved"... If I remember my QM class correctly, we "solved" the Schrodinger equation for the hydrogen atom; the book achieved this by observing "... which gives us <equation>, and, hey! look at this! it turns out that FamousLastName polynomials--which you've never heard of--turn out to solve this equation, here's their definition, and... problem solved!" (If I remember correctly they were Legendre Polynomials, but maybe that was some other equation. And to be fair to the book, it was a pretty good book.) After having "solved" the equation for one isolated, most-basic atom, they went on to say, "basically we have no idea if there is an analytic solution for anything more complex".

If manipulating bra and ket vectors symbolically around an equals sign counts, we did a lot of that, although it did not develop in me the least intuition about QM. But then, I never really understood what those bra and kets were doing, and my grades steadily dropped (fortunately for my GPA there were only three courses). So it's possible I might have developed some intuition had I understood what was going on.
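(For what it's worth, you can now "solve" it symbolically for free. A sketch using SymPy's built-in hydrogen wavefunctions; the radial solutions are built from Laguerre polynomials, while the associated Legendre polynomials live in the angular part via the spherical harmonics:)

  # SymPy ships the hydrogen radial wavefunctions R_nl; pull out the
  # ground state and check that it is properly normalized.
  from sympy import symbols, integrate, simplify, oo
  from sympy.physics.hydrogen import R_nl

  r = symbols("r", positive=True)
  R10 = R_nl(1, 0, r, 1)  # n=1, l=0, Z=1: the hydrogen ground state

  # Normalization: the integral of |R|^2 * r^2 over [0, oo) should be 1.
  print(simplify(integrate(R10**2 * r**2, (r, 0, oo))))  # -> 1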


> the encryption ... would require a complex structure

Not a quantum information theorist, but am a scientist. We actually do have (conjectured) low-complexity one-way functions, so this by itself is not necessarily true. I do agree that it's fairly unlikely that natural processes execute this algorithm, though.


I think the GP is thinking of two-way encryption under a symmetric key here, else it’s hard to see how the information isn’t still “lost”.


Obviously no idea if this is true, but an object falling into a black hole could maybe emit some radiation containing the key before the majority of the mass is "encrypted" into the black hole.


The theorems that derive its existence don't use the underlying state. They come from the margins of a black hole, which completely hides what's in it. That's the No Hair Theorem.

If information leaks out they'll have to figure out why the No Hair Theorem is wrong.


But that just suggests that at some high enough energy QM becomes nonlinear and singular and nonreversible, and information is destroyed.


Hmm I thought the laws of physics were time-reversible, but then I found this: https://www.wolframscience.com/nks/notes-9-3--time-reversal-...


Same. https://www.youtube.com/watch?v=L2idut9tkeQ is the relevant episode from Space Time.


I have wondered that, though without knowing enough to even figure out if it is a reasonable question. One follow-on thought I had was this: what about the matter that becomes the black hole when it forms? When a star collapses into a black hole, where does the event horizon first appear?


What do you mean "where"? It appears at the region one Schwarzschild radius away from the center.
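(Concretely, for the standard non-rotating case:)

  # Schwarzschild radius r_s = 2*G*M / c**2.
  G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
  c = 2.998e8    # speed of light, m/s

  def schwarzschild_radius(M):
      return 2 * G * M / c**2

  print(schwarzschild_radius(1.989e30))  # the Sun: ~2.95e3 m
  print(schwarzschild_radius(5.972e24))  # the Earth: ~8.9e-3 m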


Sure, but the Schwarzschild radius is a function of the mass within it. I'm thinking by analogy to a galaxy or globular cluster, which has enough mass to be a black hole, if it were dense enough, but it will not become one unless and until some dissipative process has caused it to collapse towards its center. When this happens, I am supposing that the black hole will first form in the center (where the gravity well is deepest), with a large part of the cluster mass initially outside of it, and grow as the friction continues to feed mass into it. If this is, in fact, a reasonable model, would something similar happen in a collapsing star? (Only much faster.)


This seems like a fully general (and somewhat unconvincing) argument against doing theoretical physics. The key paragraphs are:

"What’s going to happen with this new solution? Most likely, someone’s going to find a problem with it, and everyone will continue working on their own solution. Indeed, there’s a good chance that by the time this video appears this has already happened. For me, the real paradox is why they keep doing it. I guess they do it because they have been told so often this is a big problem that they believe if they solve it they’ll be considered geniuses. But of course their colleagues will never agree that they solved the problem to begin with. So by all chances, half a year from now you’ll see another headline claiming that the problem has been solved.

And that’s why I stopped working on the black hole information loss paradox. Not because it’s unsolvable. But because you can’t solve this problem with mathematics alone, and experiments are not possible, not now and probably not in the next 10000 years."

First, let's grant that no experimental evidence will be forthcoming in thousands of years. (It's conceivable to me that some astronomers will get lucky and provide some indirect evidence of some sort, but ignore this for now.)

Why do we believe that this problem can't be solved, or at least profitably investigated, with mathematics (and physical intuition, and the rest of the experimental evidence we have about black holes – it's definitely not "mathematics alone")? At least in principle, one can imagine that there is a finite set of possible solutions (corresponding to dropping various assumptions, as she mentions earlier in the article), and all but one of those can be ruled out a priori via mathematical inconsistencies, a contradiction with physical evidence from non-black hole phenomena, or other undesirable properties.

Maybe there are special features of the black hole information problem that make this impossible. But this overall mode of mathematical investigation is how theoretical physics works and has always worked. Einstein discovered general relativity by tweaking assumptions and deducing the theory was likely to be true because it resolved various issues, but we had no direct test [edit: of gravitational waves] for about 100 years. It would have been unfortunate if he concluded the problem was pointless to work on because no experimental evidence would manifest within his lifetime.

(Example problem fixed by Einstein: https://aether.lbl.gov/www/classes/p10/gr/Precessionperiheli...)


I don't think Sabine claims that we can't possibly discover a testable theory that also solves the black hole information loss paradox. However, that doesn't mean that investigating the black hole information loss paradox problem itself is a good way of arriving at that theory.

Theoretical physics has always been most successful when investigating proven experimental inconsistencies - the measured invariance of the speed of light in different rest frames for special relativity, for example, or the photoelectric effect or ultraviolet catastrophe for QM.

Investigating other effects of quantum gravity and arriving at a theory that can be tested here on Earth would potentially lead to a testable theory that also provides insights into black holes. Or, perhaps investigating the measurement problem could lead to a more fundamental non-linear theory (which QM would be only an approximation of) that would be consistent with information loss.

These are both much lower hanging fruit than worrying about effects that we have no hope of measuring (note that we can't even prove that burning a book doesn't lose information - it's just easier to explain where the information could, in principle, be going, but it's still impossible to measure with current or foreseeable technology).

Your example of gravitational waves is exactly on the money for this. If, instead of focusing on the inertial & gravitational mass equality "coincidence" and on gravity's effect on light, Einstein had tried to come up with a model for gravitational waves as the only thing he investigated, chances are he would not have arrived at GR. Perhaps he would have arrived at some SR + gravity waves theory that would have taken 100 years or more to disprove, and missed all the other insights.


Was the ultraviolet catastrophe really an experimental inconsistency? It seems best understood as a theoretical inconsistency: the theory predicts a diverging (infinite) value, which we know would be wrong regardless of what the experimental evidence is. It's a mathematical problem, not an empirical one. You also give the example of "the measured invariance of the speed of light in different rest frames," but Einstein claims SR was motivated by the invariance of Maxwell's equations (a theoretical consideration), not e.g. Michelson–Morley. So it sure seems that investigating theoretical inconsistencies has motivated a lot of good work.

I agree that, practically speaking, studying the black hole information paradox might not be so productive. Maybe there are special features of the problem that make it difficult to investigate productively through theoretical considerations alone. But this is not how I read Hossenfelder. Taking her blog post literally, she seems to be against theoretical investigations of any phenomenon (any inconsistency, etc.) where experimental tests aren't forthcoming. I think this is ridiculous.

Maybe she doesn't actually believe this, but then she needs to make an argument specifically about the black hole information paradox and why this particular problem is unproductive, not launch a broadside on non-empirical reasoning more generally.


> Was ultraviolet catastrophe really an experimental inconsistency? It seems to best understood as a theoretical inconsistency: The theory predicts a diverging (infinite) value, which we know would be wrong regardless of what the experimental evidence is.

Well, that theory also predicts that the universe can't exist for very long, which is a pretty big experimental inconsistency.

> You also give the example of "the measured invariance of the speed of light in different rest frames," but Einstein claims SR was motivated by the invariance of Maxwell's equations (a theoretical consideration), not e.g. Michelson–Morley.

Well, Maxwell's equations were relatively well verified by other experiments, so there were good reasons to at least tentatively accept their prediction of a constant speed of light* in any rest frame as a given. Even if the Michelson-Morley experiment was not big on his mind, it was still relatively clear that this type of experiment could be performed with technology already available at the time, as the speed of light had already been measured with pretty good precision for a few decades.

So, SR was not some highly speculative theory based only on extrapolating other theories, as solutions that only address BHILP but not other problems of QM/GR are currently.

* or at least of electromagnetic radiation; not sure when it became accepted that light was EM radiation, relative to the SR paper


> Well, that theory also predicts that the universe can't exist for very long, which is a pretty big experimental inconsistency.

Sure, but I don't need to appeal to this fact to know there's a problem that must be solved. The infinities are enough.

> Well, Maxwell's equations were relatively well experimentally verified by other experiments, so there were good reasons to at least tentatively accept their prediction of a constant speed of light* in any rest frame as a given. Even if the Michaelson-Morley experiment was not big on his mind, it was still relatively clear that this type of experiment could be performed with technology already available at the time, as the speed of light had been measured with pretty good precision already for a few decades.

But again, even if MM couldn't have been performed, Einstein still would have had good theoretical reasons (consistency with Maxwell's equations) to posit SR. And indeed, this was the path the discovery seems to have actually taken.

> So, SR was not some highly speculative theory based only on extrapolating other theories, as solutions that only address BHILP but not other problems of QM/GR are currently.

Sure, but this is a difference in degree, not in kind. You accept that theoretical arguments for new phenomena that are not directly testable (at present) are acceptable if they're convincing enough. The question about BHILP under this view (which I agree with) is then about how convincing the theoretical arguments are.

Hossenfelder, taken literally, suggests that no theoretical arguments will ever be good enough in the absence of empirical evidence.


> Sure, but I don't need to appeal to this fact to know there's a problem that must be solved. The infinities are enough.

By that logic, you shouldn't believe in black holes / GR at all, right? In practice, we can always replace infinities with some arbitrarily large numbers that don't grow all the way to infinity because of yet-unknown physics (probably quantum gravity, in the case of black holes).

> But again, even if MM couldn't have been performed, Einstein still would have had good theoretical reasons (consistency with Maxwell's equations) to posit SR. And indeed, this was the path the discovery seems to have actually taken.

I don't think this would have been promising if the MM experiment seemed a hundred years away, and in general I don't think SR would have been as compelling in that case. I suspect we would have still had arguments about an aether if experiments on the speed of light in different inertial frames had remained beyond reach.

> Sure, but this is a difference in degree, not in kind. You accept that theoretical arguments for new phenomena that are not directly testable (at present) are acceptable if they're convincing enough. The question about BHILP under this view (which I agree with) is then about how convincing the theoretical arguments are.

Sure, it's a difference in degree in the end. The plausibility of having an experiment "soon" is the degree here. I don't think that Hossenfelder would argue that if an experiment is only possible 2 years from now, or maybe even 20 years from now, you shouldn't work on some theoretical subject. But, when that time horizon stretches well beyond your lifetime and the lifetime of your students, it's perhaps time to reconsider.

I also know for sure she doesn't have a problem with doing theoretical research where you don't yet know how something would be testable, as long as you plan to define an experiment as well. She explains this in some detail when discussing her own work on superdeterminism - where she plans to first define a concrete model, and then come up with experiments which could invalidate that particular model - instead of giving up a priori because "there's no known way to test such a theory".


> By that logic, you shouldn't believe in black holes / GR at all, right? In practice, we can always replace infinities with some arbitrarily large numbers that don't grow all the way to infinity because of yet-unknown physics (probably quantum gravity, in the case of black holes).

I don't see how this follows. The ultraviolet catastrophe (or other divergences) says we have to fix something, it doesn't say that we should choose a weird ad hoc fix.

> I don't think this would have been promising if the MM experiment seemed a hundred years away, and in general I don't think SR would have been as compelling in that case.

It probably wouldn't have been as compelling, I agree. Don't get me wrong; the gold standard is empirical evidence. But the invariance arguments and the precession of Mercury would be grounds for taking it seriously. Einstein's argument for the theory depends only weakly on MM.

> Sure, it's a difference in degree in the end. The plausibility of having an experiment "soon" is the degree here. I don't think that Hossenfelder would argue that if an experiment is only possible 2 years from now, or maybe even 20 years from now, you shouldn't work on some theoretical subject. But, when that time horizon stretches well beyond your lifetime and the lifetime of your students, it's perhaps time to reconsider.

This seems like an unnecessarily narrow view of what constitutes worthwhile physics. First, because the experimentalists are very clever, and there's no knowing what indirect tests they might propose. But mainly because, if we can make theoretical progress on an important question, why not do that (even if empirical data is not forthcoming)? This view suggests that a physicist in 1915 (or 1925, etc.) should not try to work out properties of gravitational waves, for example, which seems obviously ridiculous to me. (The direct confirmation came about 100 years later.) If the theoretical motivation for doing so is solid, why not?

I agree that BHILP paradox is probably a different case, one where we might really have no shot of saying something useful theoretically. But this requires getting into the specific details of BHILP. These general statements about what is and isn't good physics because of near-future testability all seem clearly suspect.


> I don't see how this follows. The ultraviolet catastrophe (or other divergences) says we have to fix something, it doesn't say that we should choose a weird ad hoc fix.

My point was that we have accepted GR even though we know it implies a divergence at the center of a black hole, which we know exists. We assume the math is probably mostly right, we do have a huge curvature at that point, but it can't be literally infinite. We assume that if we had a slightly better theory (probably quantum gravity) that would fix the divergence without changing too much else.

By contrast, we couldn't say "the theory that predicts the ultraviolet catastrophe is mostly right, we're just missing a tiny piece to stop the divergence", because we had good experimental evidence that atoms don't collapse at all, let alone almost instantly. So, while the infinity alone was a motivation to look for a better theory, it was the extreme contradiction with experiment that motivated a fundamentally different theory instead of an adjustment.

> But the invariance arguments and procession of mercury would be grounds for taking it seriously. Einstein's argument for the theory depends only weakly on MM.

Looking this up some more, I had a mistaken impression. The independence of the speed of light from the velocity of the source was already relatively well established by 1905 - in fact, this was part of the motivation for Maxwell's equations from 1865. Maxwell's equations had been thoroughly tested and were well accepted, especially because of experiments by Hertz in 1900. However, the dominant concept was still that light is a wave in the luminiferous ether - but several attempts to measure the interaction with this ether had produced null results. Even what we now call the Lorentz factor had been experimentally measured through experiments on the speed of light in a moving medium (then thought to be a drag coefficient between the medium and the ether).
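(For reference, the factor in question, in LaTeX:

  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

with v the relative velocity and c the speed of light.)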

Einstein then came up with the idea that constancy of the speed of light should apply in any reference frame, not just some special frame defined by the ether. That means that not only is the speed of light independent of the speed of the source (which isn't that strange for a wave), but it's also independent of the speed of the observer. This was only experimentally proven much later, in 1932, by the Kennedy-Thorndike experiment.

Still, SR followed from theories that were well proven empirically, and had obvious experiments that could falsify it.

> This view suggests that a physicist in 1915 (or 1925, etc.) should not try to work out properties of gravitational waves, for example, which seems obviously ridiculous to me.

Well, I think here there is a difference between trying to work out the consequences of an otherwise well-established theory (such as working out exactly what gravitational waves would look like in GR, or arguing based on Maxwell's equations) and trying to come up with a new theory that modifies existing ones (such as grand unification or some solutions to the BHILP).


> Still, SR followed from theories that were well proven empirically, and had obvious experiments that could falsify it.

Sure, my point is just that reasoning based on thought experiments, parsimony, etc., played a key role in its discovery.

> Well, I think here there is a difference between trying to work out the consequences of an otherwise well-established theory (such as working out exactly what gravitational waves would look like in GR, or arguing based on Maxwell's equations) and trying to come up with a new theory that modifies existing ones (such as grand unification or some solutions to the BHILP).

I agree, but this claim gets a bit far from what I take Hossenfelder's point to be (as does the UV catastrophe stuff). You and I both accept that certain theoretical activities are productive, even in the absence of empirical evidence (e.g. working out the consequences of a well-established theory). Hossenfelder's argument, as best I can tell from that blog post, implies that she does not accept these activities as worthwhile. I think there is an interesting conversation to be had about exactly where we should draw the line between worthwhile and not worthwhile theorizing when we don't have data forthcoming, and I think there's a good case to be made that the study of BHILP falls on the "not worthwhile" side of that line. But Hossenfelder doesn't make that case, or seem to admit any nuance: If there's no data, you're just "lost in math" and not doing productive work.


> If there's no data, you're just "lost in math" and not doing productive work.

I don't think that's her position, from what I've seen in many other videos. She is a theoretical physicist, after all, not an experimentalist.

I think her position can better be summed up as "if there is no data, and no realistic possibility of getting data, then you're just 'lost in the math' and not doing productive work".

Of course, "realistic" here is both subjective and a matter of degree, but at least it means it should be a consideration you decide for yourself before investigating a field.

Additionally, I think, though I haven't necessarily seen her be explicit about this, that she would only apply this test to working on new theories, not to exploring the consequences of already-tested theories. Of course, even if I am right that she would make this distinction, the line between "new theories" and "exploring the boundaries of existing theories" can be pretty thin and subjective in places as well.


The mathematical development of non-Euclidean geometry required for Einstein's general relativity took place over the previous century and really changed people's thinking. From Poincaré 1905, in the chapter on non-Euclidean geometries, the conclusion was:

> "In other words, the axioms of geometry (I do not speak of those of arithmetic) are only definitions in disguise. What, then, are we to think of the question: Is Euclidean geometry true? It has no meaning. We might as well ask if the metric system is true, and if the old weights and measures are false; if Cartesian co-ordinates are true and polar co-ordinates false. One geometry cannot be more true than another; it can only be more convenient."

https://www.gutenberg.org/files/37157/37157-pdf.pdf

Prior to those developments by Riemann etc., people like Kant claimed space was flat, as there was no other possible mathematically consistent geometrical option:

https://www.ln.edu.hk/philoso/staff/sesardic/Kant.html

This is all comparable to a quote in the posted article:

> "But. There are many different ways to resolve an inconsistency because there are many different assumptions you can throw out. And this means there are many possible solutions to the problem which are mathematically correct. But only one of them will be correct in the sense of describing what indeed happens in nature. Physics isn’t math. Mathematics is a great tool, but in the end you have to make an actual measurement to see what happens in reality."

It would seem, then, that the study of black hole information loss is at present more in the area of mathematics than it is in physics, much as is the case with string theory (for which Fields Medals have been awarded, but not Nobel Prizes in Physics). It might however go the way of non-Euclidean geometry and Einstein's general relativity at some point in the future.


> Einstein discovered general relativity [...] but we had no direct test for about 100 years.

General relativity was tested by the Eddington experiment in 1919: https://en.wikipedia.org/wiki/Eddington_experiment

(I mostly agree with your overall point, this is just a minor quibble.)


As that article mentions, the data quality was abysmal. I once saw a remark about the error bars being larger than the supposed effect - though I cannot find it, and think that might be overly harsh.

The 1922 test of starlight deflection by the Sun was much, much more accurate. And it did take about 100 years to detect gravitational waves. Though the earliest test was probably the perihelion of Mercury, which I think was covered in the original GR paper.


That seems to be the way of cutting edge science. The data that Hubble gathered that showed the expansion of the universe is considered terrible by today's standards, but it was enough to prove it.


Sorry, I meant direct tests of gravitational waves. I think that claim is accurate.


Why is that one thing particularly important? The point is that general relativity had immediately testable predictions. It wasn't just math.


General relativity predicts gravitational waves. For the theory to be correct, gravitational waves need to exist. You haven't fully confirmed the theory experimentally unless you confirm that consequence.


That doesn't mean that he didn't have ideas for testable predictions of the model. It implied all kinds of things, some of which he had not even thought of (like black holes). It's quite different to propose a model with some novel predictions that you can validate than to propose a model that can never be tested in any form at all.


Some predictions of the model were tested soon after Einstein published the theory. Others, like gravitational waves, were not. Yet people still believed in gravitational waves long before they were empirically confirmed.

I don't see any possible basis for that belief (at the time) other than arguments based on non-empirical reasons like consistency and parsimony. (That is, it would be strange if other predictions of the model worked out but this one didn't.) Yet it is exactly this kind of theoretical reasoning that Hossenfelder, taken literally, seems to reject (in the absence of experimental data).

I'll also note that any other solution of the black hole information paradox has to be consistent with the rest of what we know about physics, and any future empirical observations. So it can't be completely untethered from reality; it makes testable claims in this way. Further, it's not clear such solutions can never be directly tested, or will never be found to imply novel testable consequences.


This is a useless way of looking at things - even the task of just listing all the implications of a theory is probably unbounded, and if, at any point, you assume the correctness of another theory, you've already given up maximal skepticism. If you are only maximally skeptical in some matters, then you are being inconsistent.


Sure. But this is a huge, incredibly novel prediction. It's not some trivial matter.


You're double-counting. Starting from Newtonian mechanics, gravitational waves are indeed a huge, incredibly novel prediction. So is gravitational lensing. So is any other consequence of a coupling between the metric and the stress-energy tensor - because that's the real novel prediction.

Once you grant that the metric varies smoothly over space and time, gravitational waves are pretty mundane.


I don't think one could say it was obvious, if that's what you mean by mundane.

Einstein for many years went back and forth on the issue of whether gravitational plane waves could be observed even in principle (sometimes accepting his initial findings, sometimes accepting Eddington's point that gravitational plane waves were unphysical artifacts of the approximation to the full theory Einstein had to use for calculational tractability, sometimes arguing that plane waves were physical but unstable, and several variations on all of these).

It wasn't until shortly after his death that it became widely accepted that gravitational radiation from a binary could couple to mass at a distance (notably Pirani 1956, reprinted <https://sci-hub.se/https://link.springer.com/article/10.1007...>, the two paragraphs after eq (2.17) on the 9th page of the linked pdf, p. 1223[1], and the three paragraphs after eq (3.7) on the 12th&13th pdf page, pp 1226-1227 being particularly interesting), which in turn provoked Feynman's sticky bead argument and the work by Bondi, Synge and others.

I don't understand the point the parent commenter has been trying to make in this conversation, but the entire enterprise of Einstein's theory of gravitation did not in its early days depend sensitively on the details (or even physicality) of gravitational radiation. Indeed, it was not widely accepted during his lifetime that gravitational waves could be measured even in principle. And even then it was almost twenty years before the first relativistic binary (Hulse-Taylor) was discovered and a further six or seven before it became clear that gravitational radiation is a useful test of various theories of gravitation. <https://ui.adsabs.harvard.edu/abs/1982ApJ...253..908T/abstra...> (PDF freely available via link in top left): "With the exception of general relativity and the Brans-Dicke theory, none of [these alternative gravitational] theories predicts even the proper sign of orbital period change due to emission of gravitational radiation, let alone the proper magnitude".

I would insert the words "local, minimal" before your "coupling between the metric and the stress-energy tensor". c, whatever its value, is relevant, and there are plenty of gravity theories with non-minimal couplings (NMDC) and spooky action at a distance (https://www.phy.olemiss.edu/~luca/Topics/ft/nonlocal.html is a "zoo"), which tend to have different post-Newtonian effects. I would almost add "unique" as a qualifier too, but e.g. Brans-Dicke is still technically viable.

- --

[1] Of particular (pardon the pun) interest to anyone who has read Synge's five point curvature detector.


You can make the claims of your original post (and arguably more effectively) without the subjective (and rather idiosyncratic) view that GR was at some sort of indeterminate level of credence until the detection of gravitational waves.


Sorry, I wrote poorly. I don't mean to say that GR or gravitational waves were in some sort of indeterminate level of credence. I mean to say that our credence in gravitational waves in the absence of experimental confirmation is justified by exactly the kinds of non-empirical evidence that Hossenfelder rejects. So it's clear that she's being too restrictive in what she believes is worthwhile theoretical physics.


General relativity predicts singularities. By your argument we haven’t confirmed GR experimentally until we’ve observed a singularity.


Einstein is a great example where his intuition served him well for most things, but not quantum mechanics. And it’s not his fault. He was worried spooky action at a distance was opening up physics to be nonlocal, which he feared would ruin scientific progress for practical reasons.

Quantum mechanics is also something without experiment no one would come up with in a million years.

Einstein was right to fear science antithetical to the enterprise. If QM were a tiny bit more nonlocal, our ability to do experiments would be much more limited.

His wisdom about the limits of science was correct but QM was something no wisdom at the time could foretell without experiments.

Maybe you aren’t aware but there are probably 50 or more interpretations of QM by now. And it’s even worse on the string theory side.

I fear we have always relied on experiments, and going even 100 years without one is cause for alarm.


> Quantum mechanics is also something without experiment no one would come up with in a million years.

You may find the book Quantum Computing since Democritus an interesting read. Its motivating theme is how the ancient Greeks might have deduced the basics of QM.


> This seems like a fully general (and somewhat unconvincing) argument against doing theoretical physics.

Not exactly, since this quote definitely doesn't apply to all theoretical physics:

> experiments are not possible, not now and probably not in the next 10000 years.


The error is that, even if it's true that an experiment can't be done for the next 10,000 years, this doesn't mean that someone else couldn't think of a different experiment that fulfills the initial prerequisites and provides a workable outcome/answer.

I think the writer is being too fatalistic: simply because he/she can't come up with a workable experiment doesn't mean no one else can. Because of all the nonlinearities of "life", we are bound to have someone else come up with a different experiment which answers the question - just picture that there are several ways to prove General Relativity with experiments, not just "the one".


At least for cosmic black holes, there is no way to measure the correlation of hawking radiation with the objects composing the black hole unless you can detect and store information about all objects going into the black hole and all radiation coming out, and then look for correlations between these. Even assuming you could store and process this literally stellar amount of information, you would have to be extraordinarily lucky for the correlation to arise immediately. More likely, you would need to do this for the entire lifetime of the black hole, and only after it's (mostly) evaporated could you run the analysis. Even granted the galaxy-sized computer that could do so, you would need to wait a few thousands of billions of years or more to reach that state.


Einstein had no positive evidence to suggest direct tests of gravitational waves would ever be possible. His arguments were entirely about consistency and theoretical parsimony, as are the arguments about the black hole information paradox. If we take the view that we shouldn't work on physics we can't test, or can't test for a very long time, then we are led to the conclusion that Einstein shouldn't have worked on gravitation waves. I find that conclusion untenable.


GR made predictions that were testable at the time and Einstein was keenly interested in testing them. See https://einsteinpapers.press.princeton.edu/vol5-doc/609


Yes, but gravitational waves themselves were not testable. To get from the confirmation of the other predictions to belief in gravitational waves, for which you had no direct evidence (until recently), you need to apply exactly the non-empirical criteria (parsimony, consistency, etc.) that Hossenfelder (taken literally) seems to think is not sufficient for doing theoretical physics in the absence of direct evidence.


Then you are misunderstanding her position. She is not claiming that a convincing solution to the BHILP is impossible. If you came up with a theory of quantum gravity that can be measured in other regimes and that also solves the BHILP, perfect: we now have good reasons to believe that your theory is indeed THE solution to that problem.

However, if you come up with a theory that makes the exact same predictions as QM and GR except for the BHILP, then your theory is not very interesting, since we will never be able to test this theory, and there are other inconsistencies between QM and GR that you haven't solved, so we can't just rest on our laurels and say physics is over.


I don't think I misunderstand her position. She seems to take the position you give:

"However, if you come up with a theory that makes the exact same predictions as QM and GR except for the BHILP, then your theory is not very interesting, since we will never be able to test this theory, and there are other inconsistencies between QM and GR that you haven't solved, so we can't just rest on our laurels and say physics is over."

I think this is wrong. There are non-empirical reasons we might prefer the new theory to the old one. For example, we know QM and GR are inconsistent, so if I give you a parsimonious, consistent theory that captures both QM and GR in the appropriate limits, then that's a great reason to prefer it. In principle, we could prove theorems (and some theorems of this form have been proved) that pin down the possible resolutions of the GR/QM inconsistency. If we tighten the net so that only one theory remains, then obviously we should choose to believe that theory.

Now, will that actually happen? I don't know. But merely waving your hands and going "not testable" is not enough, because we all accept that mathematical consistency ought to have a huge influence on theory choice. You need to make some argument about BHILP specifically saying that theoretical arguments will never produce productive physics.


> For example, we know QM are GR are inconsistent, so if I give you a parsimonious, consistent theory that captures both QM and GR in the appropriate limits, then that's a great reason to prefer it.

Absolutely agree. But BHILP is not the only inconsistency, so if you only solve that one and leave the others, but complicate the math or add other elements that can't be tested, it's not a particularly compelling theory.

So, why not work directly on the other inconsistencies, which might be directly testable, and see if this one disappears that way?

To be clear, when I say other inconsistencies, I am referring to things like making QFTs work in a non-flat spacetime, and/or reconciling the linearity of QM (without Born's rule) and the non-linearity of GR.


> So, why not work directly on the other inconsistencies, which might be directly testable, and see if this one disappears that way?

These are not mutually exclusive options. The community works on both.

Again, I agree that so far, practically speaking, work on BHILP has not been super compelling. I take issue with the stronger stance that Hossenfelder seems to take, that we know a priori that work on BHILP will be worthless because experimental data is not forthcoming. ("And that’s why I stopped working on the black hole information loss paradox. Not because it’s unsolvable. But because you can’t solve this problem with mathematics alone, and experiments are not possible, not now and probably not in the next 10000 years.")


Gravitational waves are just a consequence of the Einstein field equations, for which he had experimental evidence (the bending of starlight by the Sun during the 1919 eclipse).


I think the author is simply saying why they don't want to work on this problem any more, not that nobody else should be working on it. Einstein did what he wanted for his own good reasons; he was right and extremely insightful on most occasions, and he wasn't driven by a desire to 'put his name in the history books' but by the interesting problems he was working on. Once the problems no longer seem that interesting, it's time to move on.


I disagree. The author is very pointedly saying that research on this problem is worthless and a waste of time. See the last paragraph, for example.


Which contains this bit: "I am not talking about this because I want to change the mind of my colleagues in physics."

That seems to be pretty clear to me.


The full paragraph is: "Why am I telling you this? I am not talking about this because I want to change the mind of my colleagues in physics. They have grown up thinking this is an important research question and I don’t think they’ll change their mind. But I want you to know that you can safely ignore headlines about black hole information loss. You’re not missing anything if you don’t those articles. Because no one can tell which solution is correct in the sense that it actually describes nature, and physicists will not agree on one anyway. Because if they did, they’d have to stop writing papers about it. "

It's hard for me to read this is as anything other than "I am not talking about this because I want to change the mind of my colleagues in physics [because they're too far gone]." That is, she believes the research is worthless and the problem shouldn't be investigated. She just doesn't think she can convince people of this.


But what she believes may simply be wrong. It's just one person's opinion, and regardless of the motivations ascribed to others, people need to justify their choices to themselves and themselves alone - but some people apparently need to do so publicly, because otherwise it somehow doesn't count.

This is just one interesting sub-problem in physics, and if you choose a career in theoretical physics, you know that you will not achieve everything you set out to do. I suspect that in this particular case it is a letdown of fairly large proportions.

So you get a lot of internal struggle to justify the choice, the sunk cost issue is massive and your colleagues are going to go and continue without you. All that needs to be justified to the 'self', both on the off chance that they will come up with a solution when you gave up as well as about all of the time that you now feel that you have wasted.

And in the realm of self justification 'I will stop working on this problem because I no longer feel like working on it' is a lot harder to sell to the ego than 'I will stop working on this problem because I feel that it can't be solved in a meaningful way'.

I'm fine with it, either way, I've seen enough people struggle with career choices made in their 20's when they were in their 40's not to recognize the symptoms: you have reached the halfway mark in your life, what do you have to show for it? And if the answer is 'not much' then that can be a problem. But it doesn't have any value for others, that's all just window dressing and ego-placating.


Without empirical evidence, I'd always be skeptical that we really knew what was going on inside some unobservable region of spacetime, even if the math all works out and agrees with what's observable. Theoretical physics divorced from the need for experimental results gives us endless string theories. It's the same as arguing for some interpretation of QM. You might have the most convincing arguments for there being many worlds making up the wavefunction, but without some way to confirm that, it's metaphysics. And one might as well go all the way and embrace Tegmark's mathematical universes or some simulation argument. But that isn't science anymore.

I don't care how good the math looks. Reality isn't obligated to conform to some human aesthetic about beauty or simplicity. You have to first make some metaphysical assumption that the universe has to be that way. And the only way we can really know is to have empirical evidence.


If I told you "some unobservable region of spacetime" had to display a certain feature, or there would be a mathematical contradiction, would you accept that?

If not, would you endorse believing in gravitational waves on the basis of the success of the rest of GR, before they were experimentally confirmed?


Except that if there is a way to do the experiment, it can only be found by doing theory. Push the symbols around until something new falls out.

Maybe it will. Maybe it won't. But it seems weird to dismiss it as fundamentally unsolvable.

I happen to agree that it's not likely to be solved, and even if it is won't be terribly useful. It gets funding only because it captures people's attention, for being "deep".

I'd love it if science funding were more rational, but that's hardly confined to fundamental physics. If it were up to me we'd fund theoretical physics AND a lot of other things, and stop funding a lot of expensive things I dislike. But plenty of people disagree, and the resulting process is inevitably irrational.

I see no value in shaking my tiny fists at one particular set of scientists over that.


That's fine. One can also do essentially the same work in a mathematics department without the miasma of subjective or historical notions of what progress means. I mean, mathematicians are perfectly fine with producing theorems, proofs and even conjectures. Physicists secretly wish for the community's (not just the committee's) validation also.

Well yes, Einstein found physicists insufferable as well and preferred to hang out with Gödel. That doesn't mean you should go straight and transfer to Logic.


I am skeptical of any black hole research because it cannot currently be tested. I recently went down the rabbit hole of how the Event Horizon Telescope (EHT) got a picture of a black hole, because this could be used as experimental evidence of how a black hole works. After looking into how the EHT was calibrated, I am extremely skeptical of their results. I'm not an astrophysicist, but I have a decent understanding of statistical causality and the philosophy of science, and the EHT process that made the image of a black hole breaks a lot of scientific norms IMO.

The algorithms they used to make the image of the black hole were never tested against a known celestial body to calibrate them. They took an "a posteriori" approach to their imaging algorithms, which does not produce accurate results. I noticed this unscientific approach when I watched "Black Holes: The Edge of All We Know", a first-hand account of how the EHT developed their black hole image. They were literally testing different imaging algorithms to find the one that looked most like a circle.

The correct method would be to calibrate their imaging algorithms against a known celestial body to make sure their techniques produce results comparable to other instruments. Then they should have taken their calibrated imaging algorithms and fed them the data from the M87 black hole. But they skipped the whole calibration step and went right into imaging the black hole, which makes me very skeptical of their results.


> The algorithms they use to make the image of the black hole were never tested against a known celestial body to calibrate the algorithms.

This is not true, as is much of the rest of your comment. Please read the papers, they say everything that the collaboration did.


I've spent hours reading https://iopscience.iop.org/article/10.3847/2041-8213/ab0c57 . My takeaway was that they did a good job calibrating their signals, but I've never seen anything about calibrating their imaging algorithms. They have not calibrated the whole imaging stack. My skepticism is about the algorithms, not the signals.


You read the paper about calibrating the signals, that's a reasonable takeaway.

There's a different paper about how the 4 independent algorithms were tested to make sure they did the right thing with simulated data: ring, crescent, disk, double point source.

That's the process the documentary gave you an incorrect takeaway about. The filmmaker is a member of the collaboration and was highly involved in the imaging.

First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85/...

You'll have to follow the footnotes to find out the ages of these algorithms; CLEAN was invented well before I first used it in 1985, and I first ran into maximum likelihood in 1987 or so.
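If it helps demystify things: the core of Högbom's CLEAN is just an iterative peak-find-and-subtract loop. Here's a toy sketch in Python (my own illustration, emphatically not the EHT pipeline; real implementations handle image edges properly and restore with a clean beam):

    import numpy as np

    def hogbom_clean(dirty, psf, gain=0.1, thresh=1e-3, max_iter=500):
        # Toy CLEAN: assumes psf has the same shape as the dirty image.
        residual = dirty.copy()
        components = np.zeros_like(dirty)
        cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
        for _ in range(max_iter):
            # Find the brightest pixel remaining in the residual image.
            y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            peak = residual[y, x]
            if abs(peak) < thresh:
                break
            components[y, x] += gain * peak
            # Subtract a shifted, scaled copy of the PSF there.
            # (np.roll wraps around at the edges -- a toy simplification.)
            shifted = np.roll(np.roll(psf, y - cy, axis=0), x - cx, axis=1)
            residual -= gain * peak * shifted
        return components, residual

The point being that the algorithm itself is simple, old, and well understood; the hard part is the calibration and testing around it.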


Thanks. I will read this paper.


When you say the “whole imaging stack,” which one do you mean?

EHT imaged M87 with four teams working independently using different methods. Each of those teams used algorithms that were published and validated before the EHT results were published.


Did the 4 teams create their algorithms before getting the observation data and not modify them at all after seeing the results from the first run of their algorithms? Or was there data and algorithm tweaking to get the results that they wanted?


That famous photo of the scientist with the ring image on their laptop screen was exactly that: after all 4 teams had tuned their algorithms to make sure they could accurately image the simulated data (ring, crescent, disk, double point source)... then all 4 teams imaged the actual data. That was the first moment, Eureka!

Now that you've read the correct paper, you know that. You could have also learned it from the news releases, as it was an important part of the science.


My understanding of cosmology and astronomy is that there is an inherent "ladder" - a nesting of assumptions, or an indirect argument structure. Like: IF the Moon is a sphere, then by the shadows Earth leaves upon it during eclipses, Earth is a sphere. This reasoning came before Aristotle knew the Earth was a sphere.

https://youtu.be/7ne0GArfeMs?t=632

This is still a way to do science. I'm willing to believe the experimenters dealt with the inherent novelty in a scientific way. What do you mean by a "known celestial body"? We've only ever "been" to 2. We don't get to run all the experiments you'd like. As long as the process is self-correcting, it works. You have not shown this method to be non-self-correcting. Better and better standards are used to strengthen these claims. And it is expected that some degree of future correction will occur. That's the nature of the beast.


I feel like the use of astronomy and cosmology to model and understand sub-microscopic processes seems a little… backwards? Frankly, I’m ignorant of both but my gut feeling tells me it’s a bit of a scientific dead end. Would love to hear why I’m wrong or if anyone else more educated than I am in this area shares the same suspicions.


I share your skepticism about the image, the problem is that without spending months learning the science & tech there is no way to substantiate our skepticism.


Just show me a calibration image of a known celestial body using the exact same algorithms and I'll be satisfied.


It doesn't really matter. You are talking about a thing that severely distorts light and gravity. There is no real human-visible picture of it.


Well, if it was nearby, we would see something. A camera would capture an image.

Perhaps our brains would not do well understanding the actual object producing the picture though.


Interesting - this is a slightly weaker form of a theory not being falsifiable: the theory won't be falsifiable in the next 10,000 years!

I think it’s a good idea to make these distinctions.


Doesn't the idea of "randomly emitted radiation" contradict the concept of "reversal" itself? If you know enough about a system, anything that appears random within it actually turns out to be deterministic. So I don't understand how a black hole can be said to emit "random" radiation to lose information in the first place.


Non-Newtonian/non-classical/non-relativistic quantum effects are truly random. That's the core axiom of what makes "quantum" mechanics not Newtonian/classical/relativistic. It's fundamentally different from Newtonian statistical mechanics.

Hawking radiation is purely random.


> Non-Newtonian/non-classical/non-relativistic quantum effects are truly random. That's the core axiom of what makes "quantum" mechanics not Newtonian/classical/relativistic.

Bohmian mechanics[1] (which is a model of QM) is fully deterministic, and the Born rule emerges from the fact that we don't know some information. QM (equations, experiments) is fully consistent with a world where the perceived randomness of a measurement is the exact same kind of random as in coin flips.

For additional properties that QM does not satisfy (because there are known counter examples) see https://en.wikipedia.org/wiki/Interpretations_of_quantum_mec...

[1]: https://en.wikipedia.org/wiki/De_Broglie–Bohm_theory


Note that Bohmian mechanics (the actual mathematical formalism) is not consistent with Special Relativity, unlike regular Quantum Mechanics. It's not proven that it can't be made consistent, and some are still working on that, but so far it has remained elusive. This means that it's not really "an interpretation" of QM, it's a slightly different theory, one that is worse at predicting observed reality...


Fair point. Well, it proves that non-relativistic QM (which is not supported by the experiments, and is inconsistent with our best and most useful theories) is not proven to be "truly random" - at least the math does not imply it - so it disproves the parent's statement.


Indeed. Wake me up when Bohmian mechanics can predict the anomalous magnetic dipole moment.


This may be somewhat tangential, and if it is too much so, then ignore this and move along.

Isn't the key axiom - preservation of information - a presupposition, and not a formally evidenced law? As best I can tell, it's mostly been assumed as a required premise for a larger axiom - the eternality of matter.


Here's a relevant blog post: https://scottaaronson.blog/?p=3327 'Is “information is physical” contentful?'

In quantum mechanics, breaking unitarity (which implies conservation of information) is akin to breaking the rule in statistics that probabilities have to add up to 100%. You get things like a 300% chance of rain or a 90% chance of [null reference exception]. It's hard to overstate how deep down it is and how many things depend on it.
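To make that concrete, here's a toy numerical check (a sketch; the only assumption is a Hermitian Hamiltonian, which is what guarantees unitary evolution):

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.5], [0.5, -1.0]])   # Hermitian toy "Hamiltonian"
    U = expm(-1j * H * 0.7)                    # time evolution, which is unitary

    psi = np.array([0.6, 0.8], dtype=complex)  # |0.6|^2 + |0.8|^2 = 1
    print(np.vdot(psi, psi).real)              # 1.0 before evolution
    print(np.vdot(U @ psi, U @ psi).real)      # still 1.0: probabilities sum to 100%

Black hole evaporation, as Hawking described it, looks like evolution by something that isn't unitary - and then those norms stop adding up.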


The article does say it’s “experimentally extremely well confirmed”.


Sounds like they're a Trisolaran agent


The opposite! Debating where effort should be spent is crucial. Trisolarans would love to see more research into unknowable multiverse stuff.


Just because it's not testable does not mean it's not worthwhile. General relativity would still exist even if it could not be tested; fortunately, it is easy to empirically verify. But if the math is sound and meets certain assumptions, then it may be the correct theory. Having many mathematically sound candidates for a correct theory is better than nothing. That's why it's called theoretical physics. She seems to be ignoring that part. By her logic, theoretical physics is a waste of time.


It seems that quantum mechanics trying to explain the black hole information loss paradox is similar to religion trying to explain what happens at the death of humans. There is no one who can examine either state: falling into a black hole, or falling into death. Quantum mechanics and religion try to explain them, but most likely neither can ever be verified.


I watched the whole video.

My current physics acumen is probably in the lower half of the average HN reader.

I enjoyed AP physics in high school. Took the test, got a 4. I’ve been helping my wife, back in school, with half-life decay (as much chemistry as physics) and relished that. As a Mechanical Engineering student, I got A’s in my three required physics classes. In short, I really liked physics. It helped me make sense of the world.

I’m old enough, that quantum mechanics was not the public fascination it is now. Cold fusion was the big topic those days.

Despite all of this early formative appreciation for physics, I have yet to really appreciate QM.

It just has never helped me explain the world around me. And unlike many of the transcendental changes that occurred in the industrialized world as we got better and better at physics, whatever QM is supposed to do for me (it’s supposed to have bigger applications than the double-slit thing, right?) isn’t apparent to me.

The brief parts where Sabine refers to QM particulars in the video, and what I see when I skim other articles on QM, always include a lot of funky math, amusing names and symbols, and a sort of talk that sounds like convoluted philosophy, or an attempt to explain transubstantiation or the “mystery of the trinity.”

I was amused at the end of the video. There’s a “have your cake and eat it too” thrust. She clearly states a sort of collegiate/professional “you do you” support. At the same time, she tells the viewer they don’t need to pay attention or care about it. But if we the lay masses don’t care, the PR that exists for the sake of securing funding to go on with these papers dries up. So in a roundabout way, she’s saying “I support my colleagues’ wishes to continue to pursue these studies, but you, my reader, shouldn’t bother to support them in any way, direct or indirect.”


The math of QM underpins so much of modern technology, like transistors, MRIs, high precision clocks..

I feel like there's been a humongous failure of science communication if the applications of QM aren't obvious to you :( I don't mean that in a mean spirited fashion, I really do feel despair that someone on HN who enjoyed physics wasn't shown how quantum physics has been the bedrock for so much technology!

EDIT: quick list of places QM/quantum physics is used in the real world: https://scienceexchange.caltech.edu/topics/quantum-science-e...


Of course, if QM is a currently valid theory, then it can be found in many of these examples - but we don't really need to know about QM or the way it works to have a microwave oven or semiconductors. You don't use QM when modelling the relevant things to get a semiconductor working, at least not the ones we were taught at college. My point is, I understand QM apparently underlies everything, but do we ever really care? I did hear, though, that for current cutting-edge semiconductors they do have to take QM into account.


How can you do anything with semiconductors without band gap theory? To move beyond the cat's-whisker stage, QM and its practical implications are both absolutely essential.


> It just has never helped me explain the world around me.

Hum... All of electronics, the entire modern chemistry, solid properties, gaseous state changes, how do you make sense of any of that without QM?

(Of course, it's perfectly ok to not understand those if you don't want to. But then you can't claim that it's missing, just that you don't want to learn it.)


I don't think his statement is that unreasonable. The world is perceived and interacted with at the macro scale, yet the quantum is what makes up the macro. So you have to figure out where the macro becomes the quantum.


Can somebody point me towards the entrance to whatever rabbit hole explains why thermal information is not information? I must be missing something.

You can set up a torsion balance and extract a bitstream from the way it dances in a warm gas, and each of those bits fits, albeit with some creativity, into Shannon's definition of information: Once the balance jiggles in a 1-like way, you can exclude the universes where it would've jiggled in a 0-like way from your list, thus reducing the set of candidate universes by half: one bit.

Maybe we can't reconstruct the words on the book that fell into the hole without an impossible-to-achieve perspective, but that doesn't mean it's not the same amount of information. Does it?
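For concreteness, Shannon's self-information of an outcome with probability p is -log2(p), so an equiprobable 0/1 jiggle is exactly one bit (a trivial sketch):

    import math

    def self_information_bits(p):
        # Bits gained from observing an outcome of probability p.
        return -math.log2(p)

    print(self_information_bits(0.5))   # 1.0 bit: halves the candidate universes
    print(self_information_bits(0.25))  # 2.0 bits: quarters them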


Who said thermal information is not information? I'm not an expert, but isn't the problem that a black hole's temperature is directly proportional to its mass? I assume there's a lot more information about the thing falling into the black hole other than its mass which ends up getting lost.


> However, in practice, reversing a process is possible only in really small systems. Processes in large systems become for all practical purposes irreversible extremely quickly. If you burn your book, for example, then for all practical purposes the information in it was destroyed. However, in principle, if we could only measure the properties of the smoke and ashes well enough, we could calculate what the letters in the book once were.

But that's not considering the final state; that's considering the whole process (capturing the dynamics of the change). What's the interval? All time? Just the state change? If it's all time, then isn't the information there regardless of what happens later?

/armchair


> They are completely described by only three properties: their mass, angular moment, and electric charge

What about linear momentum, if the black hole has some velocity through space is that not also a parameter of it?


I don't think linear velocity is necessary to describe a black hole. Basically, a black hole that moves behaves exactly the same as one that stands still.


What's different about angular momentum in that case? Given that rotating or not rotating is also similar to moving or standing still


The laws of physics are invariant under a change in linear velocity. If you take everything in the Universe and add a constant to its velocity, and then if you yourself add that amount to your own velocity, then everything will appear exactly the same.

The laws of physics are not invariant under a transformation to a rotating frame. If you take everything in the Universe and give it a nudge so that it's spinning around a particular axis, things will change. For starters, everything will fly apart, unless you also create a force field that pulls everything towards the axis of rotation (with the force increasing linearly with distance from the axis of rotation).

A technical way to describe this is to say that the laws of physics have Poincaré invariance (or more properly, just Special Relativity has this invariance).[1]

That's why linear momentum is viewed as a trivial property of a black hole, but angular momentum is not.
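A toy way to see the asymmetry (just a sketch of the kinematics): boosting everything by a constant velocity leaves relative motion untouched, while viewing a free particle from a rotating frame bends its straight-line path, which is where the fictitious forces come from.

    import numpy as np

    t = np.linspace(0, 10, 5)
    x_a = 1.0 + 0.5 * t                # free particle A
    x_b = 3.0 + 0.2 * t                # free particle B
    u = 7.0                            # boost both by the same velocity u
    print(x_b - x_a)                   # relative separation...
    print((x_b + u*t) - (x_a + u*t))   # ...identical after the boost

    omega = 0.3                        # now view A from a frame rotating at omega
    xr = np.cos(omega * t) * x_a       # (A moves along the x-axis, y = 0)
    yr = -np.sin(omega * t) * x_a
    print(np.round(np.c_[xr, yr], 2))  # a straight line becomes a spiral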

1. https://en.wikipedia.org/wiki/Poincar%C3%A9_group


The (important) difference is that a rotating frame of reference is not inertial.


No. Simply put, the laws of physics are described by different equations in a rotating frame of reference than in one that doesn't rotate, while it is impossible to distinguish two frames of reference that are just moving relative to each other without any reference to the outside.


Rotating black holes behave quite differently from static (non-rotating) black holes.


AFAIK linear momentum depends on the choice of inertial reference frame, so you may assume it is 0.


> This is a possibility that I thought about at some point myself, as I am sure many others in the field have too. I eventually came to the conclusion that it doesn’t work. So I am somewhat skeptical that their proposal actually solves the problem. But maybe I was wrong and they are right. Gerard ‘t Hooft by the way also thinks the information comes out in gravitons, though in a different way then Hsu and Calmet. So this is not an outlandish idea.

"So this is not an uncommon idea." FTFY


I love Sabine. I find her one of the few sober and rational voices on youtube.

Maybe it is her direct and confrontational way that scares off some Americans? I'd say she comes across like a stereotypical German :-)


Do all black holes with the same mass and other 'hairless' parameters (momentum, etc) have the same temp and therefore radiation emissions? If so, can we remotely measure mass from this?


Except in the real world, for any black hole we observe (stellar mass or larger), that temperature is much lower than the cosmic microwave background, so you won’t see it.

A stellar black hole floating out between the galaxies would not be evaporating now but rather slowly gaining energy from the CMB. The universe will have to expand for a long time before the CMB gets cooler than these black holes.
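For a rough sense of the numbers, using the standard Hawking temperature formula T = hbar c^3 / (8 pi G M kB) (a back-of-envelope sketch, nothing rigorous):

    # Approximate SI constants
    hbar, c, G, kB = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
    M_sun = 1.989e30  # kg

    def hawking_temp(M):
        return hbar * c**3 / (8 * 3.14159265 * G * M * kB)

    print(hawking_temp(M_sun))  # ~6e-8 K, versus the ~2.7 K CMB
    # A hole as warm as the CMB would need M ~ 4.5e22 kg, roughly lunar mass.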


But, my main question: Do they all have the same temp, given all other main parameters?


That depends on if they have hair or not, but nobody knows whether they do and it would be extremely impractical to experimentally verify such a thing. That's what the article is all about.

If they don't have hair then yes, you should be able to estimate mass by measuring their temperature. You would also need to know how far it is though, so that you could compensate for redshift.


OK, that makes sense.

In reading TFA I was just kind of blown away that temp, mass, angular momentum, etc, are all so closely tied together.


Note that they are so closely tied together for black holes, not necessarily for other types of objects. Black holes in the GR description are probably simpler than some elementary particles even.

However, it's important to note that the GR description is itself not exactly self-consistent, as the curvature is divergent at the center of the black hole (it is infinite, I believe?).
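(To answer the parenthetical: yes, for the idealized Schwarzschild solution, curvature invariants like the Kretschmann scalar K = 48 G^2 M^2 / (c^4 r^6) grow without bound as r -> 0, which is exactly the divergence in question.)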


No, small black holes are hotter than big black holes.


Black holes of the same mass (and angular momentum and charge) have the same size.


Of course. I apparently read the question too fast.


She's exactly right. Any science discovery that's worth its salt is falsifiable. If you can't validate your results there's a problem. It's the responsibility of every theorist to make testable predictions, and to communicate to experimentalists how they might be tested.

There's frankly not enough pressure within the theory community to disregard cagey theorists who actively avoid testing their predictions, or who dismiss the importance of testing in general.


Well, Einstein-Podolsky-Rosen was published in 1935. Bell's Theorem was published in 1964. Bell's wasn't tested experimentally until 1972, by which time it was largely forgotten except by one creative fellow who worked out a way to test it.

I think having testability as a qualification for theory might not serve us well.

It blows my mind how much we have deduced about black holes in 100+ years, yet it was only very recently that we could actually "see" one. Would hate to lose that.


I've always considered Bell's solution somewhat obvious.

It wasn't team Einstein's responsibility to propose a solution, his motive was a rebuttal.

The fact that it went unstated for 30 years (Einstein died in 1955) reflected poorly on team Heisenberg.


This is a limitation of science.

The issue I see, is how to continue to study such purely theoretical matters?

Why should everything have to be testable (and thus, potentially developed into a marketable product)?

It may not be science, but a lot of people still like to do that kind of theoretical work.


Here's the problem I see with it. Say you take multiple "top tier" theoreticians and task them with the same physical problem. Suppose they all make slightly different starting assumptions, and these assumptions lead to mutually incompatible conclusions. Which one is right? Who do we believe?

The whole point of testing theories is there can only be "one truth". If you let theories run rampant without testing you end up in logically inconsistent chaos.

All that said, theories should certainly be given time to grow when experimental validation is not immediately available. However, there's many in the community that take great liberty with that long leash to the detriment of physics progress.


Well, you can do maths research. That doesn't need to be testable, only self-consistent. However, set theorists don't come and say they've uncovered some grand mystery of the cosmos when they prove something about transfinite numbers, unlike many theoretical physicists.


Exactly. A lot of amazing math has come out of string theory.


> You’re not missing anything if you don’t those articles.

I love the irony that something is missing in her closing remark. Almost makes me wonder if it was intentional.


I have worked on the black hole info paradox too, and this to me looks more like a rant than anything. The solution to the information paradox is expected to improve our understanding of the horizon structure, if any, and this should give us predictions detectable as gravitational waves. And we just detected the first gravitational waves! What a time to be alive :)


I don’t know much about the black hole information loss problem and I do like Sabine’s skepticism. However, saying it has no use to work on mathematical solutions in this area is taking it one step too far. Maybe, based on the solutions, other (indirect) experiments can be devised; maybe it will help solve other physics, mathematics, or engineering problems in the future.


Just admit it, you kept losing your information.


Maybe in some years we'll create a small black hole, and since its radiation is inversely proportional to its mass, not only will it emit measurable radiation, it will also run through its whole lifetime and pop out of existence pretty quickly. Hence experiments would be possible and the proposed solutions to the paradox could be verified.
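Back-of-envelope, using the standard evaporation estimate t ~ 5120 pi G^2 M^3 / (hbar c^4) and ignoring particle-species corrections (a sketch, not a precise prediction):

    import math
    hbar, c, G = 1.0546e-34, 2.998e8, 6.674e-11

    def evap_time_seconds(M):
        return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

    print(evap_time_seconds(1.989e30))  # solar mass: ~7e74 s (~1e67 years)
    print(evap_time_seconds(1.0))       # a 1 kg hole: ~8e-17 s, gone in a flash

So the M^3 scaling really does mean a tiny lab-made hole would run through its whole lifetime essentially instantly.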


I really like her. She actually talks about things the right way and approaches things at a fundamental level. She's very smart and reflective of new information and critical about any consensus, like everyone should be.

... and she's not shying away from letting everyone know.

"Gobbledygook". Heh.


If a "real" (big) black hole takes billions of years to evaporate, what does it look like just before it's about to die? (and completely evaporate). Why is this "tiny black hole" not possible to reproduce experimentally? (is it?)


Start by seeing if you can make some neutronium in the lab first. When that goes off brighter than a million suns, the problem will dawn on you...


On the contrary, the near-final state of a black hole at EOL might be quite weak in energy.


Reminds me of the phrase "not even wrong" https://en.m.wikipedia.org/wiki/Not_even_wrong


We might be able to create our own BH to test theories:

> The LHC will not generate black holes in the cosmological sense. However, some theories suggest that the formation of tiny 'quantum' black holes may be possible.


Isn't the way the book is broken apart and joined to the current mass of the black hole the "reversible" information? That seems to be similar to her burning book -> smoke example.


> [B]lack holes are extremely simple. They are completely described by only three properties: their mass, angular moment, and electric charge. This is called the “no hair” theorem.


I fully sympathize with the sentiment of the article. I was also left with the impression that fundamental physics is data-starved.


If someone could challenge my layman understanding:

Isn't the information still there, inside the black hole, but just not retrievable from the outside?


Covered in the video/article.

I'll post the relevant paragraphs, all quoted directly (verbatim) from the blog post:

Physicists knew about this puzzle since the 1960s or so, but initially they didn’t take it seriously. At this time, they just said, well, it’s only when we look at the black hole from the outside that we don’t know how reverse this process. Maybe the missing information is inside. And we don’t really know what’s inside a black hole because Einstein’s theory breaks down there. So maybe not a problem after all.

But then along came Stephen Hawking. Hawking showed in the early 1970s that actually black holes don’t just sit there forever. They emit radiation, which is now called Hawking radiation. This radiation is thermal which means it’s random except for its temperature, and the temperature is inversely proportional to the mass of the black hole.

This means two things. First, there’s no new information which comes out in the Hawking radiation. And second, as the black hole radiates, its mass shrinks because E=mc^2 and energy is conserved, and that means the black hole temperature increases as it evaporates. As a consequence, the evaporation of a black hole speeds up. Eventually the black hole is gone. All you have left is this thermal radiation which contains no information.


That's correct for general relativity alone, but GR isn't enough on its own.

Hawking showed that the information is lost in the process of black hole evaporation as the black hole decays into anonymous radiation, and so once a black hole is gone so too is any trace of the matter it absorbed in its lifetime.

It's this bit that isn't okay in quantum mechanics, and that's problematic because quantum mechanics certainly seems to be bang on the money for a great deal of other phenomena.

One would have a hard time saying that QM was wrong. That's not to say that it is a complete theory, but QM has made many highly accurate predictions that have served to cement the framework.

I don't know how certain it is that black holes evaporate. It may seem tempting to think that perhaps it is this notion of evaporation that could be overturned, but then you have black holes which simply exist forever, which would be rather problematic as well.


FWIW, I've been enjoying your comments in this discussion.

> It may seem tempting to think that perhaps it is this notion of evaporation that could be overturned, but then you have black holes which simply exist forever, which would be rather problematic as well.

Why is a bound state of matter in a black hole lasting until the infinite future more problematic than a bound state of matter in a proton lasting until the infinite future? Is a theory with non-decaying protons problematic compared to a theory with proton decay?

Essentially, gravitational collapse and horizon-formation is not the information loss problem -- the information still exists inside a growing black hole, we're just disconnected from it by virtue of being on the other side of the horizon. Compare with the information from the very early universe which has exited the observable universe thanks to the metric expansion. Or the information in the universe outside the Rindler horizon of an accelerated observer.

Expand the universe forever, and for every observer more and more information goes to the other sides of cosmological and black hole horizons.

Time reversal leads to interesting thoughts: galaxies with stones (and maybe people, chairs, and xylophones) coming into view from beyond the horizon all seems fine if we time-reverse our universe. Likewise for a black hole that had such things fall into it in our ordinary arrow-of-time direction, we should expect that things like stones could be spat out under time-reversal. The information loss problem arises when a black hole completely evaporates to thermal noise: how does the time-reversed black hole, formed from inrushing thermal noise, know that it should eventually spit out xylophones rather than violins?

We need that knowledge in our time-reversed black hole. Does it rush in along with the thermal noise?

The time reversal picture starts with big primordial black holes that fission into smaller ones, with those spitting out dust, gas, dead planets, space probes, stars and so on. Thanks to the time-reversed metric expansion, these spit-out observers also see a bunch of previously unseen black holes rush into view and spit out things like cats and space probes.

This isn't a problem, the recipe for all that can be deemed to be inside the primordial (in the time-reversed sense) black holes: it's part of the initial values surface, with the relevant values initially inside the black hole horizons.

What if we time-reverse from an expanded universe where all black holes have evaporated into thermal noise? Do we have to rely on fluctuations (Boltzmann brains!)? Or on "false noise" as the initial values surface, with dynamical laws that create detailed structure as we do an adiabatic compression of the seemingly structureless cold gas? Or both? We need to get lots of widely-separated black holes at early times when our collapsing universe is big and sparse, rather than at late times when everything is much closer and hotter. We also need it to be correct when we time-reverse the time-reversed picture.

I'm not sure that the problem is qualitatively very different when one thinks classically or quantum mechanically, although the latter sharpens the vocabulary somewhat ("unitarity!") and introduces some fuzzy questions about entanglement energy (Almheiri, Marolf, Polchinski, Sully 2012 and the subsequent fiery discussion).

The problem is that the singularity blocks time-reversal classically, and in the absence of time-reversibility one cannot have unitary evolution (reversibility is necessary but insufficient for unitarity, so some (semi-)classical solution that abolishes the singularity might turn out not to resolve the whole information loss problem).

However, a cosmos with black holes that never evaporate seems to abolish most of the "final values surface" problem: we don't know what the quantum numbers are exactly, but at least we know where they are: they're mostly localized inside black holes.

Finally, in the time-reversed picture we blow apart our poor primordial protons during reverse-baryogenesis anyway, but at least stable protons in our usual arrow-of-time direction means we know where almost all the funky GUT epoch numbers are in our very very far future (ignoring black hole evaporation).


That's what I thought too. If they expand as they swallow matter, doesn't that easily explain that it's still in there?


Disclaimer: I'm not even a physicist.

As I understand it, the trouble is with "what happens to the information inside the black hole?", not with whether it's there at all (which isn't disputed - we see stuff fall in, so it's gotta go _somewhere_, and it isn't in our observable part anymore). In addition, because of the nature of a black hole, how would an experiment testing any theory about what happens inside one even work? As far as I'm aware, we don't know of any mechanism by which stuff inside the black hole affects stuff outside it (Hawking radiation doesn't, as far as I'm aware, explain _how_ the spontaneous quantum fluctuations come to be - they're just theorized to happen to satisfy the equivalence principle near the event horizon), but that's exactly what we'd need to confirm or deny anything about whatever happens past the event horizon.

On top of this, just the existence of Hawking radiation means black holes vanish over time - but without us being able to say whether e.g. a book with mass 1 kg or a bag of sugar with mass 1 kg was once thrown in. We can't distinguish the two cases - the information (as far as we know today) is lost.
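For a sense of scale on "vanish over time": plugging one solar mass into the standard evaporation-time formula t = 5120 * pi * G^2 * M^3 / (hbar * c^4) gives roughly 10^67 years. A quick back-of-envelope sketch (constants rounded, purely illustrative):

    import math

    G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34    # reduced Planck constant, J s
    c    = 2.998e8      # speed of light, m/s
    M    = 1.989e30     # one solar mass, kg

    # Standard Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4)
    t_s = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
    print(f"{t_s / 3.156e7:.1e} years")   # ~2e67 years: "vanish" is very slow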


If you simply read the SFP you'd know the answer to this. It isn't there forever, because the black hole is not there forever.


Gravity is infinite at the singularity (the middle of the black hole). Everything gravitates towards that point. Our best understanding is that no information can exist there.

Black holes "evaporate" over time -- by emitting Hawking radiation. This is probably where the information goes, in my layman understanding.


> Black holes "evaporate" over time -- by emitting Hawking radiation. This is probably where the information goes, in my layman understanding.

No, the Hawking radiation and evaporation is exactly what causes the problem. If black holes were forever expanding, we could simply say "they have a structure inside that we can't detect, but that structure preserves the information; but, since it's past the event horizon, it will be, even in principle, forever beyond reach of our understanding and experiment".

However, if black holes eventually disappear, it means you have something like book => unknowable inside of the event horizon => something observable outside. The problem now becomes that, by Hawking's calculation, the "something observable outside" is purely thermal radiation, which by definition carries no information about what fell in. Hence, not just something unknowable, but a paradox (an inconsistency in the formal model).


I still pause when I see how much Sabine looks like Klaus, the lead singer of the Scorpions.


Why is information loss such a big deal? Information gets lost all the time. Hit the wrong button, and the all-important file vault is gone, with all the backups, forever, information in them is lost. You might lose a job, but the universe doesn't break because of that!


Not sure if you are joking, but the article covers this.

According to QM, information is never lost (until you make a measurement, but that's a different can of worms); in principle it is impossible to delete information. The article explains: if you burn a book but gather detailed enough information about the fire and the smoke, you can reconstruct exactly what every letter on every page looked like, and every ink blot you spilled on it 10 years ago.
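To make "never lost" concrete, here is a toy numerical sketch -- not from the article, just a made-up two-state system -- showing that unitary evolution is reversible, so the final state determines the initial one exactly:

    import numpy as np
    from scipy.linalg import expm

    # Made-up Hermitian Hamiltonian for a two-state toy system
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])
    t = 2.0
    U = expm(-1j * H * t)              # time evolution U = exp(-iHt), hbar = 1

    psi0 = np.array([1.0, 0.0])        # "book intact"
    psi_t = U @ psi0                   # "book burned": looks scrambled...

    # ...but U is unitary, so its conjugate transpose undoes it exactly
    recovered = U.conj().T @ psi_t
    print(np.allclose(recovered, psi0))   # True: nothing was lost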


Burn it to ash, collect info about the ash, and reconstruct - no problem. But if you burn it in such a way that the only remains are "random thermal radiation", all the information is lost and we have a serious paradox to resolve...


> But if you burn it in such a way that the only remains are "random thermal radiation", all the information is lost and we have a serious paradox to resolve...

To quote the famous Spartan answer, If.

That is, QM predicts this is impossible. Even if you threw the book into the Sun, you would (I must emphasize again in principle) be able to measure the radiation given off by the sun and at some point identify the words of the book.


> according to QM, information is never lost (until you make a measurement

> you would (I must emphasize again in principle) be able to measure the radiation given off by the sun and at some point identify the words of the book

Does the “information” [1] survive measurements or not?

[1] “Information? Whose information? Information about what?”


> Does the “information” [1] survive measurements or not?

My understanding is that it doesn't, but I believe that may depend on your interpretation of QM. Note also that "measurement" is pretty ill-defined.

> “Information? Whose information? Information about what?”

About the state of the system (the wavefunction). Basically in QM an isolated system cannot reach the same final state by more than one route; so, if you know what state it's in, you know exactly what route it took, what every previous state was.

Maybe the problem is more clear if moving to computation from pure physics:

In a classical computer, you can do something like "x = x & y; y = y & x". If you run this operation and find that x = 0 and y = 0, you can't know what values x and y had before, so that information was lost (ignoring other physical effects - if QM is right, the information is still retained, maybe radiated away by the processor or something).

As such, in a quantum computer, this operation simply can't be performed. Instead, you have to use an ancillary bit z (initialized to 0) and the QC equivalent of the Toffoli gate [0], which maps {x, y, z} to {x, y, z XOR (x AND y)}. That stores x AND y in z while leaving x and y untouched, and the gate is its own inverse: apply it again and you get the original {x, y, 0} back, so from any output you can compute exactly what values x, y and z had initially.

The same observations apply to physical interactions.

[0] https://en.wikipedia.org/wiki/Toffoli_gate
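A plain classical simulation of the contrast (the Toffoli here is the boolean version, not an actual quantum gate, and the snippet is illustrative only):

    def lossy(x, y):
        x = x & y
        y = y & x
        return x, y

    # Three distinct inputs collapse onto the same output -- information lost:
    print(lossy(0, 0), lossy(0, 1), lossy(1, 0))   # all print (0, 0)

    def toffoli(x, y, z):
        # (x, y, z) -> (x, y, z XOR (x AND y)); self-inverse by construction
        return x, y, z ^ (x & y)

    for x in (0, 1):
        for y in (0, 1):
            state = toffoli(x, y, 0)              # ancilla z starts at 0
            assert toffoli(*state) == (x, y, 0)   # applying it again undoes it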


Then you don’t think that it’s necessarily possible - even in principle - to identify the words in the book that you threw into the Sun by _measuring_ the radiation?

> Basically in QM an isolated system cannot reach the same final state by more than one route; so, if you know what state it's in, you know exactly what route it took, what every previous state was.

There is no way to know what state an isolated system is in - unless you know the state when you last interacted with it. You can “set” the state now and deduce the future evolution. You cannot “know” the state now and deduce the past evolution.

If you think about the whole universe, the “state” that could be described by a “wavefunction” is not a state describing the observed universe. Not even the number of elementary particles would be defined. The physical relevance is unclear.


> Then you don’t think that it’s necessarily possible - even in principle - to identify the words in the book that you threw the into the Sun _measuring_ the radiation?

Well, this is why I said the measurement is problematic. It's perhaps not possible even in principle to actually measure it. But, in principle, the wave-function of the sun after you throw in a book that has a coffee stain will be different from the wave-function of the sun after you throw in the same book without the coffee stain. And that difference, in turn, could be detected through measurement, as it will affect the probabilities of a measurement of the sun. Of course, this means that you would actually have to throw in a whole lot of books to be able to notice this statistical difference.
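For a rough sense of "a whole lot": resolving a probability difference of epsilon takes on the order of 1/epsilon^2 independent trials (standard-error scaling). The probabilities below are made up purely for illustration:

    p_plain = 0.500000    # made-up outcome probability, book without stain
    p_stain = 0.500001    # made-up probability, book with the coffee stain

    epsilon = p_stain - p_plain
    print(f"~{1 / epsilon**2:.0e} trials")   # ~1e12 books into the sun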

> You cannot “know” the state now and deduce the past evolution.

Well, because measurement collapses the state, you are right to some extent. But on the other hand, the measurement's result will be affected differently by anything that has ever happened to that system.


> But, in principle, the wave-function of the sun after you throw a book that has a coffee stain into it will be different from the wave-function of the sun after you throw the same book without the coffee stain.

That's different from saying that it can be measured in principle. And having or not having a coffee stain is different from the previous example. A coffee stain of a different shape would be closer.

> And that difference, in turn, could be detected through measurement, as it will affect the probabilities of a measurement of the sun.

I'm not sure how to interpret that. You just said that "It's perhaps not possible even in principle to actually measure it". [edit: maybe you mean that some kind of « statistical trace » remains, but we surely agree that’s very different from being able to identify the words in a particular book.]

Anyway, the sun is not an isolated system so it cannot be described with a wave-function. The "wave-function of the universe" - if we assume that such a thing exists which evolves unitarily since the beginning of time - would contain the "information" about the universe in which you threw a book with a coffee stain and about the universe in which you threw a stainless book and about the universes in which there is no Sun to throw things into.


Or QM is just wrong about information never being lost.


Sure, that's a possibility. But the problem is that this feature comes from a very fundamental place in QM's mathematics - QM is (almost) a linear theory whose time evolution is unitary, and unitary (hence invertible) evolution can't lose information. So, to assert that QM is wrong about this is to invalidate all of QM's equations.

Now, it's important to recognize one thing: QM is not really a linear theory, because it also has Born's rule, or the Measurement postulate. That is, while the wavefunction evolves according to purely linear equations, when you ultimately want to measure the state of the system, you get a non-linear update: the wavefunction suddenly "collapses" to a single value, and information is indeed lost (different wave-functions can collapse to the same state after a measurement). However, the measurement postulate is itself poorly understood, so it's hard to introduce it into the discussion without derailing things.
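A toy sketch of that non-linear update, with made-up states: two distinct qubit states can both collapse onto |0>, after which nothing distinguishes them -- exactly the many-to-one map that unitary evolution forbids:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    psi_a = (ket0 + ket1) / np.sqrt(2)                   # equal superposition
    psi_b = np.sqrt(0.9) * ket0 + np.sqrt(0.1) * ket1    # a different state

    def collapse_to_zero(psi):
        # Born-rule update given outcome "0": project onto |0>, renormalize
        projected = ket0 * np.vdot(ket0, psi)
        return projected / np.linalg.norm(projected)

    print(collapse_to_zero(psi_a))   # [1. 0.]
    print(collapse_to_zero(psi_b))   # [1. 0.]  same output, history erased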

There are even consistent interpretations of QM where this doesn't actually happen, such as Many Worlds, where the information is actually still preserved across the totality of the worlds.


> the information is actually still preserved across the totality of the worlds

That may be correct if the “totality of the worlds” means one single “world” described by a wave function evolving unitarily - corresponding to what you call information. This “information” cannot be destroyed but it cannot be created either. It will always be the same “information” that was already there at the beginning of time.

What is unclear is the “many worlds” part. How are they defined?


> It will always be the same “information” that was already there at the beginning of time.

Yes, absolutely. Though it's worth noting that QM does allow for the void to have random fluctuations that are not caused by anything else, but that can interact with the existing system.

> What is unclear is the “many worlds” part. How are they defined?

There are several ways from what I've read. I personally don't think MWI really solves any problems, or that it is a good description of nature, so I haven't taken too much time to remember exactly how they do it.


> Though it's worth noting that QM does allow for the void to have random fluctuations that are not caused by anything else, but that can interact with the existing system.

The random fluctuations are random only because we observe whether they happened or not. If you consider QM to be just the deterministic evolution of a pure quantum state, there is nothing random about them. You would say that the "information" about those "random fluctuations" was always there.

> I personally don't think MWI really solves any problems

I think we agree on that.


TL;DR: There exist many solutions, none of them experimentally verifiable, and hence the pointless debate continues.


Is it rude to post the TL;DR?

>And that’s why I stopped working on the black hole information loss paradox. Not because it’s unsolvable. But because you can’t solve this problem with mathematics alone, and experiments are not possible, not now and probably not in the next 10000 years.


Unobservable phenomena by hypothetical objects with contradictory arbitrary rules that have no effect on reality.


A quibble - these objects are not just hypothetical. We have images and other evidence of some particularly large ones.

It's the exact rules that are hypothetical.


But we cannot observe Hawking radiation without having, on hand, a black hole small enough to be hotter than the CMB -- or one that can be enclosed and protected from it, with the enclosure itself cooled below it, in a vacuum so hard as not to have any atoms to fall in.

All tall orders.
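To put a number on "small enough": setting the standard Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B) equal to the CMB's 2.725 K gives the heaviest black hole that still out-radiates the CMB (a rough sketch, constants rounded):

    import math

    G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    hbar  = 1.055e-34    # reduced Planck constant, J s
    c     = 2.998e8      # speed of light, m/s
    k_B   = 1.381e-23    # Boltzmann constant, J/K
    T_cmb = 2.725        # CMB temperature, K

    # Solve T_hawking = T_cmb for M:
    M_max = hbar * c**3 / (8 * math.pi * G * k_B * T_cmb)
    print(f"{M_max:.1e} kg")   # ~4.5e22 kg, a bit under the Moon's mass
    # Anything heavier is colder than the CMB and gains mass on net.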


I thought Susskind solved this.


The information paradox of modern science: reduce everything to a model of a spherical horse in a vacuum, and then run around shouting that the horse can't possibly have distinguishing features, like its sex, color, breed, etc.


Salient Ayn Rand quote:

"Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong."


This is a good post with all of the right reasons to be skeptical. However, with the really fringe stuff like this, I feel that the answers could come through intuition.

On that note, her central premise is that we can't study black holes. But I'm feeling more and more convinced that the universe itself is inside of a black hole. If that's the case, then maybe we can study the inside after all.

On a whim, I searched for "hawking radiation hubble constant" and stumbled onto a bunch of stuff like this (the Download PDF button works):

https://www.preprints.org/manuscript/202101.0017/v2

I'm not a physicist, but if I assemble a bunch of ideas, I can string together speculations like: if black holes evaporate faster as they shrink, then maybe galaxies slipping outside of our observable universe are making it less massive, which is increasing its rate of expansion. Someday we may see everything shooting away from our reference point faster and faster until we ourselves pop, like reversed spaghettification.

But then again, that doesn't seem quite right, because the galaxies slipping away from us faster than the speed of light probably don't experience anything catastrophic themselves. And also the galaxies might not actually be moving away, just more space has been constructed between us and them like a balloon stretching. I feel like without a solid understanding of this process, it's going to be hard to understand black holes.



