Olbers' Paradox (wikipedia.org)
162 points by bookofjoe on Dec 12, 2020 | 81 comments



“[Edward Robert] Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little known 1901 paper, and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:

Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.”


Isn't this just an issue of comparing a countable and uncountable infinity? The number of points on the unit sphere is uncountable, but the number of stars is countable. As such there are in some sense more points on the sphere than there are stars, even though there are an infinite number of each.

Take this together with the fact that intensity falls off as the square of the distance and it seems like the sky should be dark.


I guess the fall-off is even worse. At the beginning the intensity falls off as 1/r^2, but eventually the intensity becomes so small that you're talking about individual photons. At some point the intensity will then fall from a single photon to zero. So, after some critical distance the intensity will actually drop to zero.

More formally, each star emits some amount of power in each frequency band: P(f) so that \int_0^\infty P(f) df = P_total.

For each frequency, then, the star emits P(f)/(hf) photons per second per unit frequency. The total number of photons emitted per second by the star is then \int_0^\infty \frac{P(f)}{hf}\, df, which is a finite number.

The total number of photons received per unit area a distance r away from the star would then be

\frac{1}{4\pi r^2} \int_0^\infty \frac{P(f)}{hf}\, df

If your detector has an area A (e.g. your retina or some other device), you'd expect to see

\frac{A}{4\pi r^2} \int_0^\infty \frac{P(f)}{hf}\, df

photons per second from the star. As r gets really large, you'd see this drop arbitrarily low. Conversely, the amount of time you'd need to wait to see a single photon from that star then grows, making the star dark.
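
A rough numerical version of that estimate, as a Python sketch (my own illustrative numbers: a Sun-like blackbody star and a roughly pupil-sized detector; none of these figures come from the comment above):

    import numpy as np

    # Expected photon arrival rate from a Sun-like star at distance r,
    # modelling the star as a blackbody.  SI units throughout.
    h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
    R_star, T_star = 6.96e8, 5778.0             # solar radius [m], surface temperature [K]

    nu = np.linspace(1e12, 3e15, 200_000)       # frequency grid [Hz]
    B_nu = (2*h*nu**3/c**2) / np.expm1(h*nu/(k_B*T_star))   # Planck spectral radiance
    P_nu = 4*np.pi*R_star**2 * np.pi*B_nu       # spectral luminosity P(f) [W/Hz]

    # total photons emitted per second: \int P(f)/(hf) df  (finite, ~1e45 for the Sun)
    photons_per_s = np.trapz(P_nu/(h*nu), nu)

    def rate_at_detector(r, area=1e-5):
        """Expected photons/s on a detector of the given area (default ~ a pupil)."""
        return area * photons_per_s / (4*np.pi*r**2)

    for r_ly in (4, 4e3, 4e6, 4e9):             # distances in light years
        r = r_ly * 9.461e15
        print(f"{r_ly:>6g} ly: {rate_at_detector(r):.3g} photons/s")

At a few light years this comes out to a few hundred thousand photons per second; at a few billion light years the expected rate is of order one photon every tens of thousands of years, which is the "dark for an appreciable amount of time" point made further down.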


> after some critical distance the intensity will actually drop to zero.

No, it won't. If you're going to use a quantum model of light (which you have to to use the concept of "photon"), then you have to use the quantum interpretation of "intensity". The quantum interpretation of "intensity" is the probability of detecting a photon; and this is a continuous quantity which can get smaller and smaller indefinitely without ever dropping to zero.


The probability can then get arbitrarily small, meaning that the expected amount of time you'd need to wait before observing a photon grows progressively larger.

My argument above is semi-classical, but it shouldn't change with a full quantum mechanical approach.


> The probability can then get arbitrarily small, meaning that the expected amount of time you'd need to wait before observing a photon grows progressively larger.

Yes, but the probability is never zero, and the expected time is never infinite. So saying "the intensity drops to zero" is never correct.


The point is not that the probability needs to hit zero, it's that it's not correct to say that you receive a quarter of the power as you move twice as far from the source. It's still true that the expected number of photons per second drops by a factor of 4, but it can drop so far as to render the source dark for an appreciable amount of time.

The paradox claims that the sky should appear bright, which I take to mean that a detector should be receiving light from each point in the sky at each moment in time. It does not merely say that the detector will receive light from each part of the sky at some point in time; you may need to wait a million years before a particular point flickers, and even then, nothing guarantees that all points will flicker at the same time.


> The paradox claims that the sky should appear bright, which I take to mean that a detector should be receiving light from each point in the sky at each moment in time.

I don't think this is required for the paradox. All that is required is that the average flux of radiation received from the sky as a whole should be constant, and equal, roughly speaking, to the flux corresponding to the surface brightness of a star. That will still be true, under the specified conditions of the paradox (a universe in steady state and infinitely old) even if quantization is taken into account.


> it can drop so far as to render the source dark for an appreciable amount of time

Ah, I see what you mean: yes, the intensity will be 1/4, but because of quantization, you now have to draw a distinction between the time-averaged power (which behaves like the power does in the classical case--more precisely, this would be the expectation value of the power in the quantum case) and the actual power at a given time, which can vary from the average (even to the point of being zero).


Yeah, exactly


I don't think this is an issue. A star isn't a single point, and each one maps to a small-but-not-infinitesimal area on the unit sphere.


Fall-off of light intensity does not factor in: while the amount of light reaching an observer from any given star does indeed fall off with the square of the distance, so does the apparent size of that star; its apparent surface brightness thus does not change with distance. (Think of day-to-day experience: people who walk away from you do not darken!)
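
A tiny sketch of that cancellation with made-up numbers (the luminosity and radius below are placeholders, not anything quoted above):

    import numpy as np

    L_star = 3.8e26   # luminosity [W]
    R_star = 7e8      # radius [m]

    for r in (1e11, 1e13, 1e15):                  # observer distances [m]
        flux = L_star / (4*np.pi*r**2)            # total flux falls as 1/r^2
        omega = np.pi * (R_star/r)**2             # apparent solid angle also falls as 1/r^2
        print(f"r = {r:.0e} m  surface brightness = {flux/omega:.3e} W/m^2/sr")

The printed surface brightness is the same at every distance; only the apparent size of the disc shrinks.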

I agree that countability vs. uncountability seems like it should come into play.


Great point, thanks! Makes perfect sense. Turns out though that the intensity actually falls off faster than 1/r^2 toward the end due to quantization effects. Feels silly to include quantization, but I guess when we're talking about stars that may be arbitrarily far away this would need to be part of the story.


> Turns out though that the intensity actually falls off faster than 1/r^2 toward the end due to quantization effects.

No, it doesn't. See my other post in response to you upthread.


No. At no distance "R" from the Earth does a star become a point. A point does not have a surface area; a star does (even if it is very far away).


No, the rational numbers are dense.


While it's true that the finite age of our universe and the expansion of space explain the paradox in the universe we live in, I don't think those are necessary conditions, nor is the dark sky proof our universe is not infinitely old.

What I feel is the core of the resolution of the paradox is (global or local) conservation of energy. Even in an infinitely large eternal steady state universe, if we assume the total energy of the universe is conserved one cannot have an increase in energy density everywhere at once.

If in this universe stars live forever, you'd have eternal "sources" of energy, and for energy to be conserved you'd need compensatory "sinks" draining energy out of the universe, like black holes which don't increase in size. In that case the resolution of Olbers' paradox would be that most of your lines of sight would end in such a black hole.

If, as is usual in physics, you assume local conservation of energy, stars cannot live forever, so in an eternal steady state universe there must be a mechanism recycling the radiation back into a star. In this case, again, every line of sight would eventually hit a star, but most of the radiation would never reach you, being absorbed along the way to make a new star. (This is a blatant violation of the second law of thermodynamics of course, which is the actual issue with eternal steady state universes.)


Probably better not to use the term "steady state" here (even if pretty appropriate) in that the "steady state" cosmological model is/was one that is exponentially expanding, with all physical observables statistically time-independent. It solves Olbers' paradox due to the radiation redshift. That model was observationally incorrect, but it has actually pretty much been reborn in "eternal inflation", in which the Universe as a whole is in a quasi-exponential state with local regions expanding sub-exponentially, like our observable universe.

In either classic steady-state or eternal inflation case, energy conservation is not necessarily a problem: you can have vacuum energy that converts steadily into radiation, while being generated by the expansion.


How does the universe expand? What is it expanding into? And why isn't that thing considered the universe?


Quoting Wikipedia: "It is an intrinsic expansion whereby the scale of space itself changes. The universe does not expand 'into' anything and does not require space to exist 'outside' it. Technically, neither space nor objects in space move. Instead it is the metric governing the size and geometry of spacetime itself that changes in scale." https://en.m.wikipedia.org/wiki/Expansion_of_the_universe


What is the metric governing the size and geometry of spacetime? Gravity??


Gravity is not the metric, but it does interact with the metric (well, gravity is what we call it when mass or energy affects spacetime). The presence of mass/energy causes the shortest distance between two points to no longer be a straight path through space. The path a free-falling object takes through spacetime is always the path that extremizes the spacetime interval among all paths (in that sense the path is "straight", the same way that in flat space a straight line is the shortest path between two points); this interval is given by the metric tensor. Gravity makes this path appear curved in space.


The metric of the spacetime manifold. See https://en.wikipedia.org/wiki/Metric_tensor .

When you have a space that isn't just globally Euclidean space (such as the surface of a sphere), the idea of a vector as "a direction you can go in and an amount of how much or how fast", independent of a base location, no longer makes sense. Instead, each point in the space has associated with it a space of "tangent vectors" at that point, and these spaces are related to each other.

The metric tensor associates to each point a "bilinear form" with some properties, essentially a way of doing something like a dot product of two tangent vectors at that same point.

This in turn allows for defining the notion of the length of some curve through the manifold.
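
If a concrete toy helps, here is a small numerical sketch (my own example, using the standard round-sphere metric ds^2 = R^2 (dtheta^2 + sin^2(theta) dphi^2)) of the metric's job: it turns tangent vectors into lengths, and thereby decides which path between two points counts as "straight":

    import numpy as np

    R = 1.0
    theta0 = np.pi/4                        # fixed colatitude, 45 degrees from the pole
    phi = np.linspace(0.0, np.pi, 10_000)   # go half-way around at that colatitude

    # The curve's tangent vector in (theta, phi) coordinates is (0, 1); its length
    # under the metric is sqrt(g_phi_phi) = R*sin(theta0), and the curve's length
    # is the integral of that "speed" over phi.
    speed = R*np.sin(theta0)*np.ones_like(phi)
    print("constant-latitude path:", np.trapz(speed, phi))        # pi*R*sin(theta0) ~ 2.22

    # The geodesic between the same endpoints (a great circle over the pole) is shorter:
    p1 = np.array([np.sin(theta0), 0.0, np.cos(theta0)])          # endpoints embedded in 3D
    p2 = np.array([-np.sin(theta0), 0.0, np.cos(theta0)])
    print("great-circle distance:", R*np.arccos(np.dot(p1, p2)))  # pi/2 ~ 1.57

The constant-latitude curve is "straight" in the (theta, phi) coordinates, yet the metric says it is longer than the geodesic; in GR the spacetime metric plays the same role for paths through spacetime.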


Apparently it can be “loosely” thought of as such, but that implies there is some interesting difference way above my level: https://en.m.wikipedia.org/wiki/Metric_tensor_(general_relat...


> What is it expanding into?

The future, literally. To grossly oversimplify: if all of space is east-west, and time is north-south, the Big Bang is the north pole. Only the universe is the map rather than the globe, and the globe doesn’t have to exist for the map to exist and to have the same expansion in space (/longitude) with respect to time (/latitude).

Also there may or may not be a Big Crunch/south pole, this is all just a way to get into the nature of the geometry by way of a convenient frame of reference.


Energy is not conserved on the scale of the Universe. General Relativity has no energy conservation law (it has a stress-energy tensor conservation law). Two examples of this: expansion of the universe causes the total energy contained in radiation to decrease, and the total amount of dark energy to increase.


It depends on the way you look at things:

Noether's second theorem works fine in General Relativity (in fact, this was the historic context of her paper). So for any given time-like vector field, you'll get an energy conservation law. In the case of Friedmann cosmology, choosing cosmological time as said vector field, you'll get a term proportional to H² which picks up the change in energy.

However, you won't be able to make this into a covariant expression: Gravitational energy-momentum can be expressed in terms of pseudo-tensors at best...


I have a related question, borne out of my ignorance as I am not a physicist nor a mathematician.

> Noether's first theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law.

Why is this not obvious? Are not symmetries necessarily the transformations which conserve quantities, and the types of symmetry equivalent to the type of conservation?

Because Noether got a lot of respect for her work (despite working in a time of great sexism, no less!), I know that either I have totally misunderstood or this is a Columbus' Egg [0] — but I am curious which, and if it isn't a Columbus' Egg, what I've misunderstood (assuming that sort of thing can even fit into a HN-sized reply and I'm not asking something that would normally be the conclusion of a final-year degree-level physics module).

[0] https://en.m.wikipedia.org/wiki/Egg_of_Columbus


> Are not symmetries necessarily the transformations which conserve quantities

A priori, a symmetry is a transformation that, when applied to any valid trajectory, yields another valid trajectory. It is not obvious to me why (certain types of) symmetries necessarily yield conserved quantities...


The symmetry defines that which is conserved, doesn’t it?

If I have a motion vector and a force vector, and if work done is ∫ force • displacement dx, and my transformation is one which conserves the scale of and relationship between both vectors (e.g. displacement), then is it not automatically true that work done is also conserved under that transformation?


That's not what Noether's theorem means! The "symmetries" referred to here are operations which leave invariant the action of the physical system (heuristically, the physical laws governing the system). To the extent that one can (at least in principle) write down the action of composite systems as the sum of their actions + interactions, these are taken to mean (again heuristically) the laws of physics writ large. Likewise, "conservation law" here means not specifically the particular result of an experiment (like the work integral you describe), but a more general notion of conservation, in the sense of there being universally invariant conserved quantities (i.e. things that cannot be created or destroyed).

To get a feel for it (and why it may not necessarily be intuitive) consider the following pairs of symmetries and conservation laws:

- The laws of physics are invariant under time translation (i.e. repeating an experiment at different times gives you the same result ceteris paribus). The corresponding conserved quantity is energy.

- The laws of physics are invariant under spatial translation (i.e. repeating an experiment at two different places gives you the same result ceteris paribus). The corresponding conserved quantity is momentum (this is a vector: one component of momentum for each possible direction of translation).

- The laws of physics are invariant under rotation (i.e. repeating an experiment under different orientations gives you the same result ceteris paribus). The corresponding conserved quantity is angular momentum (again a vector, since the rotation group SO(3) has three generators).

- Electromagnetism, constructed as a classical field theory, exhibits an internal symmetry in the field quantities. The corresponding conserved quantity is the electric charge (more precisely, a conserved 4-vector current).

It is not a priori obvious from the kind of arguments you've supplied why these should be the case. Demonstrating them requires access to the Lagrangian formalism (i.e. action principles) and to how the action behaves under these symmetry operations. I would say that, as you suspect, you are fundamentally misunderstanding the situation.
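
If it helps to see why the machinery is needed, here is a small symbolic sketch (my own toy example with sympy, a single harmonic oscillator rather than anything field-theoretic). The Lagrangian has no explicit time dependence, and the corresponding Noether charge, which turns out to be the energy, is conserved only once the equation of motion is imposed; the conservation follows from the dynamics, not from a definition:

    import sympy as sp

    m, k, x, v = sp.symbols('m k x v', real=True)

    # Harmonic-oscillator Lagrangian; no explicit time dependence,
    # i.e. it is invariant under time translation.
    L = sp.Rational(1, 2)*m*v**2 - sp.Rational(1, 2)*k*x**2

    p = sp.diff(L, v)          # canonical momentum m*v
    E = v*p - L                # Noether charge for time translation (the energy)

    # Along a trajectory, dE/dt = (dE/dx)*v + (dE/dv)*a.  Only after using the
    # equation of motion m*a = -k*x does this vanish:
    a = -k*x/m
    dEdt_on_shell = sp.simplify(sp.diff(E, x)*v + sp.diff(E, v)*a)
    print(E)                   # k*x**2/2 + m*v**2/2
    print(dEdt_on_shell)       # 0

Nothing in "the laws look the same at all times" names v*dL/dv - L by itself; picking out that particular combination as the conserved quantity is what the Lagrangian formalism buys you.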


Thanks! I still don’t really get it, but telling me I need the Lagrangian formalism is helpful, and tells me what I need to study in order to really understand Noether's theorems.

I’ve heard of the Lagrangian before, but my knowledge of physics is probably somewhere around the level of someone who has only just finished the first year of their degree, at least judging by the modules my alma mater lists for their physics degree (I got Software Engineering from them 14 years ago, all my physics knowledge since then is personal geekery).


Right. Sabine Hossenfelder talks about it here: https://www.youtube.com/watch?v=ZYM6HMLgIKA&t=395s


The Hubble was pointed at what appeared to be a black void of space, and revealed lush fields of stars and galaxies.

So at one degree of perception, we have an empty void, and at another, a bright flush of light and activity.


There is still plenty of space between the individual stars in the Hubble Deep Field image. From that point of view it just confirms the paradox - even with a powerful telescope stars don't fill up your entire field of view.

I think a more fitting example of "an empty void yet a bright flush of light" would be the microwave background. With eyes sensitive to longer wavelengths the entire sky is indeed bright.


> even with a powerful telescope stars don't fill up your entire field of view.

Suppose the experiment is repeated on a black pixel from the Deep Field image, and another swell of stars is observed, hinting at a kind of fractal distribution.

Were the universe eternal and static, why could this pattern not repeat indefinitely in infinite time, space and matter? The paradox seems to assume a kind of infinite level of sensitivity of the observer.


No, the paradox as described in the Wikipedia article doesn't assume an infinite level of sensitivity.

The figure explains it visually - the further away you go from the observer, the more stars you capture in your camera's field of view and the apparent brightness stays the same. The 1/r^2 term for light intensity is cancelled by the r^2 for the number of stars.
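
A toy sum over shells makes the cancellation explicit (arbitrary units, a uniform star density, and no occlusion or redshift assumed):

    import numpy as np

    n = 1e-3        # stars per unit volume (arbitrary)
    L = 1.0         # luminosity per star (arbitrary)
    dr = 1.0        # shell thickness

    total = 0.0
    for i, r in enumerate(np.arange(1.0, 101.0, dr), start=1):
        stars_in_shell = n * 4*np.pi*r**2 * dr    # grows like r^2
        flux_per_star = L / (4*np.pi*r**2)        # falls like 1/r^2
        total += stars_in_shell * flux_per_star   # each shell adds exactly n*L*dr
        if i % 25 == 0:
            print(f"after {i:3d} shells: total flux = {total:.3f}")

Every shell contributes the same amount, so in an eternal, static, infinite universe nothing cuts the sum off and the total grows without bound.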

It's interesting to think about what the experimental result you describe would imply. It would either contradict the nature of light or mean that we're at the center of a cloud of stars whose density falls with distance from us.


Thanks for spelling it out. Makes more sense now.


> With eyes sensitive to longer wavelengths the entire sky is indeed bright

With eyes sensitive to the CMB, the CMB is dimming and reddening. Such eyes witnessing an essentially isotropic and homogeneous CMB would be Eulerian observers of the CMB. (In contrast to observers who see a dipole anisotropy because of acceleration along one spatial axis, for example, or observers immersed in the gravitational field of a massive system like a galaxy cluster or a planet).

Such Eulerian observers see a clearly peaked spectrum, essentially identical to that of a blackbody radiator that is cooling. If such observers are in deep inter-galaxy-cluster space and equipped with instruments to augment their CMB-sensitive eyes, they'd detect plenty of bright spots with frequencies much, much lower than the peak in the CMB. As an example, a https://en.wikipedia.org/wiki/Radio_galaxy will be much brighter at wavelengths longer than that of the CMB (the spectral radiance peak of the CMB is ~160 GHz and falls off quickly away from the peak).

That -- correcting for proper motions and atmospheric effects -- our view of the sky is pretty uniform in the CMB (cf. https://en.wikipedia.org/wiki/BOOMERanG_experiment and beyond) but far from uniform in VLF, radio, IR, UV, X-Rays, gamma rays, and so on (as known since roughly the 1930s thanks to https://en.wikipedia.org/wiki/Karl_Guthe_Jansky#Radio_astron... ) just like it is in visible light, and that the bright spots at different frequencies aren't coincident in our sky, are important pieces of evidence which must be dealt with by any prospective model of physical cosmology.


This doesn't make sense to me. They're assuming that brightness is a continuous function that you can keep dividing in half over and over and result in a real number, but brightness is not continuous, at a certain point you only have one photon left. It stops being a question of how many photons per second and becomes a question of how many seconds between a photon.


> brightness is not continuous, at a certain point you only have one photon left

If you are going to use a "photon" interpretation, you are using quantum mechanics, and in QM "brightness" involves the probability of detecting a photon and does not require there to be an exact integral "number" of photons. So brightness is still continuous in QM; the probability of detecting a photon can keep getting smaller and smaller indefinitely, without ever having to discontinously jump to zero.


Well, QM didn't exist yet. Instead, you had competing concepts of light as a wave and light as a particle. As a wave, it would seem perfectly reasonable that you could keep dividing and that it was a continuous function. We have the benefit of QM in hindsight.

However, think of it another way ... if the stars had infinite time to pump out these "photon" particles you speak of (harmph! highly dubious), then there ought to be an infinite number of them in any given space, as they have had an infinity of time to reach you.


> This doesn't make sense to me.

Seconded, but I think you're misunderstanding the paradox's axioms. Part of the assumptions is an infinite, homogeneous distribution of stars, which I think addresses your point (see the "The paradox" section).

I think the salient point is why you'd assume stars are both infinite (in number and in lifespan?) and homogeneous on an arbitrary scale.

Edit: additionally, as long as orbital motion is included in this contrived model, blockage would certainly produce a less than perfectly bright sky.


The point is that any decrease is offset by the increase in the number of stars. Also your distinction between >1 photon per second and <1 photon per second is meaningless.


Can you elaborate a bit more? I don't understand how this responds to the commenter's critique. Under the hypothesis, almost all stars in the universe would be so far from Earth that we would see at most one photon from their light by the time it reached us, for whatever the applicable time interval is. Would that single photon be sufficient to register the star's existence?

I'm not a physicist so presumably I'm wrong. But why is this wrong? As far as I understand it, this paradox is the reason we have the theory that space is expanding at a rate which makes objects in effect move away from each other faster than the light they emit.


The point is that one photon from a star seems like nothing, but one photon multiplied by infinitely many stars is infinitely many photons blinding us.


"at most one". As you get further away zero protons are reaching you on most cases, and zero photons times infinite stars is zero, which would appear to be darkness


Zero multiplied by infinity does not have to be zero; it can be anything between zero and infinity. Refresh your calculus: https://en.wikipedia.org/wiki/Calculus


How does calculus relate to the context of this discussion though? Photons are discrete, not continuous, so I don't see how calculus applies.


The discreteness is a distraction here. You are interested in the total flux of light, from all stars in some patch of the sky. If some of them contribute on average less than one photon, that doesn't matter, it doesn't cause some sudden drop-off in intensity.

If it did, the same thing would happen not just for stars, but for all sorts of things. If you position a computer screen on a hilltop far enough away that, on a dark night, you can just make out whether it's on or off, then your eyes are getting about 6 photons per second. It doesn't matter whether these come from a million separate pixels, or from one light-bulb of similar total brightness.


That "#:~:text=" highlighter is so annoying it has made me switch from Google Chrome to Firefox.


You can click to unhighlight. I find it useful for linking to a section of a page without an anchor nearby.


I don’t have Chrome, could you elaborate? Where is the highlighter and what does it do?


The link points to "https://en.wikipedia.org/wiki/Olbers%27_paradox#:~:text=In%2...."

In case that renders as a link by Hacker News, I've added a space here and removed the https part:

en.wikipedia.org/wiki/Olbers%27_paradox #:~:text=In%20astrophysics%20and%20physical%20cosmology,infinite%20and%20eternal%20static%20universe.

What it does is highlight that text in the page with a yellow background color, making the rest of the text difficult to read (your eyes are drawn to the very yellow highlighted part).


I love it, and have been manually creating such links every day since discovering the feature. One should be able to link to any portion of a page, and now one can, largely. The highlighting ought to be something a browser allows a user to customize, but I really wish Firefox (which I use about half the time) had the feature.


Agreed (I use Firefox), but (from being forced to use Chrome) you should be able to get the highlight to go away by clicking somewhere else on the page or by scrolling.



That reminded me to look for extensions that automatically move mobile wikipedia links to desktop ones; thankfully there's one for both Chrome and FF.



Would love to see that rendered superfluous https://meta.wikimedia.org/wiki/Community_Wishlist_Survey_20...


As I understand it, in the galactic core where stars are more densely packed, it would be bright all the time. I wonder if there's a photorealistic render of this.


The game Elite Dangerous simulates it. Not sure how realistic it is, but it is fun to explore.


You can watch it here: https://youtu.be/mj09iR6Tjd8


It seems to me like this is only true if you assume space is perfectly transparent, which seems like by far the weakest assumption.

Introducing any tiny amount of absorption in space would fix this, even in an infinite size, infinitely old universe.


Well, no, the paradox accounts for this. In an eternal universe any intervening dust would have steadily absorbed radiation from the stars behind it until it glowed at the same temperature. Therefore, the sky should still look uniformly bright.


Ahh, so we're assuming that an eternal universe implies the entire universe is in thermal equilibrium. This makes some sense, but it feels like cheating. Because then the paradox is broader: "how can anything be at a different temperature to anything else in an eternally old universe?" Nothing really to do with night skies.


What's the main piece of evidence against the hypothesis of "something" (a force, dark matter, dark energy, some exotic effect) dimming/slowing/distorting the light over very high distances?


I headed to the "Explanation" section, expecting some opining from physicists, and was somewhat confused that it started with a concrete "suggestion" from Edgar Allan Poe.


I recall being told that Hawking radiation happens when matter/anti-matter spontaneously appears when straddling an event horizon.

The Universe is full of matter/anti-matter bubbles popping in and out of existence. Most of them don't straddle an event horizon, but they still occur.

How much would such a fog contribute to the darkness at night?

Edit: Aha, clouds are irrelevant. Any dark clouds would heat up until they are the same brightness as the rest of the sky. Thanks, article!


Maybe there are enough black holes to hide the stars behind them.


Would the sky also be filled with infinite black holes?


Doesn’t the expansion of the universe explain this? I don’t understand the paradox.


It does, but at the time, we didn't know about that.


Let's assume the universe is not expanding but is infinite. At some point in time stars started lighting up. Maybe they even all lit up at once, a long time ago. But some stars are so far away that the light from them has not yet reached us.

Even though there are an infinite number of stars we can only see the light from a subset of them because the light from the rest of them has not reached us yet.

So I think the simple resolution of the paradox is that the speed of light is not infinite. No?

What I also never understood: what about matter that is not lit up? There could be many more unlit heavenly bodies than stars, which would block the light from the stars.


Yes, a spatially infinite universe, with no expansion, but finite time since the stars all turned on, doesn't have this paradox. I think people didn't like this answer because it needs a beginning of time (or at least, of time with stars).

Having expansion just-about implies there must be such a time. Because if you run it backwards, eventually the stars will all be touching each other, and clearly can't then behave exactly like stars do -- something else must have been going on.

Our modern answer does have such a time, and expansion. And the answer for why no stars shine before some time is that they took a while to condense into dense clumps from the initially quite smooth hot gas. What you see in the gaps between them is precisely this hot gas, at the moment it first became transparent to light, this is the microwave background radiation.

In their scenario of infinite time, matter which isn't lit up doesn't help. It would, like your eyeball, get the light of the stars from all directions, and would soon equilibrate to the same temperature as these surfaces.


> It would, like your eyeball, get the light of the stars from all directions, and would soon equilibrate to the same temperature

I see, that may be the crux of the paradox.

But if we assume that stars were born at some time, then dark planets could be too. And if stars came into existence at some time, it is not too crazy to think that new stars might continually come into existence, and planets too, so new planets would be created continually to block the light.


If there's a time at which the stars are born, then there's no paradox. Non-star things can stay cold for the same reason comets stay cold in our universe: they can radiate heat in almost all directions (into the black sky) and receive heat from only a few (like the sun on a brief swing past, still only < 1% of the sky).

But if the stars have been there forever, the comet is effectively in an oven of uniform temperature. The equilibrium state at which it radiates heat as fast as it gets it has the comet's surface the same temperature as the rest of the oven. So it glows.
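
A back-of-the-envelope version of that oven argument (my own numbers, treating everything as a blackbody): a body that sees a fraction f of its sky covered by stellar surfaces at temperature T settles at T_eq = f^(1/4) * T.

    T_star = 5778.0                   # K, Sun-like surface temperature

    def t_eq(sky_fraction):
        # radiative balance: absorbed f*sigma*T^4 equals emitted sigma*T_eq^4
        return sky_fraction**0.25 * T_star

    # Our sky: the Sun covers ~5.4e-6 of the celestial sphere as seen from Earth,
    # which gives roughly Earth's no-atmosphere equilibrium temperature.
    print(t_eq(5.4e-6))   # ~278 K
    # Olbers' sky: every line of sight ends on a stellar surface (f = 1),
    # so a comet, a planet, or your eyeball ends up at the stars' own temperature.
    print(t_eq(1.0))      # 5778 K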


Imagine an infinitely long hallway in a hotel, as a mental exercise. And imagine every room is filled, and each light is on. And imagine looking at this hotel from a long distance. It will look like a continuous line.

And now imagine every other room is occupied. Still an infinite number of rooms are occupied, but this line, from a distance, will be half as dim.

Now imagine every millionth room is occupied. Still, we have an infinite number of occupied rooms. And now, depending on the distance from which you are viewing this hotel, you may notice points of light instead of a dim continuum.


But the paradox assumes uniformity: since the shells of the virtual spheres are 2D surfaces, there are four times as many stars at twice the distance, and since each is only a fourth as bright, a far shell is as bright as a near one.

In your example, the number of doors is linear with distance but light falls off quadratically.


I see. This gives me new things to think about. Thank you for explaining it like this.


In the first example, there's an infinite line of lights. In the last example, it's the same infinite line divided by a million, which is still an infinite line. They'll look identical when the photons reach your eyes.



