The first nuclear clock will test if fundamental constants change (quantamagazine.org)
288 points by beefman 72 days ago | hide | past | favorite | 182 comments



Let's assume they manage to make a nuclear clock out of this, with an Allan deviation that's low enough to be useful. Once that's done, it'll take years of observation to measure any meaningful differences and gather enough data to notice something.
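For reference, Allan deviation is usually estimated from fractional-frequency samples by averaging them into bins and taking the RMS of successive bin differences. A minimal non-overlapping sketch in Python (the function name and test data are illustrative):

```python
import math

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    at an averaging time of m sample intervals."""
    # Average the data into bins of m consecutive samples.
    n = len(y) // m
    avg = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    # sigma_y^2(tau) = 1/(2(n-1)) * sum of squared successive differences.
    diffs = [(avg[i + 1] - avg[i]) ** 2 for i in range(n - 1)]
    return math.sqrt(sum(diffs) / (2 * (n - 1)))

# A perfectly stable clock (identical samples) gives zero deviation;
# noisier data gives larger values.
print(allan_deviation([0.0] * 100))
```

For white frequency noise the deviation falls as 1/√τ; a clock becomes "useful" for this experiment once that curve flattens out below the size of the effect you're trying to see.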

Meanwhile, a centimeter's change in height, the position of the moon, and a whole host of other noise sources have to be canceled out.

I have no doubt this will be done... and it will be awe inspiring to hear it all told after the fact.

While you're waiting... I found a really cool meeting recorded on YouTube[1] that has the clearest explanation of how chip-scale atomic clocks work I've ever seen.

I look forward to Chip Scale Optical Lattice clocks

[1] https://www.youtube.com/watch?v=wHYvS7MtBok


Can't they do something similar to the LIGO/Virgo setups, i.e. multiple experiments running the same or similar hardware, so that you can remove the type of noise you mentioned easily enough?

Additionally, this feels like it is much cheaper to deploy compared to the interferometer hardware used by those experiments, so you can put enough replicas around the world to cancel out any local source of noise.


> Meanwhile, moving the height of anything a centimeter, the position of the moon, and a whole other host of noise sources have to be canceled out.

Because time runs slower the stronger gravity becomes? I don't think it would be a problem, as long as the entire experimental apparatus is within the same gravity field for the duration of a particular measurement.


Optical lattice clocks are so precise these days, you can detect the difference in clock rates caused by a 2 centimeter difference in height. The higher clock will run faster.

In the famous thought experiment, you can't tell whether an elevator is sitting in a gravitational well or in an accelerating frame. It turns out that is only true if the elevator is sufficiently small.

Sufficiently small is getting smaller every year.


The lower gravity clock will run faster, but the experiment should give the same result, regardless of which frame it's running in. The same way that the caesium-133 atom transition frequency is 9192631770 Hz, regardless of the gravitational field.


The time shift of a Cesium beam atomic clock at different altitudes is a well established experimental result.

Optical lattice clocks are much more stable, and the output is light, so a phase difference can be detected much, much more quickly. According to [1], a difference of 1 centimeter can be measured. The quote of interest:

  An optical lattice clock with a frequency accuracy of 1 × 10^−18, which is currently the most accurate in the world, has a detectable gravitational potential equivalent to an elevation difference of approximately 1 cm.
If you can access it, here's the related letter in Nature[2] from 2018.

[1] https://www.rd.ntt/e/research/JN202304_21619.html

[2] https://www.nature.com/articles/s41586-018-0738-2
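That 1 cm figure follows directly from the weak-field redshift relation Δf/f ≈ g·Δh/c². A quick back-of-the-envelope check in Python (g and c are standard values):

```python
g = 9.81          # m/s^2, surface gravity
c = 299_792_458   # m/s, speed of light (exact by definition)

def fractional_shift(delta_h):
    """Weak-field gravitational redshift: delta_f / f = g * delta_h / c^2."""
    return g * delta_h / c**2

print(fractional_shift(0.01))   # 1 cm: ~1.1e-18, matching the quoted 1e-18 clock accuracy
print(fractional_shift(1609))   # 1 mile (roughly NIST's altitude in Colorado): ~1.8e-13
```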


Yes, nevertheless each clock in its own frame of reference is measuring 9192631770 Hz, or whatever. It's only with multiple communicating clocks that you'd observe the difference: that 9192631770 Hz in one place is different to 9192631770 Hz somewhere else.


So if you have three clocks, one on the floor, one on a table and one at the ceiling, you could tell that relative to the clock on the table, the bottom one runs slower than the one at the ceiling, right?

Is the statement then that if the elevator is sufficiently tall, this difference disappears if the elevator is accelerated by a rocket in outer (flat) space, vs hanging in an elevator shaft?


In a gravity scenario, the clock closest to the "bottom" would run slowest. In an accelerating elevator scenario, the acceleration would be equal in all 3 clocks, and they would all run at the same rate.


So a real universal UTC should have a unit of seconds that assumes a perfectly homogeneous universe, with each user responsible for correcting for the depth of their local gravitational well/hill.


No, there is no 'real universal' time, just agreed upon reference frames.

UTC uses TAI, which is a weighted average of global atomic clocks.

Any physically realized time standards will have errors.

Proper time is the only invariant form under GR.


Universal Time would really need to be based on seconds passing at the center of mass of the universe.

Each observer could run their own clock, and calculate adjustments from there. Although that might not be very convenient.


Why would it have to be at the center of mass of the universe? I'll go ahead and assume that if every observer treated their local observable universe as homogeneous, then at most global (co-moving) locations across the entire universe we would get similar results, but I am eager for someone smarter's reply.


Also, is the CMB not a reference frame?


UTC is defined on the rotating “geoid”, which is the modern version of mean sea level. If you happen to be NIST (one of the USA’s principal time labs) or Schriever Space Force Base (which hosts the ground-based reference atomic clocks for the GPS satellite fleet), you need to adjust the speed of your clocks slightly to compensate for being in Colorado, 1 mile above sea level.


> It turns out that is only true if the elevator is sufficiently small.

Do you have more context, explanation or source for this? This is the first time I’ve heard of this being the case and would love to learn more about it.


1) What the others said: in a gravity well it depends on height above the floor, but not in an accelerating elevator; this limits the height according to the sensitivity of your instruments.

2) additionally, we do not know of any disc-shaped objects of sufficient density and matter distribution to produce a uniformly linear gravity field; only galaxies come close but their gravity is small compared e.g. to that of Earth. Most of the time gravity is caused by spherical objects, meaning the lines of the fields diverge. If you had an elevator with a sufficiently large, completely flat floor, you could tell that you're hovering above a planet by observing that a plumb bob forms different angles with different (perfectly upright) walls of the cabin.
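The plumb-bob effect is easy to estimate: two plumb lines separated horizontally by d, both pointing at the planet's centre from radius R, make an angle of roughly d/R radians with each other. A small sketch (the 10 m cabin width is illustrative):

```python
import math

R_EARTH = 6.371e6  # m, mean radius of Earth

def plumb_angle(separation, r=R_EARTH):
    """Angle (radians) between two plumb lines separated horizontally by
    `separation` metres, each pointing at the centre of a sphere of radius r."""
    return 2 * math.atan(separation / (2 * r))

# A 10 m wide elevator cabin at Earth's surface:
angle = plumb_angle(10.0)
print(math.degrees(angle) * 3600)  # in arcseconds; ~0.3", tiny but nonzero
```

In an elevator accelerated by a rocket in flat space, the angle is exactly zero for any cabin width, which is what makes the two cases distinguishable in principle.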


Time moves slower at the bottom of an elevator shaft because it is closer to the earth. We have instruments so precise we can measure that difference now.


Yes I’ve always wondered about this. I thought the thought experiment basically dealt with points not regions though


This video w/ Dr Iva Fuentes, from two weeks ago, spends a few segments on the breakdown of the equivalence principle w/ quantum clocks.

https://youtu.be/cUj2TcZSlZc?t=4006&si=Ykt0JDqEKk1ObMp3

Video notes have lots of references to relevant papers (and the video itself surveys a few aspects of these issues).


I assume they mean that, in a gravity well, you get less acceleration as you get away from the well, whereas in an elevator you'll get the same acceleration in all places inside the elevator.

Then again, the intent of the thought experiment was that you stay in one place.


The gravity field we are in isn't that constant. The gravitational influence of the moon is strong enough to move a lot of water here on Earth. The other planets are a lot further away, but not completely without gravitational influence. Earth's orbit around the sun isn't a perfect circle and has ~3% difference between lowest and highest point. The seasonal shift in mass distribution on Earth is big enough that we used to correct for it in astronomical time observations (the up to 30ms or so between UT1 and UT2).

On the other hand, I don't think this experiment is really all that sensitive to gravity since we aren't really measuring time.


But it would depend on how long it takes to make a single measurement. Perhaps the moon wouldn't move far.


> Lots of nuclei have similar spin transitions, but only in thorium-229 is this cancellation so nearly perfect.

> “It’s accidental,” said Victor Flambaum, a theoretical physicist at the University of New South Wales in Sydney. “A priori, there is no special reason for thorium. It’s just experimental fact.” But this accident of forces and energy has big consequences.

...

> Physicists have developed equations to characterize the forces that bind the universe, and these equations are fitted with some 26 numbers called fundamental constants. These numbers, such as the speed of light or the gravitational constant, define how everything works in our universe. But lots of physicists think the numbers might not actually be constant.

Putting these things together, if the physical constants do change over time, then perhaps there really isn't anything special about thorium-229, it's just that it's the one where the electrical repulsion and strong nuclear forces balance out right now. In a billion years maybe it would be some other element. Maybe we're just lucky to be alive at a time when one of the isotopes of an existing element just happens to line up like this.

Perhaps too there's an optimal alignment that will happen or has already happened when those forces exactly balance out, and maybe that would be an ideal time (or place, if these constants vary by location) to make precise measurements in the changes to these constants, much like a solar eclipse was an ideal opportunity for verifying that light is bent by gravity.


Not a physicist, just a passionate layperson.

AFAIK real practitioners choose their units such that a lot of things are unity: speed of light is 1 (hence E = M), h-bar is 1, etc.

There are some numbers like the “fine structure constant” (which I think is tantalizingly close to 1/137) that do seem difficult if not impossible to derive from others.

The pop-science explanation for this that a layperson like myself would know about is the “anthropic” principle: they are such because only in such regimes would anyone be around to ask the question.

I don’t know what real scientists think about this.
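For what it's worth, the fine-structure constant is dimensionless, which is why asking whether it varies is meaningful at all. It can be recovered from the CODATA values (the numbers below are standard; the script is just a sanity check):

```python
import math

# Fine-structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
e    = 1.602176634e-19     # C, elementary charge (exact in the 2019 SI)
eps0 = 8.8541878128e-12    # F/m, vacuum permittivity (CODATA 2018)
hbar = 1.054571817e-34     # J*s, reduced Planck constant (exact in the 2019 SI)
c    = 299_792_458         # m/s, speed of light (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)       # ~0.0072973...
print(1 / alpha)   # ~137.036 -- the "tantalizingly close to 1/137" value
```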


The speed of light will always be seen to be the same no matter what, no matter where you are, no matter when you are. That's because we measure the speed of light with light, and we measure distances using light or light-by-proxy (because the electronic interactions that make normal forces what they are... electronic and subject to the speed of light, as is everything else).

Other constants might change, but it would be very surprising if the speed of light (as observed locally) could possibly vary.


c is no longer measured, it is defined, and unless some contradiction to special relativity is discovered, c cannot change. If the speed of causality changes, then our measure of distance would change. For example, if c halves in some sense, then this means light travels half as far during N ticks of a clock, and the meter will halve (and all internet latencies will approx double!). If we keep the old meter, then we might say c has changed; it's truly a matter of definition (and practicality) at that point.


Same goes for their new thorium “clock”. They define it as their unit to measure everything else. They assume that all constants may be changing but not their thorium clock. I think this is an unjustified assumption.


If they detect a change, there would be a strong incentive to replicate the setup and subject it to scrutiny. Academia, even with its faults, is a very good institution for that.


Wouldn't a different speed of light impact the Schwarzschild radius of black holes of a given mass?

Assuming that you can create a standard clock, and given a black hole of standard mass, you can then measure speed of light in black hole radii per unit of time, which will differ with different speeds of light.


> Wouldn't a different speed of light impact the Schwarzschild radius of black holes of a given mass?

No it wouldn't. Our fundamental unit of distance (the meter) is defined in terms of the speed of light, so the radius will stay exactly the same, in meters.


> Our fundamental unit of distance (the meter) is defined in terms of the speed of light, so the radius will stay exactly the same, in meters.

Why so?

The Schwarzschild radius formula (r(s) = 2 * G * M / c²) uses c² as a dividing constant. So if the speed of light is reduced by half, the radius of a black hole of the same mass will be larger by a factor of 4.

Add to that the reduced distance light travels in the same time, and you get that the time light takes to travel the distance equivalent to the Schwarzschild radius of a black hole of the same mass is 8x larger in a universe with a speed of light 1/2 that of ours, assuming no other constants have been modified.
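Those factors are straightforward to check numerically under this comment's assumption that G and M keep the same numerical values when c halves (which is exactly the assumption the reply below questions):

```python
G = 6.674e-11       # m^3 / (kg s^2), gravitational constant
M_SUN = 1.989e30    # kg, one solar mass as an example
c = 299_792_458     # m/s

def schwarzschild_radius(c_val, mass=M_SUN):
    return 2 * G * mass / c_val**2

def light_crossing_time(c_val, mass=M_SUN):
    # Time for light to traverse one Schwarzschild radius: 2*G*M / c^3.
    return schwarzschild_radius(c_val, mass) / c_val

r1, r2 = schwarzschild_radius(c), schwarzschild_radius(c / 2)
t1, t2 = light_crossing_time(c), light_crossing_time(c / 2)
print(r2 / r1)  # radius grows by c^2 -> factor of 4
print(t2 / t1)  # crossing time grows by c^3 -> factor of 8
```

The arithmetic is right as far as it goes; the dispute below is over whether G's numerical value can be held fixed while the units it carries change.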


You're ignoring the units of G. The gravitational constant has units N * m^2 / kg^2 = m^3 / (kg * s^2). That contains a length unit raised to the third power, which exactly balances the factor of 8 that you identified.


extraordinary. you’ve demolished his argument with purely dimensional analysis. bravo!


If you take a piece of string to measure the radius, and then later c changes and you measure it again, the two pieces of string will have a different length when put next to each other. Saying it doesn't change because the length of the meter changed in the meantime is playing with words. The reason humans came up with a definition based on light is the assumption that c is constant and thus would be a more reliable choice as a standard than some physical bar of metal kept in Paris. But if the constant is not constant after all then something else will be needed as reference.


We're inside the system (I don't mean this in the 'we're in a simulation' sense) - changes fundamental to how the system works are not necessarily measurable to us. It's one thing if the speed of light is localized and the changes are visible at different scales/distances. It's another if it's universal and changes - you have to understand that the speed of light is fundamental to how things work. It influences things like electromagnetic forces, which define atomic structure.

The speed of light changing would mean that even a physical object like a meter long ruler would also change.

Which is why, to my layperson understanding, this is such an exciting field of study - the different fundamental forces in play with these nuclear clocks might enable us to catch relative changes to the fine structure constant, which includes the speed of light as a component.


The point was that we wouldn't be able to tell that there was a change, not that a change couldn't happen.


Black holes also frequently send ambassadors to the White House so they can comply with US law. Human leadership in physical law is well respected universally.


We can try to use gravitational waves (speed of gravitation propagation) to measure length.


Gravitational waves travel at the speed of light as well. So if the speed of light changes, so would the speed of gravity waves, presumably - otherwise, they would probably not be equal today (though of course it could always be a coincidence).


Gravitational waves and light travel at the same speed in a vacuum.

What if you took a very very very long piece of glass and sent both gravity waves and light waves through it?

The light will be slowed down. Google tells me that there is something analogous to the refractive index for gravity waves, so there should also be some slowing of the gravity waves, but would the optical refractive index and the gravitational refractive index be the same?

I'd expect that it would not be the same. The optical refractive index if I recall correctly doesn't depend on the masses of the particles that make up the medium it is traveling through. Just charge and arrangement.

Gravity waves should only depend on mass and arrangement.


The speed of light in a medium is always some constant specific to that medium times c. If c changes, the constant will not be affected. So I don't see how you could use the speed of light in a medium to deduce anything about c. The same logic applies to gravitational waves. And as long as the speed of gravitational waves is always equal to the speed of light, the relative values of the two will still not change.

Say the old light speed in glass is k*c1 and the new speed is k*c2. The old gravity wave speed is n*c1, the new is n*c2. How do you use these numbers to find out if c1 == c2 or not?


Speed of light can also be derived from the speed of a signal through a length of wire.


I think the point is that if the speed of light changes, so does the length of the wire.


Basically. I suppose that ε0 and μ0 could change in different ways such that c remains the same, or in different ways such that c does change but the change might be noticeable.
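The relation in question is c = 1/√(ε0·μ0); a quick check with the CODATA values (standard numbers, nothing assumed beyond that):

```python
import math

eps0 = 8.8541878128e-12   # F/m, vacuum permittivity (CODATA 2018)
mu0  = 1.25663706212e-6   # N/A^2, vacuum permeability (CODATA 2018)

# Maxwell's equations give the speed of electromagnetic waves in vacuum:
c = 1 / math.sqrt(eps0 * mu0)
print(c)  # ~2.99792458e8 m/s
```

So a change in ε0 and μ0 that preserved their product would leave c untouched, which is the scenario the comment above is describing.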


You’re assuming a monotonic linear change. It could be periodic, or jumping between discontinuous values.


  These numbers, such as the speed of light or the gravitational constant, define how everything works in our universe. But lots of physicists think the numbers might not actually be constant.
In my ignorant, non-physicist head, gravity always struck me as a force that would make sense as variable.

Maybe that would explain all the missing 'dark matter', or even provide an alternate explanation as to why so many species on our planet were larger millions of years ago (assuming an explanation for these two phenomena isn't self-contradictory, which, given my lack of physics background, it might well be!)


The article mentions 26 constants but it seems there is more than that https://en.wikipedia.org/wiki/List_of_physical_constants

And I think if the constant is a ratio, like the fine structure constant (https://en.wikipedia.org/wiki/Fine-structure_constant), no change can be detected even if there were one, because the ratio will stay the same. Likewise a constant like pi will stay the same because it is a ratio.


There are 26 fundamental constants, i.e. values that cannot be determined from theory alone and need to be experimentally measured, in terms of which all other constants can be written. And it's not even a specific 26; 1/c is just as valid a constant as c, and you could rewrite any equation to use that instead of c.

For ratios, the constancy of the ratio is exactly what they seek to test.


This always seems like a logical error to me and perhaps someone can explain:

To measure a constant, you need something constant, but you do not know if something is constant if you do not have something constant to measure it against. (False premise?)

I believe we can only assume things are constant; they merely appear constant.

If you read the work of the physicist Julian Barbour regarding time, I think you will be in for some remarkable insights. "Time arises out of change".

https://www.youtube.com/watch?v=GoTeGW2csPk


It's okay to measure one thing with something else that's variable. For example let's say I want to determine aluminum's coefficient of thermal expansion. I have a block of aluminum which I am measuring with a steel ruler. Both objects will change size if I vary the temperature, but by measuring both at several temperatures I can determine the ratio of their coefficients of thermal expansion. Funnily enough, if I'm using a mercury thermometer I'm really measuring everything relative to mercury's coefficient of thermal expansion.
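To make that concrete, here's a toy model of the measurement: the ruler reading is the block's length divided by the (also expanding) graduation spacing, so readings at two temperatures recover only the difference of the two coefficients, not either one alone. The coefficient values below are illustrative textbook-style numbers:

```python
ALPHA_AL = 23e-6   # 1/K, thermal expansion of aluminum (illustrative)
ALPHA_ST = 12e-6   # 1/K, thermal expansion of steel (illustrative)

def reading(L0, dT):
    """What a steel ruler reads for an aluminum block, both at T0 + dT."""
    block = L0 * (1 + ALPHA_AL * dT)   # true block length
    tick  = 1 + ALPHA_ST * dT          # ruler graduations stretch too
    return block / tick

L0 = 1.000  # metre-long block at the reference temperature
r0, r1 = reading(L0, 0), reading(L0, 100)
measured_diff = (r1 / r0 - 1) / 100    # apparent expansion per kelvin
print(measured_diff)  # ~1.1e-5 = ALPHA_AL - ALPHA_ST, not ALPHA_AL itself
```

The analogy to the constants debate: every measurement is a ratio of two physical processes, so only relative (dimensionless) changes are observable.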


It's possible to measure the ratio of some values that we think are constants https://en.wikipedia.org/wiki/Dimensionless_physical_constan... and see if they are the same here now and in old far away galaxies.


Matter in other galaxies would behave differently from matter in the Milky Way if the fundamental constants are not actually universal. I argue about this sometimes. Others keep stating that the wavelengths are equal, so everything else must be.


I think the better way to ask this question is: how much large scale spatial variation can there be in the laws of physics so that the observable behavior doesn't contradict existing observations? As far as I remember, this has been studied, but I can't find a reference right now.


wikipedia has a high level review of current constraints: https://en.wikipedia.org/wiki/Time-variation_of_fundamental_...

    fine-structure constant: less than 10^−17 per year
    gravitational constant: less than 10^−10 per year
    proton-electron mass ratio: less than 10^−16 per year


Well, if you think about it, at the large scale of the universe our laws are helped along by our mathematical inventions of dark matter and dark energy. So is there really dark matter and dark energy, or is our understanding of the laws of the universe incomplete?


Many, many, many scientists have asked this question. Many have made careers out of arguing the case.

But, the overwhelming majority of scientists that start out asking those questions ultimately land on the mainstream theories around dark matter and dark energy being our best, most consistent, and broadest ranging answers.

If someone were to come to them with a better theory that could explain more completely the sum total of these observations they would almost certainly be open minded about it.

So... is there really dark matter and dark energy? Probably. We've got a whole lot of evidence that isn't explained better by any alternatives. But I doubt any of these scientists would say it's totally impossible.


Yes of course, it's the best theory we currently have. Also Newton's gravitational theory was the best explanation until someone came with a paradigm shift that explained the observations better. And we seem to be quite stuck with the current theories, so I suspect we might need another paradigm shift.


Newton's gravitation is still pretty relevant today. Only in special circumstances is relativity necessary. I wonder if Dark Matter and Dark Energy will be just as relevant in the future?


I'm not sure that SR is really that large of a paradigm shift.

All Einstein did (I know that conceptually this was a large leap, and just saying "all he did" does him a severe disservice) is note that the mass of an object depends on the speed of the object, such that for velocities that are not an appreciable fraction of c we can use Newton's laws of gravitation perfectly well.

TLDR; Einstein did not replace Newton, he tweaked him.


As I understand it, dark matter and dark energy are just placeholders for discrepancies between our current physical model and observations made by telescopes like Hubble and Kepler. This could mean either that our measurements are inaccurate, or that the model is incomplete. Honestly, I think that both are extremely likely.


Dark matter (matter that has mass but does not interact in any other way) might be the literal solution. But there are also other suggestions (MOND is a big one).

The https://en.m.wikipedia.org/wiki/Bullet_Cluster is pretty interesting.


"Dark matter" and "dark energy" could just as well be called "unexplained matter" and "unexplained energy".

These terms are mostly placeholders for things we don't understand.


Not even that. "Unexplained matter" implies that there is some matter to explain (what type, etc.), when in reality it is an unexplained observation that could be explained by current laws/constants if only there were some more (actually a lot more) matter. It is a pure mathematical construct, and mathematically it would be just as valid to add a "dark constant modifier".


I think they can also just be an unknown phenomenon. They don't need to be literal mass or energy. Essentially, to fit our current model to observations of how galaxies and the universe in general behave, we need to add mass and energy to the equations to make them match. In other words, dark matter and dark energy are fudge factors to get our models to work.

There are a ton of theories to reconcile the differences, but very few are provable with our current techniques. Detecting literal dark matter is one possibility. Changes in universal constants would be another.


Our understanding of the laws of the universe is incomplete either way. If dark matter exists, we still don't know what it's made of or exactly what properties it has.


The mainstream thought is that they are real and undetected, but there are theories that they aren't, and there have been plenty of attempts to modify the laws to explain them away (and I suspect there's some wishful thinking that there's maybe a Nobel Prize there, so a fair bit of work has already been done, even though it's very controversial).


To be fair, the last time someone explored a discrepancy between known physics and empirical observation we got quantum mechanics and relativity. There is a commonly held belief that as observations outpace theory that we'll have a similar leap in technology. I think this is why everyone was so excited about the LHC and Kepler Observatory.

I don't know whether the next breakthrough in physics will be quite as relevant in our lives as quantum and relativistic physics. It would be nice if we could link gravity and E/M like we did with the strong and weak forces. Who knows what we could do if we knew how those two go together.


> So is there really dark matter and dark energy, or is our understanding of the laws of the universe incomplete?

These propositions are not mutually exclusive, the former implies the latter, right?


If the fundamental constants are not constant, why not expect them to change in this galaxy as well? The appeal to "other galaxies" seems suspect to me, a way to evade falsifiability.


"A way to evade falsifiability" is the goal of the statement, given that we've been searching for evidence to the contrary for as long as we've been able. We haven't found any, and we've searched close-at-hand the most thoroughly.


The galaxy is very small compared to the size of the universe. If there were observable differences from 100k light years away (so just 100k years ago), the differences across billions of light years should be much more noticeable.


If the constants are the same in distant galaxies, then that's either a massive coincidence or the constants are stable over both time and space (because of lightspeed delay). The further away we look, the more obvious any effect should be.

If we detect a change then it's worth checking if this is also observable over shorter distances and timescales, and at that point we would look at our own galaxy.


If the constants change over very long time spans, we could observe this by looking at distant galaxies from billions of years ago. We don’t have a way to make similar observations within our own galaxy.


What if the constants only changed over incredibly small scales, vibrating back and forth between two very similar numbers like a standing wave with extremely small amplitude and wavelength, such that any measurement done on even small scales has trouble seeing anything but the average?


Let's start with the universe since the ignition of the first stars. Your question is also super-interesting in the context of the very early universe, so I'll come back to that further below.

Depending on the constants, with significant fluctuation of them you'd expect spectral line broadening rather than the sharp lines we see in precision interferometry, violations of local Lorentz invariance, different structures in "stacked" spectra (like the Lyman-alpha forest), and instabilities in Keplerian orbits. Present measurement precision of subatomic transition spectra has really boxed you in on this: many physical constants have relative standard uncertainties on the order of 10^-10 or better.

> any measurement ... [sees only] the average

So you'd start wondering: in the limit of infinitesimal fluctuations, is a fluctuating constant just constant rather than an "effective constant"?

Where there's still wiggle room is in the exact masses of heavier-generation Standard Model particles (the top quark, the tau mass, the W-to-Z mass ratio, for example) and, somewhat frustratingly, Newton's gravitational constant, all of which have relative standard uncertainties worse than 10^-5.

(There's a quick explanation of standard uncertainty and relative standard uncertainty at <https://www.physics.nist.gov/cgi-bin/cuu/Info/Constants/defi...>)

However, assuming cosmic inflation, one might expect incredibly small scale fluctuations in physical constants to be stretched, just like incredibly small scale fluctuations in the densities of matter and radiation. This could lead to later-universe regions of arbitrary size with a significantly different value for one or more physical constants, just like we see regions relatively stuffed with galaxies (filaments) and regions that are relatively empty (supervoids). We'd expect that when we look at different parts of the sky we'd see differences in things like the Lyman-alpha forest, the population and/or spectra and/or light curves of quasars/supernovae/variables, and so on.

So, in order to have the apparently constant physical constants we observe, while keeping your idea that there are tiny fluctuations in them, you'd have to suppress high frequency fluctuations in the constants in the very early universe, because otherwise you'd have to suppress gross effects like different gas and dust chemistry when comparing one galaxy cluster to another.

And we are looking: https://cen.acs.org/physical-chemistry/astrochemistry/Scient...

(The cosmic inflation epoch predates the "freezing-out" of some of the physical constants, so my thinking is that during inflation there must be some precursor constant(s) that determine(s) the mass of the electron (for example) once there are electrons after the electroweak epoch. Even after inflation the ordinary expansion of the universe can stretch fluctuations enough that (assuming your idea) there is likely to be a directional dependence on precision extragalactic astronomy.)


The idea is they're fixed/set by the overall size of the galaxy.


What's meant by "the wavelengths are equal"? (And have we measured comparable wavelengths in other galaxies?)


The wavelengths of physical processes are equal. If fundamental constants changed, we'd expect, say, the Lyman series to change too.


Yes, we've measured comparable wavelengths. It's one way we can measure the red shift. Not just (red shifted) absolute wavelengths, but the relative spacing between them are quite sensitive to physical constants. These spectra can also be used for identifying the elemental composition of stars.


> What's meant by "the wavelengths are equal"?

Absorption lines of the elements in the stars whose starlight we observe. They are the same after correction for redshift.


Presumably they mean that the propagating EM radiation we observe appears to behave the same on Earth as it does coming from distant galaxies, even though the events that created it happened at a time much different from ours and in a distant region of space.


I mean, technically the EM radiation we observe from distant galaxies does look different than the EM radiation we observe locally: it's red-shifted.

I'm sure someone has proposed this is due to physical constants changing over time, rather than the expansion of space-time, and I'm sure someone else has explained why this is wrong.


Not necessarily. We have redshift and we use that to measure distance (in space and time). If fundamental constants were different in the past that might merely change only what distances we measure.


That would probably require quite a coincidence, since the redshift and the spacing between wavelengths both depend on the same constants but in different ways.


One thing I have been arguing for a long time is that the fundamental constants are different until we observe them. i.e. if we don't observe it, it's possible for a tennis ball to travel through a wall. But in the universal program, if we will now or later observe the result, then it won't happen. But it'll happen so long as we will never observe the result. In fact, it's probably happened many times.

No one has proven that this is impossible, AFAIK.


What does "impossible" mean to you if not that a thing and its consequences can never be observed?


Impossible means it does not happen, not that it does not happen only when we look. Just because we can't see it doesn't mean that it doesn't happen. After all, as the comment I replied to pointed out, other galaxies can have different constants. We have to be humble and admit we just don't know.


The problem with these types of arguments is rigorously defining “we” and “look”.

Turns out that our gaze has no effect on anything and we’re uninteresting squishy bags of mostly water as far as physical processes are concerned.


Yeah, but no one has proven that this is impossible so it's still possible. Just like OP comment.



This seems like a distinction without a difference, since we can never positively categorize any unobserved phenomenon as impossible (vs merely unobservable). To me, it seems ontologically cleaner to treat existence and observability as the same thing. shrug


Okay, fine, I'll come clean, I was just making an unfalsifiability joke. The original god-of-the-gapsy comment was the one that got me. Always just out of reach of our verifiability is the magic. Why not all the way out?


Whelp, looks like I'm today's Poe's Law poster child. ;)


By "observe" don't they mean the act of any photon "hitting/interacting with" the system, collapsing it into a known/predictable state?

Not specifically an "intelligent" observer per se.


How can you even prove a negative?


Logical mangling time: If you can't prove a negative, how can you prove that you can't?


If fundamental constants could change, this would violate energy conservation, and the second law of thermodynamics. Someone once said, if your pet theory violates the second law, there is no hope. Or am I missing something?


Energy conservation isn't as sacred as many people (including me) assume. See for example https://www.preposterousuniverse.com/blog/2010/02/22/energy-...


And in fact, energy is not conserved (and cannot even be defined) globally in General Relativity. There is a different conservation law, called the conservation of stress-energy.


Conservation of energy is the first law. I don't suppose anyone has any doubts about the second law?


The second law is not a law in the same way like the law of gravity is, it’s more a statistical statement. It simply states that more probable things will happen more often. How do we know what’s more probable? It’s what happens more often. It’s only inviolable insofar as we presume we know all the laws of nature.

Also, the second law is only applicable to closed systems. The universe may not be a closed system in the way we normally think of it.


The second law may be in a way we must evolve to conceive it, or may be in a way that we may never conceive it, or we are acting on ideas that are as distant as friction creating fire. We crawled, then we walked, then we ran, rode, motored, flew, rocketed, got stuck in orbit...

My college physics professor once said, "if in order to make progress we must leave reality, by all means let's leave reality." He also pointed to three red volumes on his shelf, and said those may interest you, and they did. (Richard Feynman)


Thermodynamics by definition only studies equilibrium processes. Applying thermodynamics laws too broadly is a common misconception, even among those who study physics at university, because not many people get far enough to study physical kinetics (like Landau vol 10).


Violating energy conservation (the first law of thermodynamics) does not inherently violate the second law of thermodynamics. It's not hard to imagine a situation where the energy of a closed system changes without its total entropy decreasing, for example if the energy of the closed system decreased.


My best guess at this moment is that all the fields can or may influence each other, resulting in relative changes.

Some things may seem incredibly constant, but have to be measured in such a ridiculously small or big (time) frame that they're barely measurable at all.


It’s still something of an open question whether or not G is actually constant.

Not only that, but the results differ depending on whether atomic or dynamical time is used! In the latter case no change is measured using lunar reflectors.


Remind me what are the dimensions of G?


Possibly a dumb question: How do you determine the accuracy of the most precise clock? You don’t have anything more accurate to measure it against, right?


I think you might mean the one _electron_ conjecture. It’s fun because you have anti-electrons whose Feynman diagrams look like electrons going backwards in time. So you could conceivably be observing the tangled world line of a single electron bouncing back and forward in time — sometimes observing it as an antielectron.

Doesn’t work with photons because there’s not an anti-photon.

Anyway it’s sort of a fun “woah!” moment that Feynman was so good at producing, but I don’t think it’s taken particularly seriously as a theory.


Positrons don't merely look like time-reversed electrons, and it's not limited to Feynman diagrams. Everything we know about those particles, experimentally and in our best theory, says that they literally are identical but for a minus sign on the time variable.

And it does work for photons because there is an anti-photon: the photon itself. The particle is symmetric under time reversal.


Of course you’re right about photons!


The version of that story I remember is that John Wheeler said to Feynman that the reason all electrons are alike is that there's only one electron, which we perceive as a positron when it's going backwards in time. Feynman instantly refuted the idea by pointing out that there are more electrons than positrons.


Yes I think I probably saw/read Feynman retelling the story.

And yes, where’s all the antimatter, right!?


If the laws of physics can drift over time, might that explain the Big Bang?


I don't think so. There was no time before the Big Bang, so it's not like the laws of physics have anywhere to drift from such that they're in a bang-causing configuration at t=0.


I think that’s an overly strong statement. There’s a theory that the Big Bang followed a Big Crunch from a “previous” universe [1]. Or our universe is a black hole within another, higher-dimensional universe, since the edge of our universe looks a lot like what we would expect an event horizon to look like from the inside [2]

It’s correct to say that the time of our universe begins at the Big Bang, at least as far as we can measure it in any way and according to the currently dominant theories, but there are ways that it would make sense to talk about a time before the Big Bang and what caused it to happen.

[1] https://www.universetoday.com/38195/oscillating-universe-the...

[2] https://www.discovery.com/science/Universe-Inside-Every-Blac...


Big Crunch/Cyclic Universe theories are generally considered to be improbable based on our current understanding of the universe.

That there is no time before the big bang (possibly with some qualifiers to define the big bang, start of the universe, etc.) is the overwhelmingly prevailing view of modern cosmologists, from how I understand things.


I think it would be clearer if such theories were described as claiming that a certain bang wasn't actually the big one.

I suppose these are equivalent, but one feels like a historical distinction while the other feels like a thermodynamic one and I think it's thermodynamics that contrasts the theories better.


> There’s a theory that the Big Bang followed a Big Crunch

Does that theory come with a testable hypothesis?


Most unfortunately, there is the sawtooth cosmogony put forth by Joe Haldeman: if we harness the entire power of Jupiter converted to energy and recreate the big bang, it's game over here, and everything restarts.

The test, of course, trivializes it: destroy our universe, and then we know whether there is a steady state, a forever expanding universe, a sawtooth, or a whimper (see Dr. Fred Hoyle). Or, as our brains grow freely in expanded capacity, something so radically beyond our current comprehension that it will leave us in a pseudo-comatose, slack-jawed state for a very long time.

"A watchmaker, without ever opening a watch, may form some very ingenious ideas about how a watch works, but without opening the watch, he may never know the truth." -Einstein

The big bang came with the presence of background radiation, in a non-uniform way pointing to the area of the Hercules constellation. The crunch could come with a slowing of expansion or a change in the constants that hold our current paradigm together.

This work is seriously cutting edge.


Yes and it’s likely to be falsified. But us living within a singularity I believe is a consequence of string theory if I recall correctly.


To flesh out my thought, I'm thinking something must have changed to make the universe go from a previous stable state to the BANG state.

A weakening of some force keeping things together seems as likely as anything to me.


Many argue there is no "before" the bang state. Time and space might well have started with the big bang. There would be an absolute zero point in time, and nothing could be before that just like nothing can be colder than 0K.

For a while there was also the theory that the universe is cyclical: it eventually collapses and from that compressed state a new big bang is born. That seems very unlikely with what we know right now though.

Then there are various forms of the multiverse theory where universes are kind of spontaneously created in a continuous process. Each universe experiences a big bang in the moment it is created, so talking about "before the big bang" only makes sense outside the universe.

But I don't think anything rules out a universe lying dormant and then something triggering the big bang either. Changing fundamental constants might well be that something. They don't even have to change continuously or frequently for this to work.

As you might have guessed, testing any of these is really difficult. Not necessarily impossible, but really, really difficult.


That sounds like a fine idea to me. I'm just trying to point out that if it's true, then that bang wasn't the big one.

We can have several very large bangs, but there can be only one Big Bang™, and nothing comes before it. This is for the same reason that Harry Potter is a wizard, it's not about evidence, it's just defined that way.


Cosmic inflation is a large difficulty for the big bang, as there is no mechanism explaining its process.


Maybe Boards of Canada was right, and constants are changing.


Seems like a case of premature naming to me! If we have to test whether or not they change, they shouldn't already be called "constants".


They are definitely used as constants. A static agreed-upon number is assigned to a CONST and used in calculations.


If it does change, for whatever reason, like, what does it actually mean?

Someone big brain explain to me why this is a big deal.


It basically invalidates modern science in the same way Einstein invalidated Newtonian physics. It would mean we have a pretty good approximation of how things work, but we are fundamentally wrong. So it would be an exciting time to be a physicist, as it would force us to rethink how things really are, from atoms to stars and the beginning of the universe.


It does not invalidate science. The scientific method is the process by which we build gradually towards a clearer picture of the ground truth if there is one. Even if by “science” you just meant our current understanding of the universe as opposed to the method we gain that understanding, then this would not invalidate that, it only invalidates a small part.

Yes, we are fundamentally wrong, I would hope that all physicists recognise that we don’t have a perfect explanation for how things work yet, this would be just another step in that process, but an exciting one indeed.


Thanks for pointing it out. I meant physics, not science :)


Doesn't the fact that both General Relativity and Quantum Mechanics don't make correct predictions at all scales already show we're fundamentally wrong?


Maybe. Maybe not. There might not be a Universal Theory of Everything. Everyone hopes there's something and most scientists do believe that something is out there, but the idea that there might be reasons we can't unify them or that there are physical limits that prevent us from gathering the information we need to fully suss things out isn't exactly fringe science.


this is mind blowing to see


"When you absolutely, totally, fundamentally, have to, fundamentally be sure" :)


They probably do change, but extremely slowly. It would feel strange if there were something fixed in the universe.


The fossil reactor at Oklo https://apod.nasa.gov/apod/ap100912.html and https://en.wikipedia.org/wiki/Natural_nuclear_fission_reacto... can be used for that question.

From Wikipedia:

    The natural reactor of Oklo has been used to check if the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, ¹⁴⁹Sm captures a neutron to become ¹⁵⁰Sm, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago.

    Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies that α was the same too.


Is there a good explanation of how that isn’t just measuring the expansion and contraction of a ruler with itself? Don’t we know the reactor is 2 billion years old because of radio dating?


No, those are separate processes.

The isotopes produced during the natural nuclear reactor 2 billion years ago were produced in certain ratios because of the relative sizes of their nuclear cross sections, which depend on the fine structure constant.

The isotopes used in radio dating are produced by spontaneous transmutation over time, which is governed by entirely different processes.


No, because you’re comparing the various proportions, it’s like comparing the contraction of various rulers made from different woods.


Well, it's dated against pulsars and stars. But those sources of information have a bit of an error bar on time-space distance.

Which is why a synthetic clock is needed here. That will have a known inception date and the changes if any can be compared.

The problem with both is they're not exactly fully closed systems anyway, so there will be some margin of error even with the length of the operation.

And during the test, we might just find out something completely unaccounted for in current physics... that isn't related to a universal constant at all.


It would be somewhat hard to tell if there's circularity somewhere, but you should be able to date it somewhat with the quantity of oxygen in the atmosphere at various times and general geological processes.


Why would it be "strange"? What reference can we possibly use to compare?

This sort of thing tends to be so far from "common sense" it probably doesn't make sense to try to reason about it from that perspective.


It's possible to measure the ratios of the constants, like mass_of_proton/mass_of_electron. Another is the fine structure constant, which is related to the charge of the electron (divided by a lot of other constants to cancel the units). Both of them are related to the spectral lines of the light emitted and absorbed by atoms, so if they changed, the "color" of other galaxies should have changed a little. https://en.wikipedia.org/wiki/Dimensionless_physical_constan...
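To put numbers on this, here is a minimal Python sketch; the constant values are CODATA 2018 figures typed in by hand, so treat the digits as illustrative:

```python
import math

# CODATA 2018 values (typed in by hand; the first three are exact by SI definition)
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 2.99792458e8       # speed of light in vacuum, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
m_p  = 1.67262192369e-27  # proton mass, kg

# Two dimensionless numbers: if either drifted, atomic spectra would shift.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)  # fine-structure constant
mass_ratio = m_p / m_e                          # proton/electron mass ratio

print(f"alpha ~ {alpha:.10f} (1/alpha ~ {1/alpha:.3f})")
print(f"m_p/m_e ~ {mass_ratio:.2f}")
```

The point is that both results are pure numbers, with no units left over in which a change could hide.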


I know nothing about this: what if the color did change, to be slightly redder?


I don't remember the details, but I took a look at Wikipedia to remember some details.

The redshift of far away galaxies is calculated multiplying the frequencies by 1+z, so you get the same displacement if you draw the spectrum in the correct logarithmic scale https://en.wikipedia.org/wiki/Redshift

The emission/absorption lines of Hydrogen are calculated using the Rydberg constant https://en.wikipedia.org/wiki/Rydberg_constant which uses the mass of the electron. But it's not the actual mass of an isolated electron; you must use the "reduced mass", because the electron moves around the proton but the proton also moves a little [1]. So the reduced mass of the electron is

reduced_mass_electron = 1 / ( 1 / actual_mass_electron + 1 / actual_mass_proton )

The proton is much heavier [2], so the difference between the reduced and the actual mass of the electron is only about .05%.

If you magically change the mass of the electron to be equal to the mass of the proton, then the reduced mass of the electron would be 1/(1/1+1/1)=1/2 of the actual mass of the magical electron, and all the lines in the spectrum of Hydrogen will change.

But the spectrum astronomers get includes other elements and isotopes. For example Deuterium, which has the same charge but double the mass. In the real world, the reduced mass of the electron in Deuterium is also almost equal to the actual mass of the electron; the difference is about .03%.

In the magical world where the mass of the electron is equal to the mass of the proton, the reduced mass of the electron would be 1/(1/1+1/2)=1/3 of the actual mass of the magical electron, and all the lines in the spectrum of Deuterium will change, but in a different way.

I think it's possible to see all these lines in a spectrum, but to be sure ask an expert. Anyway, a magical change making the mass of protons and electrons equal would change the spectrum of all the other atoms in different and strange and weird and unexpected ways. [3] So it would be easy to spot. Smaller changes would change things and be visible too, unless they are too small.

[1] If you want to go down the Quantum Mechanics rabbit hole, they don't move and you must use orbitals. But the correction is the same as in the fake/simplified classical calculation.

[2] * massive

[3] A heavy [2] electron is very similar to a muon, and in Hydrogen molecules with muons both Hydrogen atoms are very close, so you get fusion reactions https://en.wikipedia.org/wiki/Muon-catalyzed_fusion so the magical world would be very strange.
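A small Python sketch of the reduced-mass arithmetic in this comment (the mass ratio is the rounded CODATA value; note it gives 2/3, not 1/3, for the magical Deuterium case, matching the correction posted below):

```python
def reduced_mass(m1, m2):
    # Reduced mass of a two-body system: 1 / (1/m1 + 1/m2)
    return 1.0 / (1.0 / m1 + 1.0 / m2)

# Masses in units of the electron mass; the deuteron is ~2x the proton.
m_e, m_p = 1.0, 1836.15
m_d = 2 * m_p

# Real world: the reduced mass barely differs from the electron mass,
# and by about half as much for Deuterium as for Hydrogen.
print(1 - reduced_mass(m_e, m_p) / m_e)  # ~0.00054 (Hydrogen, ~.05%)
print(1 - reduced_mass(m_e, m_d) / m_e)  # ~0.00027 (Deuterium, ~.03%)

# Magical world where the electron mass equals the proton mass:
print(reduced_mass(1.0, 1.0))  # 0.5      (Hydrogen lines shift by this factor)
print(reduced_mass(1.0, 2.0))  # 2/3      (Deuterium shifts by a different factor)
```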


Toooo late to edit. Fixing it for posterity:

The correct number is

1/(1/1+1/2)=2/3

I incorrectly wrote 1/3 instead.


Then we (and Hubble) may overestimate the speed of the expansion of the universe, and the relative speeds of the galaxies whose redshift we have measured, based on less of a redshift. It is certainly possible.


If nothing remains constant then there's no identifying feature to point at and conclude that my experience yesterday and my experience today occurred in the same universe. Surely that feels even weirder than letting there be something that can be used as primary key for universe identification.


this is a bit tangential, but I once had a physics professor describe light waves as standing still and everything else is just moving around it.


It's kind of silly to take the perspective of light, because it doesn't experience time (obviously, but you know what I mean). Maybe there will be new physics on that like there was with neutrinos, but it can't be too much of an effect.


> it can't be too much of an effect.

That is the problem with any argument for some new physics - it might exist, but it can't have much effect or we would detect it. Generally I only see people arguing for new physics because they really want faster than light travel (typically also without all the weird time effects, but a small minority would accept it with time effects)


Also many people want to find libertarian free will somewhere in new physics.


In case anyone else is curious about this fact: it has to do with time dilation. As your velocity through space approaches c, your velocity through time approaches zero.

Since photons move at c, they experience zero time between creation and destruction.
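That relationship is just the Lorentz factor; a minimal numeric sketch (special relativity only — the v = c case is a limit, since photons have no rest frame):

```python
import math

def proper_time_rate(v_over_c):
    # dtau/dt = sqrt(1 - (v/c)^2): proper seconds elapsed per coordinate second
    return math.sqrt(1.0 - v_over_c ** 2)

for v in (0.0, 0.5, 0.9, 0.99, 0.999999):
    print(f"v = {v}c -> dtau/dt = {proper_time_rate(v):.6f}")
# The rate approaches 0 as v -> c: in the limit, no proper time elapses at all.
```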


This implies a paradox: what if a photon is emitted such that it does not ever get destroyed? Fired into a void of deep space?


From what location do you think the photons from the cosmic microwave background were fired?


> your velocity through time approaches zero

Proper time, τ. But your proper time and my proper time are different, and are only coordinate times, monotonic labels on timelike curves. That means τ is an affine time.

There are other affine times available. And we have to choose an affine time other than τ for null geodesics, the curves which photons (in vacuum) travel along. So, instead of proper time, for photons there's the affine parameter.

The same rule applies: there is nothing special about the affine parameter. I don't have to use a photon's affine parameter when describing physics any more than I have to use the proper time of an ultrarelativistic electron. And I can convert physics done in one splitting of spacetime into space+time to a different splitting of the same spacetime into space'+time'. The coincidences in the spacetime are unchanged by the switch of how we split -- splitting is just a change of coordinates.

If we divide up spacetime into space+time where the choice of time axis along which to order spatial volumes is my proper time, the emission of a photon in the photosphere of the sun and its subsequent destruction in my eyeball happen in a definite temporal order: there is a time delay. There is still a temporal order if we divide up space+time using your proper time, or that of an ultrarelativistic electron.

There are some special technical aspects of dividing up spacetime into space+time using a photon's affine parameter as the time axis, but it's certainly doable. A photon shares a different 3d spatial volume with many other things at each affine parameter labelled point on the photon's curve. Phases of orbits, stages of an elastic collision, and so forth are among the things "snapshotted" into each 3-d spatial volume the photon finds itself in. Those evolve from one 3-d spatial volume to the next to the next.

So returning to the previous picture: if we split up space+time according to the affine parameter of a photon, there will be some 3-d volumes at which the solar photon is much closer to the sun than to my eyeball, and some 3-d volumes where it is much closer to my eyeball than to the sun. The same is true for the solar photon itself: some affine parameter values put it closer to the sun than to my eyeball, and at some affine parameter the photon's journey ends.

If a photon's trajectory is through curved spacetime, there will be a difference in momentum between two points on the trajectory (we are not restricted to the photon's creation or its destruction), which we can calculate using the affine parameter. The physical interpretation is that the photon undergoes a redshift or blueshift between two points in spacetime.

Note that I did not take the lim v->c approach you did in your first paragraph because there are geometrical differences in a Lorentzian spacetime between a null geodesic and a timelike one, even if the timelike geodesic is associated with a speed arbitrarily close to c. The photon is almost always on a null geodesic. A non-massless observer will never be on a null geodesic.

> Since photons move at c

Photons can be made to move slower than c, in which case they are not on null geodesics, and therefore proper time might be suitable for them.

Photons moving at c must be on null geodesics.

Proper time -- one particular affine time -- is undefined on null geodesics. However, they experience a different affine time. One can interconvert, so it does not make sense to say that photons "experience zero time".

Lastly, if the totality of the photon's curve through spacetime is on a null geodesic, the photon won't be able to experience much of the universe evolving around it as it flies away from its creation. However, segments of a photon's curve can be other than null geodesic motion as (for example) they cross through wispy gas clouds or strong magnetic fields in space. Temporarily slowed light <https://en.wikipedia.org/wiki/Slow_light> can in principle receive news of the world. This could have happened for a photon emitted billions of years ago from a high-redshift quasar en route to the JWST.

Extra reading:

https://en.wikipedia.org/wiki/Initial_value_formulation_(gen...

Blau, Frank, Weiss 2006 (section 3 "Brinkmann coordinates are Fermi Coordinates:", 4 "Null Fermi coordinates, general construction", 5 "Expansion of the metric in null Fermi coordinates") arxiv version https://arxiv.org/abs/hep-th/0603109 (link to publication in Class.Quant.Grav. is on the abstract page).

"What is the physical meaning of the affine parameter for null geodesic?" https://physics.stackexchange.com/questions/17509/what-is-th...


Makes sense really. If velocity is the derivative of position with respect to time and photons don’t experience time how would they have velocity?

It reminds me of my silly One Photon Conjecture. That is, there’s only one photon that pops in and out of space as required by coupling events. Since it doesn’t experience time, saying it can’t be in two or more places at the same time isn’t meaningful!


"There are only two bits in the universe. 1b0 and 1b1. All instances of 0 and 1 are merely the same bits, traveling forward and backward through time."


well no, photons move at the speed limit of causality in this universe

they actually arrive slightly later than neutrinos to observers on earth because neutrinos just plow through virtually anything including stars and planets while photons have to travel the path affected by gravity

photons aren't affected by gravity directly because massless but their path, their limit of causality, is affected


Even if it had a rest frame, Schrödinger is a pain.

An object at full rest is according to its wave/path equation literally everywhere at all times.

However superconductivity has a bunch of truck sized holes for this. Specifically we don't quite understand Bose-Einstein condensate completely. Funky entities like time crystals appear in the mathematics, etc.


> neutrinos just plow through virtually anything ... while photons have to travel the path affected by gravity

why would you think that neutrinos can magically ignore the curvature of spacetime? completely wrong.


Neutrinos are known to have mass thanks to flavour oscillation. In empty space neutrinos will always be outraced by photons.

What you seem to be trying to remember is that certain types of extragalactic supernovae produce a tremendous number of neutrinos, and those can be detected on Earth before the associated light. The reason is that both are produced deep within the dying star, and while the star's outer layers are largely transparent to the neutrinos (not completely: it's neutrino pressure that makes the star explode[1]), the deep-in-the-star supernova photons bounce around inside. That's very much not empty space.

https://www.astronomy.com/science/in-a-supernova-why-do-we-d...

We can detect type SN1a supernovae out to about z=4 (a redshift of about four corresponds to roughly twelve billion years of light travel time from the supernova to us). That's not really enough for the delayed pulse of light to catch up to the neutrinos also produced in the dying star's interior at the same time. Also, not all of the emitted photons are likely to scatter off gas and dust in any interstellar medium along the way, so the relative delay at Earth of the bright electromagnetic flash is dominated by the dying star's outer layers.

(There are more complicated electromagnetic signals like light echoes <https://en.wikipedia.org/wiki/Light_echo> that can follow on much later; there aren't really any neutrino equivalents.)

> photons aren't affected by gravity directly because massless but their path, their limit of causality, is affected

Not sure what you are trying to say here, but photons certainly both feel and source spacetime curvature. In empty space, photons always travel along null geodesics. The distribution of all matter, energy and momentum, the expanding background of inter-galaxy-cluster space, and the collapsing background of galaxy clusters (and galaxies, and components of galaxies like their central black holes and stars) picks out what geodesics fill spacetime. Some are null, and massless things can find themselves on ("couple to") them. Some are timelike, and massive things can find themselves on them.

Geodesics are free-fall trajectories, so are inertial as in "things in motion tend to stay in motion", and barring any further accelerations a photon coupled to a null geodesic will stay on that null geodesic and a neutrino coupled to a timelike geodesic will stay on that timelike geodesic.

Some neutrinos and photons start on parallel geodesics within the supernova's exploding core. Each neutrino stays coupled to its timelike geodesic all the way to detection on Earth. The photons are all forced off their initial null geodesic by mostly scattering off nuclear matter near the star's core, find themselves on a second null geodesic, more nuclear scattering, possibly some scattering off ions in outer layers, and each ultimately might end up on a geodesic that it stays on until it reaches Earth.

- --

[1] on neutrinos driving the explosions and how a lot of them stick around as they are captured into heavier chemical elements and isotopes https://www.mpg.de/11368641/neutrinos-supernovae


Brilliant. Your professor for saying that, and you for recognizing its significance.


Like the Planet Express ship? Sounds like professor Farnsworth.


The most recent Kurzgesagt video (on time travel) https://youtu.be/dBxxi5XAm3U had this passage:

> To explain how this actually works without making a math video, we have to make a lot of physicists grumpy, so please keep in mind that we are simplifying and lying a bit.

And that simplification / lie is that everything moves at the speed of light in spacetime. We are moving at basically 0 in the space coordinates and 1s/s in the time dimension (which is "light speed" in the time dimension). However... (1:45 in the video)

> Photons, light particles, move at the speed of light through space. They don’t experience any time passing because their speed in that time dimension is 0. In the time dimension they are frozen in place. If you see light on earth, from the photon’s perspective it was just on the surface of the sun and then suddenly crashed into your eye with nothing happening in between.

... and this falls into the Lie-to-children domain. https://en.wikipedia.org/wiki/Lie-to-children#Examples_in_ed...


Yeah, isn’t it a simplification of the idea that an object at rest has a four-velocity where U^0 = c (so a velocity of c entirely in the time direction), but a photon doesn’t have a rest frame to do this calculation?
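Yes — that's the usual reading. A standard-textbook sketch of the normalization being simplified (−+++ signature; none of this is specific to the video):

```latex
% Four-velocity of a massive particle, parametrized by proper time \tau:
u^\mu = \frac{dx^\mu}{d\tau} = \gamma\,(c, \vec v),
\qquad
u^\mu u_\mu = -\gamma^2 c^2 + \gamma^2 v^2 = -c^2 ,
% so every massive object "moves at c" through spacetime; at rest,
% u^\mu = (c, \vec 0), i.e. all of it is in the time direction.
% A photon has d\tau = 0, so u^\mu is undefined; its wave vector instead
% satisfies the null condition k^\mu k_\mu = 0.
```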


If they changed in a way to have meaningful impacts on how astronomical bodies operate we should be able to observe the change as some of the oldest light we observe is billions of years older than the newest light.

In fact, based on this we can tell that the speed of light, a fundamental constant, has not changed, which I agree is very strange.


It comes down to what time is, i.e. what was before the Big Bang? If time didn’t exist before the big bang, then the speed of light emerged after the big bang, and as such “changed”.


Either there are some universal constants, or everything constantly changes.


Could be both. Some things determined by mathematical constraints will always be followed. E.g., things like group theory and statistics will always be followed by any object subject to them, but how that manifests changes if the objects those rules act upon change in form.


Most so-called fundamental constants appear in the relationships between physical quantities only as a consequence of choosing arbitrary units.

It is possible to eliminate almost all fundamental constants by choosing so-called natural units for the base physical quantities, for instance the elementary charge as the unit of electric charge.

For all fundamental constants that can be eliminated by choosing natural units, it makes no sense to discuss changes in them.

Nevertheless, even when a natural system of units is used, there remain 2 fundamental constants (plus a few other fundamental constants that are used only in certain parts of quantum field theory).

The 2 important fundamental constants that cannot be eliminated are the Newtonian constant of gravitation, which is a measure of the intensity of the gravitational interaction, and a second fundamental constant that is a measure of the intensity of the electromagnetic interaction, which is frequently expressed as the so-called fine-structure constant.

The meaning of the fine-structure constant is that it is the ratio between the speed of a particle with unit charge, like an electron, orbiting another, much heavier particle with unit charge, like a nucleus, in the state with the lowest possible energy (i.e. like the ground state of a hydrogen atom, but where the nucleus would have infinite mass) and the speed of light in vacuum. The speed of the orbiting particle is a measure of the strength of the electromagnetic interaction between two elementary charges.
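In the Bohr model that ratio of the ground-state electron speed to c comes out as the usual ~1/137; a minimal numerical check, with CODATA 2018 values hard-coded as assumptions:

```python
import math

# CODATA 2018 values (e, hbar, c are exact by definition in the SI).
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light in vacuum, m/s

# Fine-structure constant from its defining combination.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

# Bohr-model speed of the electron in the hydrogen ground state
# (infinite nuclear mass): v1 = e^2 / (4*pi*eps0*hbar).
v1 = e**2 / (4 * math.pi * eps0 * hbar)

print(alpha)    # ~ 7.297e-3, i.e. ~ 1/137
print(v1 / c)   # the same number: alpha = v1 / c in this model
```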

So the only fundamental constants for which there could be an evolution in time are those that characterize the strengths of the electromagnetic interaction and of the gravitational interaction (and also the fundamental constants that characterize the strengths of the nuclear strong and weak interactions).

The values of these fundamental constants that characterize the strengths of the different kinds of interactions determine the structure of the Universe, where the quarks are bound into nucleons, the nucleons are bound into nuclei, the nuclei are bound into atoms, the atoms are bound into molecules, the molecules are bound into solid or fluid bodies, which are bound by gravitation into big celestial bodies, then into stellar systems, then into galaxies, then into groups of galaxies.

Any changes in the strengths of the fundamental interactions would lead to dramatic changes in the structure of matter, which are not seen even in the distant galaxies.

So any changes in time of the true fundamental constants are very unlikely, while changes in the constants that appear as a consequence of choosing arbitrary units are not possible (because such fundamental constants are fixed by conventions, e.g. by saying that the speed of light in vacuum is 299,792,458 m/s).


In natural units, the Newton gravitational constant can be set to 1 as well.

You do still need a term to characterize the strength of gravity. They sometimes use η, which can be defined in terms of G, c, Planck's constant, and a fundamental mass like the electron. The result is a truly fundamental unitless constant.
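That dimensionless gravitational coupling can be sketched numerically (CODATA values assumed; the symbol and construction follow the G, c, ħ, electron-mass definition above):

```python
G    = 6.67430e-11        # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c    = 299792458.0        # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg

# Dimensionless gravitational coupling for two electrons, the
# gravitational analogue of the fine-structure constant.
alpha_G = G * m_e**2 / (hbar * c)

print(alpha_G)   # ~ 1.75e-45: gravity is absurdly weak at particle scales
```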

The Standard Model has a dozen or so other fundamental constants, describing various mixing angles and fundamental masses (as ratios).


Nope. While the Newtonian gravitational constant can in theory be set to 1, this cannot be done in practice.

The so-called Planck system of units where Newton's constant is set to 1 is an interesting mathematical curiosity, because in it all the physical quantities become dimensionless.

Nevertheless, when Newton's constant is set to 1, the number of fundamental constants is not reduced, but another constant that was 1 in other systems of natural units becomes a fundamental constant that must be measured experimentally, for instance the elementary charge.

Besides bringing no advantage (the number of fundamental constants in non-nuclear physics remains 2), the system where Newton's constant is set to 1 cannot be used in practice.

The reason is that the experimental measurement of Newton's constant has huge uncertainties. If its value is forced to be the exact "1", then those uncertainties are transferred to the absolute values of all other physical quantities. In such a system of units the only values that would be known precisely would be the ratios of two quantities of the same kind, e.g. the ratios of 2 lengths or of 2 masses. Any absolute value, such as the value of a length or the value of a mass, would be affected by huge uncertainties.
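The scale of the problem can be sketched with assumed CODATA-style numbers: G's relative uncertainty is about 2.2e-5, while frequency-anchored units reach roughly 1e-18, so anchoring units on G = 1 would cost around 13 orders of magnitude of precision:

```python
import math

G    = 6.67430e-11       # Newtonian constant of gravitation, m^3 kg^-1 s^-2
u_G  = 2.2e-5            # relative standard uncertainty of G (CODATA-like)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s

# The Planck mass is the unit of mass if G, hbar and c are all set to 1.
m_P = math.sqrt(hbar * c / G)

# Square-root error propagation: m_P inherits half of G's relative
# uncertainty, versus ~1e-18 for a cesium-frequency-anchored unit.
u_mP = u_G / 2

print(m_P)    # ~ 2.18e-8 kg
print(u_mP)   # ~ 1.1e-5
```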

So the use of such a system of units is completely impossible, even if it is mentioned from time to time by naive people who know nothing about metrology. The choice of units for the physical quantities cannot be completely arbitrary, only units that ensure very low uncertainties for the experimental measurements are eligible.

Currently and in the foreseeable future, that means that one of the units that are chosen must be a frequency. For now that is the frequency corresponding to a transition in the spectrum of the cesium atom, which is likely to be changed in a few years to a frequency in the visible range or perhaps in the ultraviolet range. In a more distant future it might be changed to a frequency in a nuclear spectrum, like this frequency that has just been measured for Th229, if it would become possible to make better nuclear clocks than the current optical atomic clocks, which use either trapped ions or lattices of neutral atoms.

Some of the parameters of the "standard model" are fundamental constants associated with the strong and weak interactions. It is debatable whether it makes sense to call the rest of the parameters fundamental constants, since they are specific properties of certain objects, i.e. leptons and quarks.


What about the constants that describe the (relative) rest masses of elementary particles? Since we don’t know the order of magnitude of neutrino masses, it seems improbable that even an order of magnitude change of those masses over time would lead to “dramatic changes in the structure of matter.”


The masses of the particles and other specific properties, like magnetic moments, are not fundamental constants.

They are the properties of those particles. There are such properties for leptons, for hadrons, for nuclei, for atoms, for molecules, for chemical substances, for humans and so on.

Any object, either as small as an electron or as big as the Sun is characterized by various numeric properties, such as mass.

The fundamental constants are not specific to any particular object. As I have said, after eliminating the fundamental constants that are determined by conventional choices of the system of units, the only fundamental constants that remain are those that characterize the strength of each fundamental interaction, as expressed in a natural system of units.

Because most objects are composed of smaller subobjects, it should in principle be possible to compute their properties from the properties of their components. Starting from the properties of leptons and quarks, it should be possible to compute the properties of hadrons, nuclei, atoms, molecules and so on.

Unfortunately we do not have any theory that can compute the desired properties with enough precision and in most cases even approximate values are impossible to compute. So almost all properties of particles, nuclei, atoms or molecules must be measured experimentally.

Besides the question whether the fundamental constants can change in time, one can put a separate question whether the properties of leptons and quarks can vary in time.

Some of the properties of leptons and quarks are constrained by symmetry rules, but there remain a few that could vary, for instance the mass ratio between muon and electron. It is likely that a future theory might discover that this mass ratio is not an arbitrary parameter, but the muon is a kind of excited state of the electron, in which case this mass ratio could be computed as a function of the fundamental constants, so the question whether it can vary would be reduced to the question about the variation of the fundamental constants.


I think you are wrong; for example, the Standard Model has 26 (or 25?) fundamental constants that can be made dimensionless, so they do not depend on the choice of units. Also, the masses of fundamental particles are set by the strength of the coupling of each particle's field to the Higgs field, and those coupling strengths are fundamental constants.


Not all the parameters of the standard model can be made dimensionless, and for most others their dimensionlessness is illusory.

When you have 10 absolute masses, you can also represent them by one absolute mass and 9 dimensionless mass ratios. Then you can claim that your model has 9 dimensionless parameters, but this changes nothing, because in any formula that relates different physical quantities you must multiply the mass ratios by an absolute mass.

There are cases when using dimensionless mass ratios does not bring any advantage, but there are also cases when using mass ratios can improve the accuracy. For very small things, like particles, nuclei, atoms or molecules, it is possible to measure the ratio between their mass and the mass of an electron with much higher accuracy than their absolute mass.

So in particle/nuclear/atomic physics it is usually best to express all masses as dimensionless ratios vs. the mass of an electron and to multiply them by the absolute mass of an electron only where necessary. There are methods to measure the absolute mass of an electron with high accuracy, based on measuring the Rydberg constant and the fine-structure constant.
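That route from the Rydberg constant to the electron mass can be sketched numerically: rearranging R_inf = alpha^2 * m_e * c / (2h) gives m_e = 2 * R_inf * h / (c * alpha^2). A minimal check, with CODATA 2018 values assumed:

```python
# CODATA 2018 values (h and c are exact in the SI; R_inf and alpha are
# measured to ~1e-12 and ~1e-10 relative accuracy respectively).
h     = 6.62607015e-34     # Planck constant, J*s
c     = 299792458.0        # speed of light, m/s
R_inf = 10973731.568160    # Rydberg constant, 1/m
alpha = 7.2973525693e-3    # fine-structure constant

# R_inf = alpha^2 * m_e * c / (2h)  =>  m_e = 2 * R_inf * h / (c * alpha^2)
m_e = 2 * R_inf * h / (c * alpha**2)

print(m_e)   # ~ 9.109e-31 kg, the electron mass
```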

Most parameters of the standard model are dimensionless because they are ratios that eventually need to be multiplied with an absolute value for the computation of practical results.

In many research papers about the standard model they get away with using only "dimensionless" parameters because they take advantage of the fact that they cannot continue the computations far enough to obtain practical results, so they use some ad hoc system of units that is not clearly defined and which is disconnected from the remainder of the physics.


> When you have 10 absolute masses, you can also represent them by one absolute mass and 9 dimensionless mass ratios. Then you can claim that your model has 9 dimensionless parameters, but this changes nothing, because in any formula that relates different physical quantities you must multiply the mass ratios by an absolute mass.

What? So, what is the mechanism in nature that produces those 9 ratios? If you haven't got one, you haven't got a theory; you've got to admit that those are experimentally confirmed fundamentals.

So, I interpret what you're saying like this: you have got a pet theory where the mass ratios are calculable from some other, more fundamental assumptions, but (charitably: because the theory is numerically too hard, or you lack compute resources) you don't have the actual means to calculate them.

Otherwise, I scratch my head. If you don't manage to enlighten me, I deem you a crank.


I did not say anything about any mechanism in nature.

This is just a mathematical manipulation of the numeric values. If you have the absolute masses m1, m2 and m3, you can store instead of those 3 numbers other 3 numbers, m1, m2/m1 and m3/m1. This stores the same information, because whenever you need e.g. m3, you can recover it by multiplying m3/m1 with m1.

You can do the same thing with one hundred absolute masses, replacing them with a single absolute mass together with 99 dimensionless mass ratios.
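The bookkeeping described above is trivial to demonstrate; a sketch with illustrative lepton masses (CODATA-like values, assumed here):

```python
# Three absolute masses, in kg.
m_e   = 9.1093837015e-31   # electron
m_mu  = 1.883531627e-28    # muon
m_tau = 3.16754e-27        # tau

# Re-encode as one absolute mass plus two dimensionless ratios.
stored = (m_e, m_mu / m_e, m_tau / m_e)

# Recover the absolute masses whenever a formula needs them:
# multiply each ratio back by the one stored absolute mass.
recovered_mu  = stored[0] * stored[1]
recovered_tau = stored[0] * stored[2]

assert abs(recovered_mu - m_mu) / m_mu < 1e-12
assert abs(recovered_tau - m_tau) / m_tau < 1e-12
```

No information is lost either way; the ratio form is just the representation in which the best-measured quantities appear directly.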

As I have already said, sometimes there are good reasons to do so. E.g. in particle/nuclear/atomic physics you can increase the accuracy of computations if only a single absolute mass is used, the mass of the electron, for which there are accurate measurement methods, while all the other masses are replaced with dimensionless ratios vs. the mass of the electron, because such mass ratios can be measured accurately from motion in combined electric and magnetic fields.

This includes the standard model, where the absolute mass of the electron is necessarily one of the parameters of the model, but all the other masses can be replaced with dimensionless mass ratios, e.g. the mass ratios between the muon mass and the electron mass or between the tauon mass and the electron mass.

Similarly, any parameter used to characterize the strength of the electromagnetic interaction is not dimensionless (in any system of units where the elementary charge is one of the base units, and such systems are better than the alternatives), and one such parameter must be included in the standard model. The parameters that characterize the weak interaction are also not dimensionless, but they can be converted to dimensionless parameters in a similar way with the masses, by using ratios (vs. the electromagnetic interaction) that are incorporated in the mixing angles parameters.

The end result is that the standard model is expressed using a large number of dimensionless parameters together with only a few parameters that have dimensions, because this representation is more convenient. Nevertheless, there are alternative representations where most parameters have dimensions, by rewriting the model equations in different forms, so there is nothing essential about having dimensions or not.



