Summary: the 'Magic Number', i.e. the fine structure constant α, which has long been known to be about 1/137, has now been measured more accurately. From the article:
> α = 1/137.03599920611. (The last two digits are uncertain.)
Implications: this constant underpins much of why the universe is the way it is. A tiny bit higher or lower would result in drastically different universes. Measuring it more precisely allows scientists to make more accurate predictions and hence rule out / refine potential models of our universe.
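For concreteness: α can be built from better-known constants as α = e²/(4πε₀ħc), so a quick sanity check is possible against the CODATA values bundled with scipy (the digits you get back reflect those inputs, not the new measurement itself):

    # Sanity check: alpha = e^2 / (4*pi*eps0*hbar*c), using scipy's CODATA values.
    from scipy.constants import e, epsilon_0, hbar, c, pi

    alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
    print(alpha)      # ~0.0072973525...
    print(1 / alpha)  # ~137.035999...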
Is it really a constant, or can there still be a little variation in the number when measured over astronomical distances, or over large timescales?
We don't know for sure, and we probably never will. It's impossible to prove that something really is constant over all distances and timescales: you can always go a bit further, or wait a bit longer, and find that it isn't as constant as we expected.
However, the experimental data so far agree that it is indeed a constant.
One of the potential benefits of improving the measurement of a "constant" is that you can now uncover fluctuations that were previously buried in noise. Not that I can think of a way of doing it in this case, but perhaps the spectra of distant stars might tell us something.
One of the things cosmologists are always doing is looking for anomalies that might indicate that some "constant" really has changed over time or distance.
> Surprisingly, her new measurement differs from Müller’s 2018 result in the seventh digit, a bigger discrepancy than the margin of error of either measurement.
Apparently there can be differences depending on whether you measure in Paris or Berkeley? Though it's more likely there is some other source of error. I think having the technical ability to make such a precise measurement may one day help us look for fluctuations.
Not sure why you're being downvoted: by definition, if this constant is valid for the universe as it is configured now, then significantly changing the universe would likely result in a different constant. How you might change the universe enough to cause a variation is a bigger question. Concentrating all the stars (mass) in one place might do it. Maybe not.
The flip side of that question is the larger, generally-accepted-as-obvious statement: a constant with value "x" determines, or reflects, how the universe is as of now. The duration of that "now" is likely much longer than the Earth's entire existence.
"stable matter, and therefore life and intelligent beings, could not exist if its value were much different. For instance, were α to change by 4%, stellar fusion would not produce carbon, so that carbon-based life would be impossible. If α were greater than 0.1, stellar fusion would be impossible, and no place in the universe would be warm enough for life as we know it."
I know. But which direction? Higher, lower, or both? If it's 4% in both directions that's quite a coincidence.
And since we're talking about percentages of α, I will emphasize that going from 0.007 to 0.1 is a change of well over a thousand percent. That example is the exact opposite of the anthropic argument that we "could not exist if its value were much different"!
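Spelling out the arithmetic, with α ≈ 0.0073 as the starting point and the 0.1 threshold from the quote as the endpoint:

    # relative change implied by "if alpha were greater than 0.1"
    alpha = 0.0072973525693
    print(f"{(0.1 - alpha) / alpha:.0%}")  # ~1270%, i.e. well over a thousand percent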
I would highly recommend Martin Rees' Just Six Numbers. I've not read the actual book, but the author-narrated audio is a perennial mind-expanding yet strangely soporific bedtime favourite of mine.
> The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons
Well, for example, when it says "Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn't exist."
I might like to read more conjecture as to what would exist if it were 1/138.
> Table 1 of the new paper is an "error budget" listing 16 sources of error and uncertainty that affect the final measurement. These include gravity and the Coriolis force created by Earth's rotation — both painstakingly quantified and compensated for.
On a related note, I highly recommend Sean Carroll's series of videos: The Biggest Ideas in the Universe [0]. The fine structure constant comes up in the 10th video on Interactions [1].
"Instead, the measurements match beautifully, largely ruling out some proposals for new particles" from what I understand this is some kind of Occam's razor argument saying that we don't need anything more in the standard model to explain the measurements but is this only qualitatively reasoning or does it actually gives some quantitative constrains to the possibility of new particles? I mean some particles have been predicted before being discovered (the Higgs and neutrinos I have read but there must be more) in those cases have the physicists actually gone and calculated constraints to the possibilities of that particle existing depending on the measurement of the fine structure constant?
LOL! I first read that as: (1 + 1) * (3 * 7) = 42 =O, guffawing because it follows the sequence of digits of the rational number 1/137 expressing the real fine structure constant.
Sure, ~0x42 (modulo something) may well be a fixed-point representation of 1/137, but the margins of the book I am trying to read are too small for me to check this...
The speed of light is roughly 186,000 miles per second, and roughly 300,000 kilometers per second, and as it turns out, 186,000 is a very different number from 300,000. The beauty of the fine structure constant is that it comes out to the same numerical value regardless of what units you choose for your measurement.
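A rough way to see that: compute α once with SI values (from scipy's CODATA table) and once with Gaussian/CGS values typed in by hand (so treat the CGS digits as approximate):

    # alpha computed in two different unit systems lands on the same pure number.
    from scipy.constants import e, epsilon_0, hbar, c, pi

    alpha_si = e**2 / (4 * pi * epsilon_0 * hbar * c)  # SI: coulombs, F/m, J*s, m/s

    # Gaussian/CGS values, approximate: statcoulombs, erg*s, cm/s
    e_cgs, hbar_cgs, c_cgs = 4.80320e-10, 1.054572e-27, 2.99792458e10
    alpha_cgs = e_cgs**2 / (hbar_cgs * c_cgs)          # no 4*pi*eps0 factor in Gaussian units

    print(alpha_si, alpha_cgs)  # both ~0.0072973..., i.e. ~1/137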
In those subjects they choose different units so that the speed of light is 1 in that unit, but it still has that unit associated with it. So it is not dimensionless.
The reason it is in practice treated as dimensionless is that, in formulas, it is impossible to keep track of the dimensions of an implied factor that is equal to 1. When d = ct becomes d = t, distance and time are effectively being measured in the same units (usually units of time).