Very cool.
Unrelated, but the technology and engineering behind ATLAS is immense. ~98 million data channels streaming out at 2 GHz (at least that is what I recall). Cut down to a manageable level by hundreds of FPGAs in the L1 trigger.
Yes, the FPGAs! ATLAS and the other experiments at CERN have such complex, hard real-time requirements that they're great applications for FPGAs. Right now I'm working on a project to try to deploy graph neural networks for track reconstruction and filtering (on the faster L1 trigger), as well as reconstructing other quantifiable measurements with graph neural networks (I believe this is not on the L1 trigger but still has some real-time element). What's cool is that they have copies of the FPGA boards that are at CERN at Fermilab, which makes remote development easier. Also, a fun fact: the detectors that surround the core are not all perfect concentric circles; they're a big mix of different geometries and detector types, which makes the problem more interesting.
Oh that is really cool!
Way back in my undergrad I worked on a tiny tiny part of the insertable B layer, writing some very bad FPGA firmware to control the T3MAPS test chip (I think this was for Phase 2 upgrades?). FPGAs everywhere in this domain!
And yeah, the detectors are arranged in a really complicated way; I can imagine that's a huge headache.
I recall watching Marco Reps' video[1] where he built the (relatively) high-speed high-precision ADC they designed and published as an open hardware project[2], to be used to accurately monitor current in the magnets.
A small but important piece of a giant machine puzzle.
I love how nearly everything hardware at CERN is published under the open hardware license. I think even the pixel chip designs maybe, but I could be wrong.
Timepix is not public but licensed to commercial vendors as far as I know.
But anyway, I appreciate that they make such detectors (well, a readout chip for them) available; we benefited from that in one of our previous projects (not about particle physics).
The Timepix is a TSMC chip; I don't see how you could make that open source without breaking NDAs with the fab.
Something like the SkyWater PDK would be ideal for open science (provided it could perform as well under radiation, of course).
"For this new analysis, ATLAS physicists revisited its data collected in 2011 at a centre-of-mass energy of 7 TeV (corresponding to 4.6 fb-1, also used in ATLAS’ previous measurement). Researchers employed improved statistical methods and refinements in the treatment of the data, enabling them to reduce the uncertainty of their mass measurement by more than 15%."
Hah, my Bachelor thesis from 2009 explored a measurement channel for the W and Z bosons a bit (qq->WZ->llνl, so two quarks become a W and a Z, which then decay into a total of three charged leptons and a neutrino; "Di-Boson Events with Leptonic Final States"), sadly with simulated data, as the restart after the issues from 2008 was delayed. The simplest way to measure the W and Z masses was to "pattern match" potential events in the data (which were already heavily preprocessed into "pseudo particles" derived from the tracks in the inner detector, the muon detectors and the calorimeters), in my case Z->µµ or Z->ee, try one's best to filter out competing processes with similar signatures (e.g. t tbar production), and then derive information from the histograms of the "transverse mass", which is essentially the effective mass with the momentum component longitudinal to the beam left out (https://en.wikipedia.org/wiki/Transverse_mass).
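For anyone curious, here's a minimal sketch of that transverse-mass quantity for the W case. This is my own toy example with made-up numbers, using the usual massless-lepton approximation, with the neutrino pT taken from the missing transverse energy:

    import math

    def transverse_mass(pt_lep, pt_miss, dphi):
        # m_T^2 = 2 * pT(lepton) * pT(missing) * (1 - cos(delta phi));
        # the longitudinal (beam-axis) momentum never enters.
        return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

    # A 40 GeV lepton back-to-back with 40 GeV missing ET gives m_T ~ 80 GeV,
    # i.e. right at the W-mass endpoint of the distribution.
    print(transverse_mass(40.0, 40.0, math.pi))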
This is ludicrous, but this kind of thing has been going on in science, and in physics in particular, for a long time. Let's call the phenomenon maths side-blinders.
One experiment finds the mass of the W boson to be 80370 +/- 19 MeV, another 80434 +/- 9 MeV. Clearly the two results are incompatible; their ranges don't overlap. Of course, these are statistical ranges. But even at 95% confidence, their difference is about three times the combined uncertainty, so it's not just that they're a bit off. IOW, we can be 100% (not 95%) sure that at least one of them, if not both, is incorrect.
Yet they are boldly reported with those uncertainty ranges, even though, clearly, those ranges cannot both be correct. And then ATLAS doubles down by "applying more statistical analysis" to narrow their uncertainty range!
There shouldn't be work on "improved stats analysis"; there should be more work on finding where the systematic error between the two experiments lies. I truly don't see the point of retreading the same data set to change the value and uncertainty range when clearly there is something wrong with the data, the science, the experiment, or all of the above.
PS: what I'd like to see is the labs saying something along the lines of: given that results A and B are incompatible, statistically there is a (say) 99.99% chance that one or both experiments have a hidden flaw, or that there is a major flaw in the Standard Model.
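For what it's worth, a back-of-the-envelope version of that statement, assuming the two quoted uncertainties are independent and Gaussian (which glosses over correlated systematics):

    import math

    diff  = 80434 - 80370                 # 64 MeV difference between the two results
    sigma = math.hypot(19, 9)             # ~21 MeV combined uncertainty
    z     = diff / sigma                  # ~3.0 sigma tension
    p     = math.erfc(z / math.sqrt(2))   # two-sided p-value, ~0.002
    print(z, p)

So under those assumptions the two results sit roughly three sigma apart.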
I am pretty sure that what you say in your PS is so obvious, at least to anyone who cares what the mass of the W boson is to a precision of ~±2e-4, that it does not need saying. What does need saying is how bad the disagreement is, and when you are at the point where this difference has arisen and cannot quickly be resolved, publishing all the papers seems to be the best way to give the most complete information, available at that time, about the problem.
Once the contradictory Fermilab results were made available, what should the ATLAS group do? Well, one thing would be to double-check their own analysis, and it seems plausible that this is what led to the latest result. Using improved methods might just as well have revealed a problem as end up as it did, supporting the original analysis.
This experiment is way out of my depth (and I don't know if that's what's going on here, <shrug>), but I think what you're describing is an interesting phenomenon, which I'll try to explain.
I think it's important to keep in mind that in these cases certain models are being tested. In good science you do experiments with specific models in mind, including models of your apparatus and machines themselves, and then you compare the results for consistency. You have to be very careful with any additions to your statistical analysis that weren't generated by the model, including the models of the machines (say some kind of 'epistemic uncertainty' or 'procedural uncertainty' or something like that) applied to correct things after the fact, as I believe that potentially invalidates the base models by itself.
For example, say you measure gravity at sea level with one apparatus and report g as 9.9 +/- 0.1. Then you get a second apparatus and measure 9.2 +/- 0.1 (i.e. something went wrong); the difference is significant. Then you realize there must be some error, so you add an 'experimental error parameter' that you can tune and that affects both measurements: you adjust it until the uncertainties are compatible (which is what consistent experiments should give you), and arrive at, say, 9.9 +/- 0.6 and 9.2 +/- 0.6 for the first and second experiments. This new parameter clearly doesn't belong in the model, and there's no model for the parameter itself: there's no explanatory mechanism involved, only a new free parameter. Something you could say honestly is that we know there's experimental error in one or both of the experiments, or that the base models are significantly incorrect. But you can't just take an average of both results and say gravity is 9.55 +/- (..), because the existence of either experimental error or base-model error (established to a few sigma of certainty) invalidates this procedure -- unless you just want a guess for some immediate practical application and the "experimental error" is acceptable.
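A minimal sketch of why the naive average is misleading there, using the made-up numbers from the example above and assuming independent Gaussian errors:

    import math

    vals, errs = [9.9, 9.2], [0.1, 0.1]   # the two hypothetical measurements
    weights  = [1 / e**2 for e in errs]
    mean     = sum(v * w for v, w in zip(vals, weights)) / sum(weights)     # 9.55
    err_mean = 1 / math.sqrt(sum(weights))                                  # ~0.07
    chi2     = sum(((v - mean) / e)**2 for v, e in zip(vals, errs))         # ~24.5 for 1 d.o.f.
    print(mean, err_mean, chi2)
    # The enormous chi^2 says the inputs disagree far beyond their quoted errors,
    # so reporting 9.55 +/- 0.07 would hide a real inconsistency.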
Another common and well-known effect in experiments is knowing the result you want to get and trying different "adjustments", or redoing the analysis, until (subconsciously or not) it yields a result that agrees with previous observations. Feynman described this in his books. I believe some modern experiments shield against this with blind analyses: among other things, not looking at the result until you're sure the experiment and analysis are sound, so you can't fine-tune them to reproduce known results.
What are the current implications of the experiments at CERN and for ATLAS? My current interpretation is that these are massive engineering marvels that are basically just data generators for some physicists to run statistics on. That is a pessimistic view, but I am just not aware of what's all going on there.
Compare this to LIGO, where one could make a similar argument, but LIGO is producing very explicit data that is driving tons of research and discoveries and is providing tandem support for electromagnetic observations.
It's ostensibly "pure science", where we "just" gain more information on how the most fundamental building blocks of reality work.
It's hard to predict what new technology can come of this. For example, who could have predicted the transistor?
I think the information gained is valuable in itself, but way smarter people than me will be looking way harder at it, and suddenly a real-world marketable application pops out. If we could predict it happening, that would be the pop.
Also, don't underestimate the massive amounts of learning being done in engineering just by designing experiments and building/maintaining accelerators. That, alone, might be worth it.
Those are fairly general sentiments that apply to a lot of things, though. However, funding is not infinite, and CERN has received a lot of funding over the years.
It's not so much the world that's stopped caring, but there's been no real advance. The Higgs was expected, and found. All the other theories were refuted by the LHC. No new particles, no super-symmetry, nothing. The field of particle physics is at a dead-end currently.
And even if it isn't a dead end, moving forward requires EVEN LARGER accelerators for higher energies, which are expensive, which requires buy in from the public, and the public DGAF about "understanding particle physics". Also, very little of this possible future advancement could be meaningful to everyday people, as it's astoundingly unlikely for something that "breaks physics" to actually have significant effects, otherwise we should have found it ages ago. Most likely, any new physics from particle physics would likely just add more decimal places of accuracy to existing models.
> moving forward requires EVEN LARGER accelerators for higher energies
Right, I recently saw a presentation by an experimentalist arguing that after HL-LHC[1] (a major upgrade to the LHC), the next sensible size for an LHC-style collider would be one that occupied the Gulf of Mexico. And they meant literally[2]: the proposal was to have submerged segments, using existing technology from the offshore oil industry, spanning the gulf.
It's incredibly depressing. There's no path to fully exploring particle physics without basically a worldwide utopia to fund ludicrously expensive, probably unsuccessful projects. Sure, there's not likely to be anything incredible to come out of further research in that field, but the idea of being unable to learn about our universe because nobody cares and politicians don't want to pay for it sucks.
Well, it seems multi-messenger astronomy might provide an avenue to further particle physics exploration. After all, we got some giant particle accelerators around us in the form of black holes.
> The CDF measurement was performed over the course of many years, with the measured value hidden from the analyzers until the procedures were fully scrutinized. When we uncovered the value, it was a surprise.
Which we can interpret. One way is: We were working blindly on this, so our value is not biased towards hitting a particular number.
I haven't read the paper yet, but I can give a general idea based on my experience working on similar experiments.
The W boson doesn't live long enough to make it to the detector. All they can see are the decay products. The detector is like an onion, with different layers measuring the energies and/or momentum of different decay products.
So what a search like this is trying to do is look at these decay products, figure out which ones came from a W boson, combine the various energy and momentum measurements, and use that and relativity to determine the rest mass.
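As a toy illustration of that last step (not the actual ATLAS procedure; for the W, the escaping neutrino forces tricks like the transverse mass mentioned upthread): sum the four-momenta of the visible decay products and take the invariant mass.

    import math

    def invariant_mass(particles):
        # particles: list of (E, px, py, pz) in GeV; m^2 = E^2 - |p|^2 in natural units.
        E  = sum(p[0] for p in particles)
        px = sum(p[1] for p in particles)
        py = sum(p[2] for p in particles)
        pz = sum(p[3] for p in particles)
        return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

    # Two back-to-back 45.6 GeV leptons reconstruct to ~91 GeV, i.e. a Z candidate.
    print(invariant_mass([(45.6, 45.6, 0.0, 0.0), (45.6, -45.6, 0.0, 0.0)]))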
The measurement process always has some error, so you want to combine multiple measurements. This is how you get the statistical uncertainty. As an oversimplified example, imagine averaging multiple measurements. The more measurements you get, the smaller this error gets as your average gets closer to the true value. In actuality they're fitting some distribution to the mass measurements, but the same idea applies. The key thing is that the more data they collect, the smaller the statistical uncertainty gets.
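A toy numerical version of that point (made-up detector resolution, nothing to do with the real analysis): the spread of the average shrinks like 1/sqrt(N) as more events are collected, even though each individual measurement is just as fuzzy.

    import random, statistics

    random.seed(0)
    true_mass, resolution = 80.4, 5.0   # GeV, made-up numbers
    for n in (100, 10_000):
        means = [statistics.fmean(random.gauss(true_mass, resolution) for _ in range(n))
                 for _ in range(200)]
        print(n, round(statistics.stdev(means), 3))   # ~0.5 with 100 events, ~0.05 with 10k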
But there's also systematic uncertainty. Lots of things contribute to this, and it's effectively an indicator of how well they understand the detector and how well they understand the decay process. For example, the various systems need to be calibrated to convert their measurements to actual energies and momenta. These calibrations aren't perfect, and so the measurements aren't perfect. Or, when they determine which events look like W boson decays, they might be selecting some small fraction of other processes, or rejecting events in a way that biases towards higher or lower mass. These will throw off the final measurement. They try to correct for these, but there's never enough data to do so perfectly.
The key thing about systematic uncertainties is that no amount of measuring more decays will overcome them. There are other things you can do to bring them down, such as improving your detector simulations and doing additional calibrations. But there's a limit to how much they can bring this down.
As an example, imagine trying to measure the height of a building with a long measuring tape. You might measure it many times and then average, and that would give you a very small statistical error. But you don't know that the measuring tape was printed completely accurately, you don't know that it isn't stretching due to gravity or temperature, and you don't know that you got the tape perfectly lined up with the building. Those are all systematic uncertainties, and so when you report your measurement, they will be included in your total uncertainty.
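Sticking with the tape-measure analogy, a minimal sketch (again with made-up numbers) of why taking more measurements doesn't help with a systematic offset:

    import random, statistics

    random.seed(0)
    true_height, tape_bias, noise = 50.0, 0.3, 0.5   # metres; a mis-printed tape reads 0.3 m long
    for n in (10, 10_000):
        avg = statistics.fmean(true_height + tape_bias + random.gauss(0.0, noise)
                               for _ in range(n))
        print(n, round(avg, 2))   # converges to ~50.3, not 50.0: the bias never averages away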
It's not that great. But it's the difference between "in accordance with the best theory" and "suggestive of unknown physics".
Any unknown physics is likely to be a very small effect, at least numerically. That small effect could require a radical change in the understanding of physics -- like the tiny, hard-to-measure difference between Newtonian gravity and general relativity.
Unfortunately, it's yet another case where a small apparent effect turned out to be just noise rather than an actual novelty.
The problem with the statement "is the standard model incorrect" is that "the standard model" is a bit of a moving target.
Neutrinos, for example, are not fully understood since we found neutrino masses, and this is an active area of research[1]. However, depending on who you ask, they may say neutrino masses are part of the standard model or that they're beyond the standard model (i.e. "new physics"). AFAIK, those who say it's part of the standard model include some extension or modification[2] in their definition of the standard model, which the other folks consider something extra.
So from what I can gather, asking "is the standard model incorrect?" is a bit like asking "can I run this RISC-V program on this RISC-V processor?" The answer kinda depends on which extensions you include in the base definition.
There have been many "hmm, that's odd from a Standard Model point of view". The only one that has definitely been verified is nonzero neutrino masses, which were easily incorporated into the SM. There are still many open puzzles and tantalizing hints.
I know this is anthropomorphizing, but the Weak force always seemed like the weird hack or kludge of the fundamental forces. Yeah it works, but do we really need two different W bosons and also a Z boson? The electromagnetic force does just fine with just the photon.
More anthropomorphizing, the fact that there are three generations of matter feels like a premature optimization or someone applied too much abstraction and extensibility to the universe. One or two generations makes sense, but three feels like someone thought the system would need to scale to an infinite number of generations and then that didn’t happen.
Keep in mind that the photon is the fourth electroweak boson, giving a nice power of two. W+, W-, Z0, and the photon are all mixed from W1, W2, W3, and B, after symmetry breaking. The photon doesn't couple to Higgs, but still, at higher energy there's much more symmetry here than appears at first glance.
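For reference, the standard textbook mixing relations (θ_W is the weak mixing angle, sin²θ_W ≈ 0.23):

    W±  = (W1 ∓ i·W2) / √2
    γ   =  B·cos(θ_W) + W3·sin(θ_W)
    Z0  = -B·sin(θ_W) + W3·cos(θ_W)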
Worth noting that the electromagnetic force does not actually just use the photon.
The B boson is the gauge boson of hypercharge, and the photon we observe is a mixture of that B and the neutral W3; what we experience as electromagnetism on the macro scale really falls out of the combination of hypercharge and the weak interaction.
Caveat: I haven't used this stuff in years, it's possible I am talking shit.
https://cds.cern.ch/record/2745767/files/ATL-INDET-PROC-2020...