They claim a 7.2 sigma result, but the null-hypothesis distribution (called PDF(exp) in the paper) was determined from simulations of one atomic transition and "external pair creation on the target backing", and the only source of systematic error they took into account was the position of the target relative to the beam line. That seems a little paltry to me as an accounting of possible error sources... And given that the excess appears only in a particular angular region, they seem to be ignoring the look-elsewhere effect ("oh, no excess signal here? Let me look elsewhere...") and overstating their significance; see the rough sketch below. The paper also is riddled with spelling and grammar mistakes, and while that doesn't have anything to do with the science, it does say something about the sloppiness of the researcher, I think.
Anyway, I can't imagine this passing muster in peer review in its current state. But I'll be curious to see the revised paper with more details included (if they can actually get it published).
EDIT: They did explain how they determined the background distribution; I've edited my comment above.
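To make the look-elsewhere point concrete, here's a rough sketch of the trial-factor arithmetic. The trial counts are made up for illustration; the real correction depends on how the search was actually performed.

```python
# Sketch: how a look-elsewhere trial factor dilutes a local significance.
from scipy.stats import norm

local_sigma = 7.2
p_local = norm.sf(local_sigma)  # one-sided local p-value, ~3e-13

# If the bump could have shown up in any of N independent angular
# regions, the chance of seeing one *somewhere* is roughly N times larger.
for n_trials in (1, 10, 100):
    p_global = 1.0 - (1.0 - p_local) ** n_trials
    print(f"{n_trials:3d} trials: global significance "
          f"~ {norm.isf(p_global):.2f} sigma")
```

At 7.2 sigma the dilution is modest (down to roughly 6.5 sigma for 100 trials), but the same arithmetic matters a great deal for claims sitting near the 5 sigma discovery threshold.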
> The paper also is riddled with spelling and grammar mistakes, and while that doesn't have anything to do with the science, it does say something about the sloppiness of the researcher, I think.
I have reviewed many papers that were quite badly written at first but were not bad papers at all. Many non-native speakers are weak in English, so they rely on an external editing service, and they do not want to pay for that service until the paper has been properly reviewed and they have made the proposed changes. Of course it does not send a good signal, but the content of the article should not be judged on that basis.
Yes, they are normal... but they also aren't even peer-reviewed yet.
Ergo, you can treat it as more significant than a random physics crank posting on his blog, but not as trustworthy as something that has in fact been properly vetted.
>> ArXiv pre-prints are pretty normal in physics and math.
Correct, and they are usually followed up by a published, peer-reviewed paper as well. I look forward to the day this one also gets reviewed and published.
CERN's NA64 experiment has been looking for a similar particle and so far hasn't found it. The results haven't completely ruled it out, but they don't seem to support a claim that this particle has definitely been found.
Agreed. This new-boson business has been declared and then punted on since 2015/2016. Still, I hope something new breaks loose... We could use a shot in the arm.
No, the Higgs was considered the most probable thing the LHC could find, including the null result of "nothing". People are always going to be skeptical of particular measurements because this stuff is so tricky and the incentives are huge for discoveries, but the prior of experts was that the Higgs would exist.
I studied the philosophy of science a bit in university and one of my professors did some work on reactions in the scientific community to the Higgs boson discovery. I remember her remarking about how "this is a huge discovery" and "we knew we would see this; it's not a big deal" narratives were popular simultaneously.
I don't actually think those narratives are in strong tension, especially if each were stated precisely. In short, "surprisal" is not the sole measure of importance for a scientific discovery.
That is a better comparison, but not a great one. A new boson (and associated force) would be exciting but completely in line with known physics. Speed-of-light violations would be absolutely revolutionary, threatening almost everything we know about fundamental physics.
> personally I don't believe faster-than-light neutrinos exist.
You don't have to believe they exist; they were quickly and conclusively disproven. As I recall, GPS was used as the time source, and the error turned out to be in the timing chain itself: a loose fiber-optic cable and an oscillator ticking at the wrong rate.
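For scale, here's a back-of-envelope with the numbers I remember from the OPERA anomaly: a roughly 730 km CERN-to-Gran-Sasso baseline and a roughly 60 ns early arrival. Treat both figures as approximate.

```python
# Why a tiny timing error could fake faster-than-light neutrinos.
c = 299_792_458.0   # m/s
baseline = 730e3    # m, approximate CERN -> Gran Sasso distance
early = 60e-9       # s, apparent early arrival reported by OPERA

t_light = baseline / c          # flight time at the speed of light
print(f"flight time at c: {t_light * 1e3:.2f} ms")   # ~2.44 ms
print(f"(v - c)/c ~ {early / t_light:.1e}")          # ~2.5e-5

# A 60 ns offset against a 2.4 ms flight time: small enough that one
# bad connector in the timing chain can produce the whole "signal".
```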
I wouldn't go so far as to say it was expected; it was more of a placeholder. It was more: "Well, something doesn't add up here, so this 'boson' should fill that gap." And people went and searched for it.
I feel something similar about what we see here; the problem is trying to tie everything else that we know back into it... I say problem, but I really mean excitement to understand a bit more!
Again, this could all be completely wrong, and it could even come down to a bad input somewhere along the chain, but the idea is definitely something worth investigating!
Remember, the Higgs boson brought us the idea that particles which would otherwise be massless, like the photon's cousins the W and Z bosons, can acquire mass. It's quite a wacky idea if you look at it from the vantage point of what we knew many moons ago!
Surprisingly confident tone in the title for an experimental result that is far from confirmed! Experimental research is damn hard, and it is very easy to make tiny oversights that produce signals like this (speaking generally, with all respect to the Hungarian lab that has made this claim). While the possibility is exciting, public-facing communication should be very careful not to jump the gun.
Not really directly related to the article, but YouTube recently recommended to me a series of videos from "Strange Loop" which is some kind of tech conference. One of the talks was "Jagged, ragged, awkward arrays" by Jim Pivarski [1]. It is fairly narrow in scope talking about a very particular data modelling problem but it introduced me to the kind of data processing physicists do to make these kinds of discoveries. Just wanted to share in case anyone else finds this kind of thing interesting.
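For anyone curious what "jagged" means in practice: each collision event contains a different number of particles, so the data doesn't fit a rectangular numpy array. A minimal sketch using the Awkward Array library from the talk (the energies below are invented for illustration):

```python
import awkward as ak  # pip install awkward

# particle energies (MeV) per event; note the ragged lengths
events = ak.Array([[31.2, 5.8, 12.1], [], [48.0], [7.7, 19.3]])

print(ak.num(events))           # particles per event: [3, 0, 1, 2]
print(events[events > 10.0])    # per-particle cut, jaggedness preserved
print(ak.sum(events, axis=1))   # total energy per event
```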
Heh, I went to grad school with him. Haven't heard what he's been up to in over a decade.
I was solidly in the "C++" column back then, but have since become a data scientist who uses numpy/Python for all my machine learning. That talk was very interesting; it helped me understand what they're doing in my old field these days. Thanks for sharing.
I was extremely hyped the last time something about this news was posted; someone explained to me that potentially model-breaking things happen all the time, and that this needs super intense cross validation before it's actually considered valid.
This research group has claimed discoveries of different bosons before [0], and none of those results has ever been independently reproduced. The scientific community is right to be skeptical: if there is real physics there, then follow-up experiments should be able to reproduce the effect.
If I'm reading this correctly it seems like it's pretty interesting even if it's not validated. They either discover a new particle or they discover a new way to arrive at anomalous results.
I'm thinking back to the team whose result made it appear that neutrinos had traveled faster than light (IIRC). Even they didn't fully buy into their result, but they published because they had checked it every way they could think of and couldn't explain it. As I recall, the underlying error was pretty interesting on its own and in some sense contributed something new to our understanding of "how not to be wrong".
Well that's rather interesting. Sounds like 2 separate detections by the same team and no other confirmations yet. And the presumptive new particle doesn't sound like it's predicted by any of the current beyond-standard-model theories. So we're gonna need a few more detections by a few more teams. Going to be very interesting results for theoretical physics if it holds up though.
If this is substantiated, it will be retrodicted by any number of theories.
I keed, I keed... but I'm also partially serious. There's been a ton of theories over the years that would produce new particles, whose only flaw is that those particles weren't found. It wouldn't be at all surprising if there are past theories that with just a little tiny sensible tweak will turn out to contain this particle, and possibly also some other explanation as to why it wasn't found until now. In a sense, that's just a rephrasing of the fact that particle physics has been starved for data over the past couple of decades. No shortage of ideas, but no data to test them against. It really wouldn't be that surprising that several of the tons of ideas will be able to easily accommodate this new data, even as others are completely destroyed by it.
Well I can't speak to any number of theories, but the beauty of E8 is either the boson was predicted or it wasn't...if it wasn't, they don't have the luxury of revising E8 to create another boson or particle.
"Essentially, the scientists took some lithium and shot protons at it."
How does this actually work? Presumably it's a very small amount of lithium. What is it held in? How are the protons so accurately steered towards it? The accelerator ring is comparatively very wide, right?
Lithium is cheap. The target probably has to be cooled while it's being blasted so they can run long enough. Steering proton beams is exactly what accelerators are for. You need a small target spot so the detectors see everything from a fixed angle.
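For a sense of the numbers: protons on lithium-7 make an excited beryllium-8 state at about 18.15 MeV, which can decay by emitting an e+e- pair, and the reported excess sits near a 140-degree pair opening angle. A back-of-envelope (the symmetric energy split is a simplifying assumption) shows why that points at a ~17 MeV particle:

```python
import math

# invariant mass of an e+e- pair, neglecting the electron mass:
# m^2 = 2 * E1 * E2 * (1 - cos(theta))
E_total = 18.15              # MeV, 8Be* excitation energy
E1 = E2 = E_total / 2        # assume the pair shares the energy equally
theta = math.radians(140.0)  # opening angle of the reported excess

m = math.sqrt(2 * E1 * E2 * (1 - math.cos(theta)))
print(f"pair invariant mass ~ {m:.1f} MeV")  # ~17 MeV
```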
This will probably get much more traction once an independent group confirms it. I don't believe this article presents any new information on X17 that we haven't already heard.
It is possible for researchers to have limited institutional budgets and relatively low wages. Of course, final versions should be properly corrected. But please keep in mind that some might not have the best means / fortunes. Science does benefit from the substance of the paper getting reviews before the final version that will be there for the ages.
>> why should the reviewers be obligated to give you the benefit of the doubt?
That is a technical paper with 5 pages of text. Their PI has a few hundred published papers, so he probably just does not care. The reviewers usually don't care either, as long as the core contribution is legit.
> it is not generally (ever?) in the job description.
You're wrong! Here's just one counter-example I found in seconds: 'participation on program committees, advisory panels, and editorial boards'. And that's not even an academic position; it's industry.
None of those activities are actually the act of peer reviewing papers.
It’s well known that getting sufficient peer reviewers is a problem in computer science (and I’d bet most fields) specifically because it’s not actually anyone’s job. Anyone doing peer review is either doing it on their own time or taking the time away from their “real” job. I’m sure there are employers that will say “yes, you should be doing peer reviews as part of your job” but those same employers probably won’t reward you any differently regardless of whether you do peer reviews.
You are also technically getting paid for browsing Hacker News right now. The company employing you probably wouldn’t say it pays you for that.
> None of those activities are actually the act of peer reviewing papers.
Being on a program committee is reviewing papers. That’s what the program committee do.
> you should be doing peer reviews as part of your job
If it’s part of your job, which you agree it is, and it’s literally in the job description that companies post, as we saw it was, then you’re being paid for it. Baffling that people still say it isn’t.
> those same employers probably won’t reward you any differently regardless of whether you do peer reviews
I’ve got a colleague at another company who gets a bonus for every program committee he’s on. He is literally rewarded more if he reviews papers than if he doesn’t.
I get review requests from journals, and it is my personal decision to accept them. My employer (a university) does not even know whether I review papers or not. I have no obligation to do them and receive no compensation of any kind for doing them. This is normal, and I consider it not being paid for reviewing. Baffling that you find it baffling.
And although it is true that being on a program committee may involve reviewing, that is not the same thing as peer review for a journal. I find it weird that you have such strong opinions about this if you do not understand the difference.
Shrug. Once I was a published grad student, I got asked to do a fair bit of peer review and was always told they really needed the help because they didn't have enough peer reviewers, and most people avoided it because it was not actually their job (not in reality, regardless of whether it hypothetically counted). Maybe I was misinformed.
If you can't be bothered to communicate clearly then you have expressed a lack of interest in being understood clearly. If you don't want to publish in English then don't publish in journals that require English.
Whatever. There are ways you can signal that you care about what you're doing, and there are ways you can signal that you don't. Blowing off the grammar of the language you're writing in is one of the latter signals.
In a world where there's always more stuff to read than time available to read it, signals of diligence and competence are important if you want to be taken seriously.
There's a difference between bad grammar/spelling and ambiguity; ambiguity is the worse issue. Some journals have fields in their review report where reviewers can say if the text is unambiguous, but no field about grammar or spelling in general.
It is not unusual (at least in some fields) to see specific questions about grammar. It must be checked and maybe fixed, and in some extreme cases it can even mean rejection independently of content, but in most cases a revision is enough.
Oh, by "field" I meant a field in a record, not an area of study. For example, in some reports I had to check a box for unambiguous text, but there was no checkbox for grammar or spelling. That's some ambiguous text on my part! :)
Your text was fine! I understood what you meant. I have seen the specific box about grammar in some journals. But my field is not computer science or physics, so my experience may be quite different from most people here.
I work for a journal in this field. Grammatical errors are OK in a manuscript as long as reviewers can follow the reasoning. We will copy-edit for grammar and style later if it's accepted. We won't change the significant findings of a manuscript, though, so if the paper is unclear and gets rejected by referees, that is how it stands. Referees will usually ask for revision if something is not particularly clear. But a manuscript that comes in incomprehensible runs the risk of being summarily rejected.
https://arxiv.org/abs/1910.10459