
The Physical Review is a gigantic set of journals, and like anything of its size, many of its published results are wrong. At this very moment I'm writing a rebuttal to a PRD paper that arrived at nonsensical conclusions due to some basic algebra mistakes.


Indeed, the author's 2000-page online textbook, heavily promoted on the internet, is a classic trap for unwary students.

It looks alright at first: volume I is light on math but rich in neat examples. However, it's full of intuitively plausible but slightly wrong statements which fall apart in more general situations, reflecting the author's lack of technical expertise. This problem steadily gets worse: volume IV is an oversimplified introduction to quantum mechanics which contains almost no math and serious conceptual errors on almost every page. Volume V covers a bizarre mix of particle physics, consciousness, and sexual reproduction. And volume VI is the author's almost math-free personal theory of everything. Because the change is gradual, a student can get seriously misled without noticing, like the proverbial boiling frog.

On HN, people are always asking how to get started self-learning topics like physics. The tragedy is that this has been a completely solved problem for decades: the standard textbooks are excellent. But people don't hear that message because self-promoters pollute the discourse.


> the standard textbooks are excellent

A caveat. Some years ago, at a first-tier university, some physicists and mathematicians were having lunch. A physics professor described how, days earlier, he thought he had found a case of a well-respected intro physics textbook saying something wrong. But, after some hours and days of thought, he realized the textbook was very carefully worded so as to not be incorrect. Yay. Most everyone smiled and agreed it was an excellent textbook.

A bit later, there was a quiet out-of-band question: So... if you're already an expert on the topic, and do a close read, after thinking about it for days, you will escape being misled... and this is a win??

There's an old physics education research joke: if you think your lectures are working, your assessments aren't. I've found that to apply to much science education content as well.


> the standard textbooks are excellent

Sorry, I have to disagree with this, at least with respect to quantum mechanics. The pedagogy of QM is atrocious because it generally focuses on the single-particle case and relegates entanglement to the sidelines while making a big deal out of the mystery of the measurement problem. This leaves students hopelessly confused. At least, it left me hopelessly confused for about ten years. Even today one hears physicists speak un-ironically of "quantum erasers changing the past" and other associated nonsense. If there's a standard text that inoculates against that, I have not seen it.


Can you point to a standard textbook that does this? The ones I'm familiar with definitely don't shortchange multi-particle problems.

And is the measurement problem not a mystery? If there's a convincing explanation, that's news to me.


> Can you point to a standard textbook that does this?

That does what? Focus on the single-particle case and punt on measurement? My two poster children are the Feynman lectures and Griffiths.

> The ones I'm familiar with definitely don't shortchange multi-particle problems.

What does your reading list look like? Maybe things have changed since I last looked.

> the measurement problem not a mystery?

It might be a mystery, but it is not the mystery most commonly presented, namely, that particles change their behavior "when somebody looks." This is nonsense. Measurement has nothing to do with "somebody looking"; it is just entanglement + decoherence. The only real mystery is the origin of the Born probabilities.

See https://flownet.com/ron/QM.pdf for a complete discussion.


A) I don't consider the Feynman lectures a "standard textbook." I don't think there exists any university that uses them as the primary reference in their quantum course. They're fine, as far as they go, but I think modern pedagogy is better.

Concerning Griffiths, what do you feel it lacks? You've got the hydrogen atom, fermions, bosons, helium, and probably more stuff that I'm forgetting right now. What else would you stick in an intro course? Hartree-Fock?

B) Decoherence doesn't solve the measurement problem. Even the decoherence boosters admit this. See, for example, Adler's paper on this: https://arxiv.org/abs/quant-ph/0112095.

This isn't to say the decoherence program isn't important. I think it is. It just hasn't solved the measurement problem.


What Griffiths lacks is an explanation of what a measurement is. He, like many other authors, explicitly avoids this because he says that measurement is an ineffable mystery, but it isn't. A measurement is a macroscopic system of mutually entangled particles. The only real mystery is why the outcomes obey the Born rule.

Decoherence does not solve the whole measurement problem. Like I said, it does not explain the Born rule. But it does solve parts of the measurement problem. Decoherence explains why measurements are not reversible (they are reversible in principle but not in practice because you would have to reverse O(10^23) entanglements). It explains why only one outcome is experienced (because you are part of the mutually entangled system of particles that constitutes the measurement, and all of the particles in the system are in classical correlation with each other). I don't know of any standard text that discusses this at all.

Whether or not Feynman is a "standard text" is quibbling over terminology. A lot of people learn QM from it (or at least try to).


I'm sorry, but your description of how decoherence purportedly solves parts of the measurement problem is incorrect.

Even decoherence researchers agree that decoherence theory does not do this. You can find references and details in the Adler paper I linked, or in Schlosshauer's "Decoherence, the measurement problem, and interpretations of quantum mechanics." (Schlosshauer is the author of a main reference on decoherence: http://faculty.up.edu/schlosshauer/index.php?page=books.)

So, the reason that Griffiths avoids giving the explanation of measurement you prefer is that it is wrong. It's a virtue of the book, not a fault. He does discuss decoherence on page 462 of the third edition, though.


> Even decoherence researchers agree that decoherence theory does not do this

Yes, but they are wrong. And it's not hard to see that they are wrong.

The crux of the argument is that the state predicted by QM:

|S1>|A1>|O1>|E1> + |S2>|A2>|O2>|E2>

where S is the system being measured, A is the measurement apparatus, O is the observer, and E is the environment, is not what is observed. What is observed is either:

|S1>|A1>|O1>|E1>

or

|S2>|A2>|O2>|E2>

neither of which is the predicted state above. Except that it is: |S1>|A1>|O1>|E1> is what is predicted to be observed by an observer in state |O1>, and |S2>|A2>|O2>|E2> is what is predicted to be observed by an observer in state |O2>. It is not that the prediction is wrong, it is that you, a classical observer, are not sufficiently omniscient to see both observations. You can only see one or the other.

And this too can be explained, though by quantum information theory rather than decoherence theory. In order to be a classical observer it is necessary to be able to copy (classical) information. The only way to do that is to discard some of the (quantum) information contained in the wave function. Being non-omniscient (i.e. being unable to directly observe a superposition) is a necessary precondition of being a classical observer.
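
To make the entanglement-plus-decoherence step concrete, here is a minimal numpy sketch (my own illustration, not taken from the linked PDF): tracing the environment out of an entangled state like the one above leaves the system in a diagonal density matrix, i.e. pure classical correlation with no interference terms left to observe.

  import numpy as np

  ket0 = np.array([1.0, 0.0])
  ket1 = np.array([0.0, 1.0])

  # Entangled system-environment state (|S1>|E1> + |S2>|E2>) / sqrt(2),
  # standing in for the full |S>|A>|O>|E> chain above
  psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

  # Full density matrix, reshaped to (S, E, S', E') indices
  rho = np.outer(psi, psi).reshape(2, 2, 2, 2)

  # Partial trace over the environment: rho_S[i, j] = sum_k rho[i, k, j, k]
  rho_S = np.einsum('ikjk->ij', rho)

  print(rho_S)  # [[0.5, 0], [0, 0.5]]: the off-diagonal (interference) terms are gone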


What are the standard textbooks? Could I just look at the curriculum of any physics program of a respected school and go from there?


Yes. Whatever MIT uses for their OpenCourseWare is going to be fine, for example.


This is a new proposal, fresh on the arXiv today, from a group of U.S. particle physicists. The introduction is very readable and lays out the mission clearly:

> We can now confidently claim that the “Standard Model” of particle physics (SM) is established. At the same time, we are more and more strongly persuaded that this SM is incomplete. [...] It is now common to describe the SM as an “effective” theory that should be derived from some more fundamental theory at higher energies. But we have almost no evidence on the properties of that theory.

> Our successes have become a liability in reaching this goal. Scientists from other fields now have the impression that particle physics is a finished subject. They question our motivations to go on to explore still higher energies. The scale of an energy frontier collider is also challenging to the young people in our field. They need to see qualitatively new capabilities realized during their active scientific careers. [...] That is where the urgency lies.

> [T]he entire C3 program could be sited in the United States. With the cancellation of the Superconducting Super Collider and the end of Tevatron operations the US has largely abandoned construction of domestic accelerators at the energy frontier. C3 offers the opportunity to realize an affordable energy frontier facility in the US. This may be crucial to realize a Higgs factory in the near term, and it will also position the US to lead the drive to the next, higher energy stage of exploration.

The main innovation is that they propose to use non-superconducting cavities, which allow much higher accelerating fields, and to cool them to increase their quality factor. The resulting shorter length dramatically decreases the cost, to an estimated $4 billion, which is 80% to 90% less than other proposals. Of course, $4 billion is no small amount of money, but for perspective that's about equal to the monthly budget of the National Institutes of Health, a third of the cost of the James Webb Space Telescope, or 2% of the total cost of the space shuttle.


I would expand on this to say that they propose to use an innovative cavity shape. Normal-conducting cavities can reach higher accelerating fields than superconducting cavities, but at much lower duty factors and with high voltage breakdown events. This new design would allow for lower power dissipation than typical normal-conducting structures, ultimately allowing higher RF duty factors.

There are still some downsides/tradeoffs compared to superconducting structures, including a much smaller beam aperture (5 mm diameter vs. ~100 mm for superconducting cavities), which degrades the quality of the beam. Superconducting machines can also be run in continuous-wave mode (100% duty factor), and state-of-the-art niobium cavities have been driven at ~50 MV/m in CW.


I dropped out of my theoretical particle physics Ph.D. program when the SSC was cancelled. Experimental results for the stuff I worked on were delayed by 20 years.


Don't worry: the lesson of the LHC is likely that the SSC would not have given us much. (Unless there are funny resonances at 20 TeV+.)


Agree. Had much the same thought. The SSC might have found the Higgs as predicted, but nothing new. Now, that's not to give the CERN people a hard time. They tried and are trying very hard to go beyond the SM, but mom (Mother Nature) isn't making it easy for them. In the US, I'm hoping neutrino work will give us the edge there.


Where are we at with femtosecond laser driven accelerators? It seemed like there were promises of table-top accelerators in the near future.


I was just thinking that laser-driven plasma wakefield accelerators have produced some promising results recently. Particle physicists still seem stuck in the big accelerator mentality when maybe there are better uses for that money, even in their own field.


I would say that 8 km is actually quite small, and besides, LPWA has its own problems.


Of course it has its own problems; the question is whether a novel approach is likely to yield more novel insights than the standard approach of the past 50 years. The answer seems kind of obvious.


One should keep in mind that most dark matter "alternatives", including this one, actually include dark matter. It says so right on the 2nd page of their paper:

> Consider requirement (iii), that is, successful cosmology. In (2) we have a new d.o.f. φ [...] What should the expectation for a cosmological evolution of φ be? The MOND law for galaxies is silent regarding this matter. There is, however, another empirical law which concerns cosmology: the existence of sizable amounts of energy density scaling precisely as a^(−3).

In other words, they are saying that to get the cosmology right, they need to add stuff that behaves exactly like dark matter -- that is what they are alluding to with the "sizable amounts of energy". They make their φ field play this role. It's just like TeVeS, the other major relativistic MOND theory, where the scalar "S" field does the same thing.
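
As a toy illustration of my own (not from the paper) of why "scaling precisely as a^(−3)" means "behaves like matter": a fixed number of particles spread over a volume that grows as a^3 dilutes as a^(−3), whereas radiation picks up an extra redshift factor.

  # Energy density vs. cosmological scale factor a:
  # matter dilutes with volume; radiation also redshifts photon by photon.
  for a in [1.0, 2.0, 4.0]:
      matter = a**-3     # fixed particle number over volume a^3
      radiation = a**-4  # same dilution, plus each photon's energy ~ 1/a
      print(a, matter, radiation)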

The popular press likes to frame the debate as "dark matter vs. modified gravity", but it's really "dark matter vs. dark matter plus modified gravity", which is much less dramatic.


Is that right? You're referring to the fact that, in general in modern physics, a "field" implies a field-carrier particle. Lambda-CDM dark matter theories specifically posit that various astronomical discrepancies can be explained using particles that only interact gravitationally, with a huge number of degrees of freedom (e.g. the difference between the Bullet Cluster and the Milky Way alone represents a LARGE set of degrees of freedom).

This seems rather different from a "single" field which may have a particle (that may or may not itself interact gravitationally), with only one additional degree of freedom.


> The popular press likes to frame the debate as "dark matter vs. modified gravity", but it's really "dark matter vs. dark matter plus modified gravity", which is much less dramatic.

Honestly for us lay folks there isn't a perceptible difference in the amount of drama between the two. :)


> We remark that A_\mu also contains a pure vector mode perturbation which is expected to behave similarly as in the Einstein-Æther theory [90, 91]

Their [91] is Jacobson & Mattingly https://arxiv.org/abs/gr-qc/0007031 whose §VII (DISCUSSION) contains this, which I struggle to see as helpful for them: "With the action adopted in this paper the aether vector generically develops gradient singularities even when the metric is perfectly regular. We take this as a sign that the theory is unphysical as an effective theory". (That doesn't stop Jacobson from investigating things like (time-independent) black hole solutions https://arxiv.org/abs/gr-qc/0604088 "It is a plausible conjecture that nonsingular spherically symmetric initial data will evolve to one of the regular black holes whose existence has been demonstrated here, but this has certainly not been shown", and worse, they show that the aether does not obey the Raychaudhuri equation, so the relativistic MOND authors seem to need more ghosts.)

For the life of me, I can't figure out the relevance of their reference [90], which I believe is https://www.jstor.org/stable/2414316

I wonder who their Reviewer 2 was.


The question is how much dark matter is required, and whether it's little enough to be accounted for by baryonic dark matter candidates.


And given how hard it has been to find the "Dark Matter", theories that reduce the amount of it seem like valuable contributions to the overall understanding. Dark matter has so many "if it's like ... then ..." scenarios that theories like this are effectively "working backwards" on the problem by giving us better constraints on the "then ..." part.


> What happened to the recent work showing that galactic rotation curves are consistent with ordinary GR? Last I read, cosmologists were choosing to ignore it.

Gravitomagnetism is a well-understood and experimentally measured effect. It is also a very small effect, of the order v^2 / c^2 where v is the speed of the sources. In the galaxy, stars move with v/c ~ 1/1000, which means the gravitomagnetic correction is one in a million. So while N-body simulations do sometimes account for general relativistic corrections like these, they're not nearly large enough to remove the requirement for dark matter.
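
For scale, the whole estimate fits in a few lines of Python (the ~220 km/s orbital speed below is my own assumed typical value):

  # Back-of-the-envelope size of the gravitomagnetic correction:
  # it is suppressed by (v/c)^2, the usual post-Newtonian factor.
  v = 2.2e5   # assumed typical orbital speed of stars in the galaxy, m/s
  c = 3.0e8   # speed of light, m/s
  print((v / c)**2)  # ~5e-7, i.e. roughly a one-in-a-million correction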

That is the simple reason the paper has been ignored by everyone in the scientific community and rejected from decent journals. Of course, this hasn't stopped hundreds of fluffy pop articles being written on it, or it getting posted every week on HN. The blind leading the blind.


What I am hearing is that nobody has found an error in his derivation; instead, everybody has chosen to continue skating on the v^2/c^2 estimate arrived at without having done the detailed maths.

In general, anytime mathematical rigor is at issue, I will prefer to bet on the plasma fluid dynamicist over the cosmologist.


Gravitomagnetism is a well-understood and experimentally measured effect. It is also a very small effect, of the order v^2 / c^2 where v is the speed of the sources. In the galaxy, stars move with v/c ~ 1/1000, which means the gravitomagnetic correction is one in a million. So while N-body simulations do sometimes account for general relativistic corrections like these, they're not nearly large enough to remove the requirement for dark matter.

The main thing the paper should do is explain why they think the correction is a million times larger than the back-of-the-envelope estimate. But they don't. Instead, they try to solve everything analytically, never plugging in numbers or reasoning about what's big or small, leading to a forest of long combinations of special functions. That's a reliable recipe for making a mistake.

That is the simple reason the paper has been ignored by everyone in the scientific community and rejected from decent journals. Of course, this hasn't stopped hundreds of fluffy pop articles being written on it, or it getting posted every week on HN. The blind leading the blind.


> Instead, they try to solve everything analytically...

This seems like a common refrain in lots of things I see (not just this one paper). Can anyone give a layman's explanation of why we can't just numerically simulate general relativity? As in, plug a simulation with 100 billion stars into a supercomputer and see what comes out.


It ought to suffice to simulate the motion of exactly one star, in a circular orbit, for exactly one time-step, just adding up all the effects of each of the 1e11 or so other stars, plus the interstellar medium. There are two possible end states: either it follows the circle (no dark matter needed) or it swings wide.
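
A toy Newtonian version of that one-star test is easy to sketch. Everything below (disk mass, scale length, sample size) is my own assumed stand-in, and it uses plain Newtonian gravity, not GR:

  import numpy as np

  # Toy one-star test: sum the Newtonian pulls from point masses sampled
  # from an assumed disk, and read off the implied circular speed.
  G = 6.674e-11            # gravitational constant, SI units
  M_sun = 2e30             # solar mass, kg
  kpc = 3.086e19           # kiloparsec, m
  M_disk = 1e11 * M_sun    # assumed visible disk mass
  N = 100_000              # sample masses standing in for ~1e11 stars

  rng = np.random.default_rng(0)
  r = rng.exponential(scale=3 * kpc, size=N)   # assumed 3 kpc scale length
  theta = rng.uniform(0, 2 * np.pi, size=N)
  pos = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
  m = M_disk / N

  star = np.array([8 * kpc, 0.0])              # test star at the Sun's radius
  d = pos - star
  dist = np.sqrt(np.sum(d**2, axis=1) + (0.05 * kpc)**2)  # softened distance
  acc = G * m * np.sum(d / dist[:, None]**3, axis=0)      # net pull on the star

  a_r = -acc[0]                                # inward radial component
  v_circ = np.sqrt(a_r * 8 * kpc)              # speed needed for a circular orbit
  print(v_circ / 1e3, "km/s")                  # compare to the observed ~220 km/s

The interesting part is at larger radii, where the observed curve stays flat while a sum over only the visible matter falls off; the point here is just that the summation itself is cheap.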


That is not the reason the paper has been ignored. If it really were wrong, somebody would say where. But most astrophysicists are nowhere near as familiar with the maths involved as the paper's author is.

It is ignored because it is inconvenient. There is no practical consequence for continuing to be wrong, in cosmology or astrophysics. You can be wrong and publish papers, be wrong and get hired, be wrong and get tenure. Meanwhile, there is no upside in letting dark matter have no role in galactic rotation curves. Feeling smug knowing everybody else is still deluded is a solitary vice. If it's right, that will probably have to be acknowledged someday, but there is no personal benefit to getting ahead of the curve, only irritation.

Cosmology has found myriad uses for dark matter besides patching up galactic rotation. Accepting reality means you need to explain why all the dark matter you have been using for these other things doesn't clump up into galaxies; or find some other way to explain what you have been using dark matter for. Dark matter is just too convenient: like the Shmoo, it can be almost anything you like, as much as you need, wherever you need it. Your use doesn't even need to be consistent with (almost) anybody else's.

When it finally becomes necessary to accept reality, no one will be embarrassed, because everyone will have lots of company, and it will never be mentioned again, at least anywhere polite.


This is why I left this site. Endless smug engineers explaining condescendingly to physicists why they’re stupid sheep, without knowing the first thing about anything. Intellectual curiosity, my ass. Do you have a reply to my concrete criticism or not?


You don't have a concrete criticism. You just have a complaint. Either the math in the paper is right, or it's wrong. If it's wrong, say where. If you can't find anything wrong, say that.


Thanks, I assume the same criticism also applies to other applying-GR-corrections papers? E.g. I saw one about gravitational self-interaction concentrating gravity inside galaxies and starving the outside, or something like that.


The same criticism applies for any other crappy paper HN likes. Whenever I check this site, half the time the front page has something even worse. Just assume everything you see here is wrong.


This is cool as always, but in case anybody is seriously contemplating using it: this list is infamous for its complete uselessness for anybody actually trying to learn. It's mostly recommended because of 't Hooft's name, but it doesn't reflect how he actually learned physics himself, nor how anybody ever has, really.

It's been "under construction" (i.e. completely abandoned) for two decades. Half the links are broken, and the ones that aren't tend to be whatever the top Google hit was in the 90s, not what's pedagogically best. If you're serious about learning physics, there are many much better roadmaps, like Susan Fowler's list (https://www.susanjfowler.com/blog/2016/8/13/so-you-want-to-l...).


Even better is to look up the undergrad/grad curriculum from a university and then look at the course webpages (many universities still leave their course materials available to anyone who has the link, without needing to log in through Canvas or a university portal). Pretty often you can get access to homeworks/exams and solutions, lecture notes, etc., in addition to seeing whatever textbook they're using.

Plus, there's the added benefit of helping limit "analysis paralysis" from having too many possible texts to choose from yourself: just pick whatever was standard for that particular class.


This is a great point. Out of all the ones I've looked into, I think MIT has by far the most complete public curriculum (because of MIT OCW), but Cambridge and Oxford are not far behind, with excellent lecture notes and problem sets.


Agreed, at some point I've used something from all three of those and they're all great! I've seen a surprising amount of great stuff from smaller universities too iirc, if you google around.


I'm sorry, but I can't take Susan Fowler seriously. She claims she went from zero math knowledge (beyond sixth grade) and a philosophy major to studying quantum field theory in a span of something like a year and a half [0].

Even if this wasn't horsesh*t to begin with, she went on to work in non-physics areas after graduation, and never did any research work in physics (no grad school either).

How convenient.

(likely explanation: either her undergrad program was super lax, passing pretty much everyone who shows up in class and exams, hence useless for a serious career in physics, or she's misrepresenting her background)

[0] https://web.archive.org/web/20170314073043/https://fledgling...


Well, I've read almost all the books she lists and I've been a quantum field theory practitioner for years, and I can at least attest the list is good. People actually learn from these books.

I think your comment also directly illustrates what I was complaining about. You really shouldn't source learning recommendations from the highest ranking people, because these people know the least about what it's like to learn something anew. A Nobel prize doesn't automatically make somebody a good teacher.


Carl Wieman would like to have a word with you. Ahahaa.


Not GP but if you mean that

> A Nobel prize doesn't automatically make somebody a good teacher.

is mistaken, with Wieman as the counterexample, could you elaborate on that, please?


Carl took his Nobel money for BEC and started a career in education and education research.


Sure, he is admirable that way. But the comment says "not automatically", which is not a throwaway qualifier. Personally, it seems to me that being good at teaching is at the least independent of being a good researcher, if not perhaps negatively correlated. That very much does not rule out extraordinary exceptions (ones that deserve a great deal of attention, for sure).


I just think they are not correlated. Both require you to put effort into being good at it. They also require you to have a firm grasp of the source material.


It's believable to me that a smart+motivated person whose reason for not knowing much math is lack of formal education could catch up a lot faster than you might expect. Educational pacing is generally designed for people who aren't smart and aren't motivated, so if you're both, you can go much faster.

Additionally, she was doing this at around 22 years old, which is in the age range that your brain reaches its optimum performance at learning new things.

She also wasn't starting from sixth grade math knowledge, more like spotty knowledge: she says she had learned some logic, algebra, and set theory.

It's annoying that she characterizes herself as a person who isn't smart/mathy/etc., when her story implies she has plenty of talent for it and just lacked the formal education. The vast majority of people do get a public school education or equivalent, and if they consider themselves bad at math, it's because they were having trouble learning it. If anything the story just demonstrates the dominance of talent+motivation over amount of educational background.

Edit: To elaborate, she says she expected math to be difficult because "I had heard throughout my life that math and physics were really difficult", not because she wasn't able to do well in her math classes. She says "I had the most difficult time possible taking intro physics and the beginning calculus courses", and yeah it's going to be challenging and a lot of work, but she doesn't say her grades came out bad in the end. The takeaway _should_ be that you need to be careful with second-hand opinions about what's difficult, because people vary so much in their aptitudes and interests.


Well, there's "learn" and then there's "Learn." One of my undergraduate QFT courses was taught by a nuclear physicist who wanted to spend the whole time talking about nuclear shells and mass gaps, so he crammed all the QFT in to the last half of the semester. In a blaze of glory, we ran though a bunch of linear algebra, got showed how to do Feynman diagrams and compute cross sections, and saw some vacuum solutions for the Dirac equation. After taking that class, I wouldn't say I knew QFT, but I could say I knew QFT without lying.

If you taught someone how to do derivatives in a half-semester blaze of glory like that, I bet you could combine it with the half-semester blaze of QFT glory to technically qualify as teaching a high school student QFT in half a year.

(I don't regret the professor's decision at all, by the way, I liked the nuclear stuff.)


Barton Zwiebach makes quantum mechanics pretty accessible: https://www.youtube.com/playlist?list=PLUl4u3cNGP60cspQn3N9d...


There is a sort of qualitative approach to QFT that can abstract away the difficult math and become a sort of kids' geometry game.

It's a little bit like programming an Arduino using the high-level scripting language and thinking you're a hardware hack0r.


Has Susan Fowler proved herself to be a good theoretical physicist? How does one determine that her roadmap is good but OP's is bad?


I personally know the vast majority of the material in both roadmaps, so I know that 't Hooft's is far harder to learn from. Anybody can check this for themselves. There are plenty of broken links, extremely rough drafts of lecture notes, and wild fluctuations in sophistication. The ordering puts graduate-level stuff before its sophomore-level prerequisites.

My statement would only be controversial if you believed that arbitrary adversity in learning was necessary to be a good physicist -- and for my own sake I hope that isn't the case!


Yeah, well, 't Hooft's is harder to learn from because it's actually a serious curriculum which is worth knowing, rather than training wheels from someone who never did anything serious in physics. The title is "how to be a good theoretical physicist," not "what someone might study as an undergraduate."


I mean, sure, but 't Hooft also denies every interpretation of QM other than superdeterminism, which is almost anti-science (no independence of experimenters).

I'd rather learn the mainstream before going solo.


Susan Fowler isn't and never was a physicist. 't Hooft won the Nobel Prize in physics. The title is "how to become a good theoretical physicist", not "what they taught me as an undergraduate." The end.


Wtf are you talking about? All the recommendations here are to look at what physics departments teach and do that, instead of listening to some jackass who thinks the only real physics is theoretical physics.


Another good source, for quantum and linear algebra in particular, is "Looking Glass Universe": https://youtu.be/r0plv_nIzsQ


Time to get a VPN if you want to communicate across the Great Firewall of America.


Curious about the downvotes. Do people believe that this curtailment of freedom can't happen in America? I know people already rushing to prepare for it.

Or perhaps they believe that it will happen, but that it's a good thing?

Or perhaps, as is increasingly common, they believe both simultaneously: "China's firewall restricts freedom; our copy of it promotes freedom."


> "China's firewall restricts freedom; our copy of it promotes freedom."

Sadly, I think there are a lot of people who would believe that.


spot on


Sort of the difference between a wall that keeps you in versus a wall that keeps unwelcome people out. But then again, some people believe that the US-Mexico border wall is morally equivalent to the Berlin Wall, so who knows how that argument will go.


> Sort of the difference between a wall that keeps you in versus a wall that keeps unwelcome people out.

And how do you think the Chinese state describes the Chinese firewall?


I don't know, but it is clear that the Chinese firewall is about preventing Chinese people from accessing the wider internet, not about preventing non-Chinese people from accessing the Chinese internet.


> the Chinese firewall is about preventing Chinese people from accessing the wider internet

It's not, really. Most of the "wider Internet" is accessible; only specific (mostly US or politically oriented) properties are inaccessible. Very much like what is being planned in the US.


> To me, again as an outside observer, it feels so counter-intuitive to _invent_ a new type of matter you can't observe than to just say that your calculation is close but not right and to start over. Is it not a crutch?

Physicist here. If you're doing applied physics or engineering, this certainly would be a crutch. But when we're talking about fundamental physics, talking about new kinds of matter that nobody has seen before is not a crutch -- it's literally the core thing we do. That's what makes it fundamental!

Saw a track in the bubble chamber curving the wrong way? Invent a new kind of matter: antimatter.

Saw short-lived particles in the bubble chamber that shouldn't have made it there? Invent a new kind of matter: mesons that decay into the observed particles.

Problems with getting solar reactions to work out right? Invent a new kind of matter: neutrinos.

Number of neutrinos detected not quite right? Invent multiple neutrinos and neutrino oscillations.

Saw some weird long-lived particles? Invent a new kind of matter: "strange" mesons and baryons.

Want to explain the pattern of mesons and baryons? Invent a new particle: "quarks", along with the stipulation that they can never be observed, even in principle.

Standard Model seems a little off-balance at this point? Invent a new particle: "charm" quarks to balance out the strange ones, at an energy high enough that nobody has seen them yet.

But the meson and baryon patterns still aren't consistent with the Pauli exclusion principle! Invent a new force: color charge, carried by "gluons", which are also postulated to be unobservable.

Some particular meson and baryon decays acting weird? Invent a new force: the weak force, carried by "weak bosons", which are too heavy to be observable at the time.

Can't get the weak bosons to have mass? Invent a new interaction, the Higgs interaction, carried by an invented new field, the Higgs field, which gets a vev from an invented new function, the Higgs potential, whose elementary excitations are an invented new particle, the Higgs boson.

Of course, not every weird thing is explained by a new type of matter; many anomalies fade away after careful checking. But the anomalous observations that motivate dark matter have persisted for almost a century, they've only been building in strength as we get more data, and all attempts we've made to explain them in terms of "normal" physics have failed. So the case for explaining them in terms of something new is at least as strong, in fact far stronger, than in the examples I gave above.


I guess what is different about dark matter is that it has to outmass regular matter by a large factor. It feels unparsimonious to invent four to five times the mass of the known universe just to patch a discrepancy between observations and a theory of gravitation. It feels like the theory would better be adjusted to match observation than to patch observations to match theory.

Today I learned that the mass of the neutrinos we know about (which were similarly invented, though since detected) roughly matches the mass of all the stars.


Actually, in the context of astrophysics, that exact objection has been employed many times. For example, the most famous argument against heliocentrism was that it would require the stars to be ridiculously far away and ridiculously big to patch away the lack of parallax, which felt unparsimonious. Similarly, people believed that galaxies weren't galaxies, because it seemed unparsimonious to expand the universe far beyond the Milky Way just to patch up some weird features of fuzzy nebulae. And even in our galaxy, the mass in dust and interstellar gas exceeds that in stars.
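
To put numbers on the parallax example (my own figures): annual parallax scales as 1 AU over the star's distance, so a naked-eye resolution limit of roughly one arcminute already pushes the nearest stars thousands of AU away, which is exactly what struck people as absurd.

  import math

  AU = 1.496e11                     # Earth-Sun distance, m
  eye_limit = math.radians(1 / 60)  # ~1 arcminute naked-eye resolution
  d_min = AU / eye_limit            # nearest a star could be with no visible parallax
  print(d_min / AU, "AU")           # ~3400 AU, vastly beyond the planets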

Literally all progress in fundamental physics is "just" "invented". Each time it must triumph against the objections of the same, thousand-year-old philosophical arguments.


Agreed.


This is one of those completely false things that people only believe is true by repetition. Go back and actually read the full set of WHO statements in mid-January. They have a bunch of statements saying that nations should get prepared, one saying that specific studies haven’t yet found hard evidence for person-to-person transmission (because at that point most of the cases they’d managed to find were tied to the market). The WHO never, ever said that it can’t be transmitted, and they absolutely never said that people should do nothing about COVID-19. They were urging nations to act for months before they actually did.


I’m in the “WHO is ineffective at best” camp, but I agree with you here.

They have been conservative in their statements. I don’t recall them ever saying “it doesn’t spread person-to-person” - I do recall them saying “there is no conclusive evidence of person-to-person transmission”. At the time, given the evidence they had, that was true. From their perspective saying that it did in fact spread person-to-person and later concluding it didn’t would have been much worse; I assume they take this approach to protect their reputation of being certain before making a public statement.

The problem seems to be that lay people expect the WHO to be on the bleeding edge, providing comprehensive information on the latest investigation and data. That's not what they do. They report the findings, and that's very different.


It seems that people expect the WHO to be clairvoyant rather than to report evidence. It's disappointing that a scientifically minded community like HN believes absence of evidence is evidence of absence.


Respectfully, could you link to the documents you were referring to?

A trend I see is folks telling others to go out on their own and find some document that proves their point. There are several statements by the WHO on those dates, and I do not know which one you mean.

I have unfortunately also seen cases where someone does link a source, but it is very lengthy and has the same problem. In one case, the source actually contradicted the person citing it.

I am not accusing you of that; it merely makes your argument much more credible when it is easy to see where you cited your sources.


Wikipedia has three clear links from Jan 14th: the Telegraph (UK), the Straits Times (Singapore), and Reuters, all conveying the quote that there was indeed human-to-human spread, but that at the time it seemed limited. Which is right.

The WHO further said

  "It is still early days, we don’t have a clear clinical picture."

https://www.reuters.com/article/us-china-health-pneumonia-wh...


Fair enough... but the grandparent comment said "Go back and actually read the full set of WHO statements".

I only found two WHO statements, and they are discussing the minutes of emergency meetings on COVID in January. Are those what they are referring to? If so, which one?


Here's a statement from Jan 9th

https://web.archive.org/web/20200115194156/https://www.who.i...

"Coronaviruses are a large family of viruses with some causing less-severe disease, such as the common cold, and others more severe disease such as MERS and SARS. Some transmit easily from person to person, while others do not. According to Chinese authorities, the virus in question can cause severe illness in some patients and does not transmit readily between people"

That seems very responsible reporting, given that, only 10 days after being made aware of it, the WHO wouldn't have been able to gather any independent evidence. They said it might transmit easily from person to person, or it might not.

Statement on Jan 13th

https://web.archive.org/web/20200115230651/https://www.who.i...

It certainly doesn't say anything along the lines of "there will be no human-to-human transmission"; it just calls (mostly unheeded in the West) for active monitoring and preparedness.

Taiwan didn't get its first case until Jan 21st, which was someone who travelled from Wuhan. It wasn't until Jan 28th that they had a domestic case which wasn't linked to Wuhan, so expecting the WHO to have independent evidence in mid-January isn't realistic. They reported the evidence they had and pleaded with the world to take it seriously. Taiwan listened; Europe and America didn't.


Tweet on January 14 [0].

"Preliminary investigations conducted by the Chinese authorities have found no clear evidence of human-to-human transmission of the novel #coronavirus (2019-nCoV) identified in #Wuhan, #China"

0. https://twitter.com/WHO/status/1217043229427761152


https://www.telegraph.co.uk/global-health/science-and-diseas...

14 January 2020 • 5:40pm

> While clear information about the mysterious virus remains hazy, the WHO said on Tuesday that transmission between humans has not been ruled out.

“From the information that we have it is possible that there is limited human-to-human transmission, potentially among families"

The WHO reports findings, and on Jan 14th there was limited evidence that it was spreading from human to human. All of that is true.


My point is that the WHO failed the world by trying to be politically correct with China and not listening to Taiwan. Here is how Taiwan dealt with it from the outset [0], and what Taiwan says about the WHO [1].

0. https://www.telegraph.co.uk/global-health/science-and-diseas...

1. https://www.ft.com/content/2a70a02a-644a-11ea-a6cd-df28cc3c6...


The CCP has told the WHO not to acknowledge the existence of Taiwan.

Here's what it looks like in action, with senior leadership at the WHO following orders from China (video interview):

https://twitter.com/ezracheungtoto/status/124386977441046937...?

