
He published an estimate of SARS-CoV-2 antibody seroprevalence in Santa Clara County, claiming a signal from a positivity rate that was within the 95% CI for the test's false-positive rate. Recruitment was also highly non-random.

https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...
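
For intuition, here's a minimal sketch of that comparison in Python, using hypothetical counts rather than the study's actual data: if the observed positivity rate falls inside the 95% CI for the test's false-positive rate, the raw result is consistent with zero true prevalence.

    # Illustrative sketch only -- the counts below are hypothetical, not the study's data.
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        # Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n.
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    # Hypothetical test validation: 2 false positives out of 400 known-negative samples.
    fp_lo, fp_hi = clopper_pearson(2, 400)

    # Hypothetical survey: 50 positives out of 3300 participants.
    positivity = 50 / 3300

    print(f"false-positive rate 95% CI: ({fp_lo:.4f}, {fp_hi:.4f})")
    print(f"observed positivity rate:   {positivity:.4f}")
    print("consistent with zero true prevalence:", fp_lo <= positivity <= fp_hi)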

Such careless use of statistics is hardly uncommon, but it's funny to see that he succumbed too, perhaps blinded by the same factors he identifies in this paper.

Beyond that, he sometimes advocated for a less restrictive response on the basis of predictions (of deaths, infections, etc.) that turned out to be incorrect. I don't think that's a conspiracy theory, though. Are the scientists who advocated for school closures now "conspiracy theorists" too, because they failed to predict the learning loss and social harm we now observe in those children? Any pandemic response comes with immense harms, which are near-impossible to predict or even articulate fully, let alone trade off in an unquestionably optimal way.


Furin cleavage sites are common among coronaviruses, but rare among sarbecoviruses (SARS-like viruses). The presence of the FCS is not determinative, but it's notable, especially since the EcoHealth Alliance (EHA) had proposed in DEFUSE to engineer sarbecoviruses with a human-designed FCS.

Under that proposal, the WIV would have worked only on collecting novel viruses from nature, with the genetic engineering done at Baric's lab at UNC. DARPA declined to fund the proposal, citing safety concerns. The Times reported an intelligence leak stating the work continued with other funders at the WIV only, though I haven't seen confirmation beyond that one article.

> The investigators spoke to two researchers working at a US laboratory who were collaborating with the Wuhan institute at the time of the outbreak. They said the Wuhan scientists had inserted furin cleavage sites into viruses in 2019 in exactly the way proposed in Daszak’s failed funding application to Darpa.

https://www.thetimes.com/uk/article/inside-wuhan-lab-covid-p...


Miller's claim rests on the assumption that the WIV would have published every virus it had collected. That's a facially odd claim--there's no research group in the world without unpublished work in progress, since even with no deliberate attempt to conceal, it takes time to write stuff up. We also know it to be specifically false here, since Deigin and some other authors found an unpublished merbecovirus in contamination of published datasets from agricultural sequencing on equipment shared with the WIV:

https://pubmed.ncbi.nlm.nih.gov/36865340/

That merbecovirus is unrelated to SARS-CoV-2; but with unequivocal evidence that the WIV had at least one unpublished virus, it's hard to reject the possibility they had two.


I guess this meme will never die? The WIV absolutely was not sited there because of local spillover risk, to the point that Dr. Shi used blood samples from Wuhan as negative controls. The closest known bat viruses were collected around Yunnan (~900 miles away) and in Southeast Asia.

EDIT: And since my comment is downvoted, here's a reference from Dr. Shi herself:

> We have done bat virus surveillance in Hubei Province for many years, but have not found that bats in Wuhan or even the wider Hubei Province carry any coronaviruses that are closely related to SARS-CoV-2. I don't think the spillover from bats to humans occurred in Wuhan or in Hubei Province.

https://web.archive.org/web/20210727042832/https://www.scien...

No one expected natural spillover near Wuhan. That doesn't mean it didn't happen, but the claim that the WIV was sited there on that basis is unequivocally false.


The paper itself was already submitted and briefly discussed, https://news.ycombinator.com/item?id=41593098

Like previous attempts from the same authors (e.g. Pekar's "The molecular epidemiology of multiple zoonotic origins of SARS-CoV-2"), it starts with low-quality data from the early pandemic, does some complicated math, and concludes that the pandemic originated from zoonotic transmission in the Wuhan market, "beyond reasonable doubt". The implication is that it therefore wasn't a research accident, and that there is no need for increased regulation of high-risk research of the type performed at the WIV.

The details can and should be criticized, and I linked a detailed rebuttal in the other post. But fundamentally, how could any model reach any conclusion with the confidence they express? Western authorities had prior warning, much more advanced surveillance, and good knowledge of where the virus might get introduced (by necessity, an airport or seaport)--and yet they succeeded in tracing almost zero cases back to their introduction. So how could anyone possibly believe that a much cruder set of tools has succeeded at the more difficult task of determining the pandemic's earliest origins?


Anecdotal as it may be, an investor in a company I used to work for was from Wuhan. He paid $80k USD to get out of the country in December 2019. They knew it was bad then and anyone who could was fleeing.

The same set of authors has brought us at least two prior attempts ("proximal origin", "multiple zoonotic origins") to reject the possibility that the SARS-CoV-2 pandemic arose due to a research accident. All have been covered credulously in the popular media, contributing to the false consensus that e.g. caused Facebook to delete opposing arguments. This false consensus has now broken to some extent, but apparently not yet among high-impact journals.

As with the prior attempts, this paper's result is grossly overstated. Biosafety Now has published a detailed call for its retraction:

https://biosafetynow.substack.com/p/crits-christoph-et-al-20...

I don't think the details are really necessary, though. Approximately zero cases outside China were traced to their introductions, despite the forewarning, and despite the restricted set of options (an airport or seaport). This isn't for lack of trying--it's just really hard to do that from epidemiological data that's necessarily scarce and biased, especially for a virus whose frequently mild symptoms mean most cases never get ascertained.

So why would anyone believe they'd succeeded at the much more difficult task of tracing that very first introduction? The usual answer seems to be "because the paper was full of math that I didn't understand, and I trusted the authors"--but that's a pretty bad reason, especially when the authors are funded by and coordinated closely with the agency that advocated for (and funded!) the high-risk research in question.


Thank you for the context. My reasons for thinking it was credible have nothing to do with "math I didn't understand."

I actually think the larger problem is how it spread via travelers and that some of the actions taken nominally for purposes of controlling the spread actually made things worse. People were herded together in airports to be checked or some nonsense.

We may never know the origin story, and I still don't know what to suggest in practical terms for preventing something similar from happening again, but I do think it needs to be addressed someplace other than "wear face masks and use hand sanitizer" while otherwise doing the same stuff that helped spread the virus around the globe.


I hadn't read your top-level comment here before I wrote mine, but I think you're responding to a different question from the one the authors intended to answer. The paper's language is rather muddy (even vs. the preprint), I assume because Cell required the authors to weaken their claims. The authors' comments to the popular media express their intent more clearly:

> "This paper slots into many other studies over the last few years that have been building the case for this very clearly being a natural virus that spilled over, very likely at the Wuhan seafood market," Kristian Andersen, co-corresponding author and professor at Scripps Research, told Newsweek.

https://www.newsweek.com/scientists-shed-light-wildlife-spec...

This paper is about that initial introduction of the virus into humans, not about subsequent human-to-human spread. The authors are arguing that SARS-CoV-2 was "very clearly" natural, and thus not a research accident. This forms the basis for arguments that additional regulation of high-risk biological research is unnecessary--an argument that's much harder to make if such research may have just killed ~30M people.


I didn't assume it was some kind of rebuttal of my comment. I'm generally looking for genuine, meaty discussion.

The pandemic impacted the entire globe and a lot of internet comments were driven by fear, not genuine curiosity or interest in problem solving.

While I understand why that is, it doesn't lead anywhere good.

I am perfectly happy to accept your assertion that context suggests this is basically a politically motivated piece trying to dismiss claims that it originated in a lab.

I wrote a piece elsewhere that boils down to "Christmas travel brought us the global pandemic." Regardless of where the issue originated, it spread globally and didn't remain a local crisis thanks to global travel and how that gets handled.

I don't have answers but I don't like the way the whole thing was handled and it's nigh impossible to have meaningful discussion of that with anyone anywhere on the Internet.

And given the lack of quality discussion, it's impossible to develop a good framework for how to even see the problem space.

My marriage was a case of opposites attract and we were once shopping for a bookcase and I hated the bookcase he wanted and he hated the bookcase I wanted. So I finally had the sense to ask why he liked it.

I wanted something pretty. He wanted something sturdy that wouldn't collapse under the weight of the books.

Armed with this information, it was possible to find a bookcase we both liked.

Decades later, most internet discussion seems to still be stuck in that space before I asked that question where we both thought the other person was clearly an idiot. Only I don't know how to get past it online.


"...we were once shopping for a bookcase and I hated the bookcase he wanted and he hated the bookcase I wanted. So I finally had the sense to ask why he liked it"

"Decades later, most internet discussion seems to still be stuck in that space before I asked that question where we both thought the other person was clearly an idiot. Only I don't know how to get past it online."

Thank you


> This forms the basis for arguments that additional regulation of high-risk biological research is unnecessary

> such research just killed ~30M people

It was a lab leak... I should know. The Chinese government has admitted it in secret, and let's say they have made agreements to make affected nations whole, behind closed doors and with diplomacy. This has in turn trickled into media and social media, indirectly and directly through Chinese inducements, ensuring that the lab leak theory is both underplayed and framed in a "we can't know for sure" light. Textbook water-muddying, where all sides have something to gain. If it's any consolation, the party responsible for screwing up and killing more people than Hitler, Genghis Khan, and Stalin combined... they have been dealt with appropriately by China.

How is a Substack blog an authority here?

Substack is an openly-available hosting platform with almost zero editorial standards, so I'm not sure why anyone would consider it to have "authority"? It's like asking which brand of ballpoint pen writes the most trustworthy content.

I linked to that article because I read its content, and I believe that content to be correct. If you're looking to go by prestige instead of content, then the authors and signers are professors of molecular biology and adjacent fields, many from highly-ranked universities.

The Cell authors think SARS-CoV-2 arose naturally, beyond any reasonable doubt. The Biosafety Now authors think there's a possibility that SARS-CoV-2 arose from a research accident, and that tighter regulation of enhanced potential pandemic pathogens is therefore required. These are directly opposing views, on a question that may correspond to millions of past deaths, and yet more in future. What do you think?


To emphasize, the "GPT Tutor" kids didn't do worse on the exam than the control kids, but they didn't do better either. The effect was slightly negative, statistically insignificant:

> Student performance in the GPT Tutor cohort was statistically indistinguishable from that of the control cohort, and the point estimate was smaller by an order of magnitude (-0.004), suggesting minimal impact to performance in the unassisted exam.

"GPT Base" would provide complete answers, but "GPT Tutor" was prompted to provide only hints. So the result is perhaps that:

1. Given the option to let a machine ("GPT Base") do their homework, many kids will lazily take it. These kids won't learn as much.

2. A machine that refuses to do their homework ("GPT Tutor") doesn't cause that problem. It doesn't seem to help either, though.

I'd guess that laziness, rather than mistakes made by "GPT Base", explains most of the harm, though I have no particular evidence for that. Maybe someone will repeat this study in a domain where "GPT Base" makes fewer mistakes, allowing those two effects to be distinguished. (Though would that pass ethics review, now that "GPT Base" is known to impair learning in at least some cases?)


These comments are filled with misunderstandings of the result. There were three groups of kids:

1. Control, with no LLM assistance at any time.

2. "GPT Base", raw ChatGPT as provided by OpenAI.

3. "GPT Tutor", improved by the researchers to provide hints rather than complete answers and to make fewer mistakes on their specific problems.

On study problem sets ("as a study assistant"), kids with access to either GPT did better than control.

When GPT access was subsequently removed from all participants ("on tests"), the kids who studied with "GPT Base" did worse than control. The kids with "GPT Tutor" were statistically indistinguishable from control.


Changing things almost always improves results; that's the first rule to remember in education testing. Most of the improvements disappear when the change becomes standard.

This effect likely comes from novelty: the new thing is more interesting, so kids are more alert, but once they're used to it, it's the same old boring thing and results go back to normal. Of course a change can genuinely improve or worsen outcomes, but in general it's really hard to tell; you need a massive advantage over the standard approach during testing to get any real improvement, and most of the time you just make things worse.


Reading comprehension is really awful nowadays, or people tend to comprehend only what confirms their prior beliefs. The sad part is that none of those people will ever realize the errors in their comprehension of the article. That's exactly one of the mechanisms by which people form wrong opinions; it compounds and is almost impossible to change.

The takeaway (besides the research still needing replication) should be to control the type of AI agent given to students: one that doesn't just hand over answers to copy but provides tutoring. OpenAI should be forced to develop such a "student mode" immediately, and parents and educators need to be made aware of it so they can make sure students use it; otherwise students will get much worse on tests, since they'll just ask it for answers to copy into assignments.


Assuming that kids with a "Human Tutor" would have been statistically better than control (there was no such arm in the study, so we can't know), this is a very poor showing for ChatGPT.


Vaccines designed at a genomic level target the spike because that's the protein at the surface of the virus, and thus believed to be the most effective target for your immune system. That belief seems to have been correct (at least as to infection), since mRNA vaccines against the spike had the highest efficacy. The spike is unfortunately also under heavy selective pressure for that same reason, leading to the rapid evolution that you note; but that's the inherent and unavoidable tradeoff.

Inactivated virus vaccines (like Sinovac) don't specifically target the spike, since they don't target any single protein. They do induce anti-nucleocapsid immunity, but there's no evidence that corresponds to any clinical benefit.

I don't think the word "immunogenic" means what you think it does. Vaccines are supposed to be immunogenic (i.e., to generate an immune response); if they weren't then they wouldn't do anything. I've seen papers proposing that the spike protein is toxic in certain cases, but no evidence that the exposure from the vaccine minus the exposure from averted infections is net harmful.


I think you and p51-remorse are discussing different parts of the article. They're saying the updated analysis is suspect because of the risk of false discoveries. I believe that's probably true in the usual way--if we study 20 subgroups with no actual effect, then we expect one to show an effect with p < 0.05. There's no mention of preregistration or anything like a Bonferroni correction to manage that risk.
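
A quick simulation makes that concrete. The parameters here (20 subgroups, 200 subjects per arm, a plain two-sample t-test) are illustrative assumptions rather than the study's actual design, but the point is general: with 20 independent null tests, at least one nominally significant result is the expected outcome, not evidence of an effect.

    # Illustrative simulation -- subgroup count and sample sizes are assumptions,
    # not taken from the study. With 20 independent tests of a true null,
    # P(at least one p < 0.05) = 1 - 0.95**20, roughly 64%.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_sims, n_subgroups, n_per_arm = 2000, 20, 200

    hits = 0
    for _ in range(n_sims):
        pvals = [
            ttest_ind(rng.normal(size=n_per_arm), rng.normal(size=n_per_arm)).pvalue
            for _ in range(n_subgroups)
        ]
        hits += min(pvals) < 0.05

    print(f"simulated P(any p < 0.05 under the null): {hits / n_sims:.2f}")
    print(f"analytic 1 - 0.95**20:                    {1 - 0.95**20:.2f}")
    print(f"Bonferroni-corrected threshold:           {0.05 / n_subgroups:.4f}")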

You're saying the original analysis was wrong due to a coding error. I believe that's also true, but that's not what they were discussing. The variable names are inscrutable, but the article text also seems to imply that line (mis)codes divorce, not severe illness:

> People who left the study were actually miscoded as getting divorced.

So they actually found a correlation between severe illness and leaving the study. That's perhaps intuitive, if those people were too busy managing their illness to respond.

