Just go with the article title. The use of "ChatGPT 3" is weird.



I agreed with this until I read the article. The article title is misleading. The study is on GPT-3 specifically; it's not an analysis or review of LLMs in general.


The press release covers this: "Most other large language models are trained on the output from OpenAI models. There's a lot of weird recycling going on that makes all these models repeat these problems we found in our study," said Dan Brown.

The argument is that this is generally true for LLMs, since most other LLMs partially train on output from GPT-3. You can decide whether you agree with that, but it is an argument for why they think the findings generalize.


That's fair enough as an argument, but I still think "research finds" is too strong for something that wasn't examined in the underlying study.


I think you're right. I hadn't really thought about it, but it does seem a little inappropriate to make such an assumption in a press release.

If some mainstream news site had generalized in this way, I think it would be defensible, but not when it's the university itself. They're supposed to know better.

Thanks for prompting me to reconsider.
