
Rebuttal: https://aisnakeoil.substack.com/p/a-misleading-open-letter-a...

Summary: misinfo, labor impact, and safety are real dangers of LLMs. But in each case the letter invokes speculative, futuristic risks, ignoring the version of each problem that’s already harming people. It distracts from the real issues and makes it harder to address them.

The containment mindset may have worked for nuclear risk and cloning but is not a good fit for generative AI. Further locking down models only benefits the companies that the letter seeks to regulate.

Besides, a big shift in the last 6 months is that model size is no longer the primary driver of abilities: it's augmentation (LangChain and the like), and GPT-3-class models can now run on iPhones. The letter ignores these developments, so a moratorium is ineffective at best and counterproductive at worst.
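To make the augmentation point concrete: frameworks like LangChain boost a model's abilities not by scaling it up but by wiring it into a tool-dispatch loop. Here's a minimal sketch of that pattern in plain Python, with a hard-coded stub standing in for the model (every name and the TOOL:/FINAL: protocol are hypothetical, not LangChain's actual API):

```python
# Minimal sketch of "augmentation": a stub stands in for an LLM, and a
# dispatch loop lets it call external tools. Frameworks like LangChain
# generalize this pattern; the names and protocol here are made up.

def fake_llm(prompt: str) -> str:
    """Stand-in for a model: decides whether to call a tool."""
    if "result" in prompt:
        return "FINAL:" + prompt          # tool output already in context
    if "355 * 113" in prompt:
        return "TOOL:calculator:355 * 113"  # ask the loop to run a tool
    return "FINAL:" + prompt

# Registry of tools the loop can dispatch to.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(prompt: str) -> str:
    """Route tool calls back through the model until a final answer."""
    reply = fake_llm(prompt)
    if reply.startswith("TOOL:"):
        _, tool_name, arg = reply.split(":", 2)
        result = TOOLS[tool_name](arg)
        # Feed the tool's result back in and let the model finish.
        return run_agent(f"The result of {arg} is {result}")
    return reply.removeprefix("FINAL:")

print(run_agent("What is 355 * 113?"))
# -> The result of 355 * 113 is 40115
```

The stub "model" here can't multiply, yet the augmented loop answers correctly, which is the sense in which capability comes from the scaffolding rather than the model's size.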


