
> OpenAI says it has evidence DeepSeek used its model to train competitor.

> The San Francisco-based ChatGPT maker told the Financial Times it had seen some evidence of “distillation”, which it suspects to be from DeepSeek.

> ...

> OpenAI declined to comment further or provide details of its evidence. Its terms of service state users cannot “copy” any of its services or “use output to develop models that compete with OpenAI”.

OAI should share the evidence with the public, or accept the possibility that its case is not as strong as it's claiming here.



Also, there are so many innovations in their papers (DeepSeek Math, DeepSeek V2/V3, R1) that I honestly wouldn't even care. They figured out a way to train on only 2048 H800s when big companies are buying them in the hundreds of thousands. They created a new RL algorithm. They improved MoE. They improved the KV cache. They built a super-efficient training framework.
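For anyone curious about the RL bit: the algorithm introduced in the DeepSeek Math paper is GRPO, whose core trick is computing advantages by normalizing rewards within a group of sampled completions instead of training a separate value (critic) network. A minimal sketch of just that advantage step (function name and structure are my own, not from the paper):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: normalize each sampled completion's
    reward against the mean/std of its own group. No critic needed."""
    mean = statistics.mean(rewards)
    std = statistics.stdev(rewards) or 1.0  # guard against all-equal rewards
    return [(r - mean) / std for r in rewards]
```

The point is that the baseline comes for free from the group of samples, which is a big part of why the training loop is cheaper than PPO-style setups with a learned value model.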



