
This was my take.

I think anyone claiming that detecting LLM-generated text is easy is flat-out lying to themselves, or has only spent a few tokens and very little time playing with it.

Take semi-decent output, give it a single proofread and a few edits... and I don't fucking believe anyone who says they'll detect it. They will absolutely catch some of the most egregious examples, but assuming that's all of it is borderline willfully naive at this point.



