
> >> Each of us could unknowingly interact with multiple LLMs every day whose only purpose is to manipulate us, with a never-before-seen success rate at a lower cost than ever.

> You would build resistance pretty quickly.

That is adorably naive. The current thrust in LLM training is toward making their outputs indistinguishable from human writing, for any topic, point of view, or style.




But that has already happened. There is no reliable way to distinguish human-written text from machine-written text.



