Hacker News

> to ensure it never does something

But there are many systems for which you cannot predict or control the behavior with just a few experiments, because they are simply probabilistic. Isn't that also the case with LLMs? If not, why?
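The point about a few experiments being insufficient can be made concrete with a little probability. As a hedged sketch (the 1% failure rate is a made-up number, not a claim about any particular model): if some unwanted output occurs independently with probability p per query, the chance that n trials all look fine is (1 - p)^n, which stays large even for a fair number of trials.

```python
# Sketch: why a handful of clean trials cannot rule out rare behavior
# in a probabilistic system. The failure rate p here is hypothetical.

def p_all_clean(p: float, n: int) -> float:
    """Probability that n independent trials all show no failure."""
    return (1 - p) ** n

# With a 1% per-query failure rate, 10 clean trials are the likely outcome:
print(round(p_all_clean(0.01, 10), 3))   # -> 0.904
# Even 100 clean trials leave a sizable chance the flaw is real:
print(round(p_all_clean(0.01, 100), 3))  # -> 0.366
```

So to observe a 1%-probability behavior with high confidence, one needs on the order of hundreds of trials, which is the commenter's point about experiments on probabilistic systems.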




