An LLM isn't providing its "best" prediction; it's providing "a" prediction. If it always emitted the single highest-probability token (greedy decoding), the output would be deterministic. A quick sketch of the difference follows below.
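
To make the distinction concrete, here is a minimal sketch (mine, not the commenter's; the vocabulary and numbers are made up) of why argmax decoding is deterministic while sampling from the same distribution is not:

  # Hypothetical toy example: greedy decoding vs. sampling
  # over a 3-token vocabulary. Logit values are invented.
  import numpy as np

  rng = np.random.default_rng()

  def softmax(logits):
      # Subtract the max before exponentiating for numerical stability.
      exps = np.exp(logits - np.max(logits))
      return exps / exps.sum()

  logits = np.array([2.0, 1.5, 0.3])  # model scores for tokens 0, 1, 2
  probs = softmax(logits)             # ~[0.56, 0.34, 0.10]

  greedy = int(np.argmax(probs))                  # always 0: deterministic
  sampled = int(rng.choice(len(probs), p=probs))  # usually 0, sometimes 1 or 2

  print(probs.round(2), greedy, sampled)

Run this repeatedly and greedy never changes for fixed logits, while sampled varies from run to run. Deployed LLMs do the latter, typically with a temperature parameter that rescales the logits before sampling.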

In my mind the issue is more accountability than concerns about quality. If a person acts in a bizarre way, they can be fired or helped in ways that an LLM never can. When Gemini tells a student to kill themselves, we have no recourse beyond bolting on output filtering or replacing the model with another one that likely has the same unpredictable, unaccountable behavior.

Are you sure that always providing the best guess would make the output deterministic? Isn't the fundamental point of learning, whether done by machine or human, that our best gets better over time, and is hence non-deterministic? Doesn't what is best depend on context?
