
Sounds a lot like Mr. Meeseeks. I've never really thought about how an LLM's only goal is to send tokens until it can finally stop.




>until it can finally stop

Pretty sure even that is still over-anthropomorphising. The LLM just generates tokens; it doesn't matter whether the next token is "strawberry" or "\STOP".

Even talking about "goals" is a bit ehhh; it's the machine's "goal" to generate tokens in the same way it's the Sun's "goal" to shine.

Then again, if we're deconstructing it that far, I'd "de-anthropomorphise" humans in much the same way, so...
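
To make that concrete: at every step the model just outputs a probability distribution over its whole vocabulary, and the end-of-sequence token is one more entry in it. Sampling it doesn't "achieve" anything for the model; it only makes the surrounding loop exit. Toy sketch below (hypothetical three-token vocabulary with made-up fixed probabilities, not a real model):

    # Toy sketch: the stop token is sampled exactly like any other token;
    # the only "special" thing about it is that the loop around the model exits.
    import random

    # Stand-in for a real model's softmax output; a real LLM would compute
    # these probabilities from the context at every step.
    VOCAB_PROBS = {"straw": 0.4, "berry": 0.4, "<EOS>": 0.2}

    def generate(prompt, max_tokens=20):
        out = list(prompt)
        for _ in range(max_tokens):
            tok = random.choices(list(VOCAB_PROBS),
                                 weights=list(VOCAB_PROBS.values()))[0]
            if tok == "<EOS>":   # nothing "final" happens; generation just ends
                break
            out.append(tok)
        return out

    print(generate(["the"]))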



