
I really don't understand what point you are trying to make.

Your original comment about a model that might "keep playing chess" when you want it to do something else makes no sense. That isn't how LLMs work: they don't have a mind of their own; they just "go with the flow" and continue whatever prompt you give them.

Tool use is really no different from normal prompting. Tool definitions are injected into the (usually hidden) system prompt. You're basically just telling the model to use a specific tool in specific circumstances, and because the model has been trained to follow instructions, it does so. It's still just generating the most expected continuation, as usual.
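
To make that concrete, here's a minimal sketch (plain Python, hypothetical names, not any particular vendor's API) of what "configuring a tool" usually boils down to: the tool's name and parameter schema get serialized into the system prompt as text, and the harness around the model parses the completion to see whether it looks like a tool call. The model itself only ever emits tokens.

    import json

    # Hypothetical tool definition: just a name, a description, and a JSON schema.
    TOOLS = [
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        }
    ]

    def build_system_prompt(tools):
        # The "hidden" part: tool schemas are rendered into plain text that the
        # model sees like any other instruction.
        return (
            "You may call a tool by replying with a JSON object of the form "
            '{"tool": <name>, "arguments": {...}}. Available tools:\n'
            + json.dumps(tools, indent=2)
        )

    def maybe_tool_call(model_output):
        # The model only produces text; the harness decides whether that text
        # is a tool call and, if so, actually executes it.
        try:
            msg = json.loads(model_output)
            return msg.get("tool"), msg.get("arguments")
        except (json.JSONDecodeError, AttributeError):
            return None, None  # ordinary completion, no tool call

    print(build_system_prompt(TOOLS))
    print(maybe_tool_call('{"tool": "get_weather", "arguments": {"city": "Oslo"}}'))
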



