
No, I don't - I'm saying that tool use is no panacea, and the availability of a chess tool isn't going to help if what YOU need is a smarter model.



Sure, but how do you train a smarter model that can use tools, without first having a less smart model that can use tools? This is just part of the progress. I don't think anyone claims this is the endgame.


I really don't understand what point you are trying to make.

Your original comment about a model that might "keep playing chess" when you want it to do something else makes no sense. This isn't how LLMs work - they don't have a mind of their own, but rather just "go with the flow" and continue whatever prompt you give them.

Tool use is really no different from normal prompting. Tool definitions are typically injected into the hidden system prompt: you're basically telling the model to use a specific tool in specific circumstances, and since the model has been trained to follow instructions, it does so. The tool call is just the model generating the most expected continuation, as normal.
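
To make that concrete, here's a minimal sketch of a tool-use loop in Python. None of the names come from any particular vendor API - it's an illustration under the assumption that tools are described as plain text in the prompt, the "tool call" comes back as ordinary generated text, and the calling code (not the model) executes anything:

    import json

    # Hypothetical tool registry: plain Python functions that the *caller*
    # executes. The model never runs anything; it only emits text that
    # describes a call.
    def best_chess_move(fen: str) -> str:
        # Stand-in for a real chess engine; returns a fixed move for illustration.
        return "e2e4"

    TOOLS = {
        "best_chess_move": {
            "fn": best_chess_move,
            "description": "Return the best move for a chess position given as FEN.",
            "parameters": {"fen": "string"},
        },
    }

    def render_system_prompt(tools):
        # Serialize tool descriptions into plain text prepended to the
        # conversation -- from the model's point of view it's just more prompt.
        lines = ['You may call a tool by replying with JSON like '
                 '{"tool": <name>, "arguments": {...}}. Available tools:']
        for name, spec in tools.items():
            lines.append("- %s(%s): %s" % (name, spec["parameters"], spec["description"]))
        return "\n".join(lines)

    def fake_llm(prompt):
        # Placeholder for a real completion endpoint. A real model generates
        # the most likely continuation of the prompt; this stub hard-codes two
        # plausible continuations so the loop below runs end to end.
        if "Tool result:" in prompt:
            return "A solid opening move is e2e4."
        return json.dumps({"tool": "best_chess_move", "arguments": {"fen": "startpos"}})

    def run_turn(user_message):
        prompt = render_system_prompt(TOOLS) + "\n\nUser: " + user_message + "\nAssistant:"
        completion = fake_llm(prompt)

        # The caller, not the model, detects the tool call, executes it, and
        # feeds the result back in as yet more prompt text.
        try:
            call = json.loads(completion)
            tool = TOOLS[call["tool"]]
            result = tool["fn"](**call["arguments"])
            prompt += completion + "\nTool result: " + result + "\nAssistant:"
            return fake_llm(prompt)   # the model just continues, now with the result in context
        except (json.JSONDecodeError, KeyError):
            return completion         # ordinary text reply, no tool involved

    print(run_turn("What's a good opening move?"))

Real frameworks dress this up with structured schemas and dedicated message roles, but the mechanics are the same continuation-prediction loop.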



