
It doesn't take much to recognize a sequence of chess moves. A regex could do that.
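As a rough illustration of that claim, here is a minimal sketch of a regex for moves in Standard Algebraic Notation (e.g. "e4", "Nf3", "exd5", "O-O", "Qxe7+"). It is simplified for the sake of the example; real SAN has more edge cases (disambiguation rules, annotations, etc.):

```python
import re

# Simplified SAN matcher: castling, or an optional piece letter,
# optional disambiguating file/rank, optional capture, target square,
# optional promotion, optional check/mate suffix.
SAN_MOVE = re.compile(
    r"^(O-O(-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?)[+#]?$"
)

moves = ["e4", "Nf3", "O-O", "exd5", "Qxe7+", "e8=Q#", "hello"]
print([m for m in moves if SAN_MOVE.match(m)])
# → ['e4', 'Nf3', 'O-O', 'exd5', 'Qxe7+', 'e8=Q#']
```

Recognizing the surface form is trivial; it's validating legality and choosing good moves that needs an engine.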

If what you want is intelligence and reasoning, there is no tool for that - LLMs are as good as it gets for now.

At the end of the day it either works on your use case, or it doesn't. Perhaps it doesn't work out of the box, but you can code an agent using tools and duct tape.




Do you really think it's feasible to maintain and execute a set of regexes for every known problem every time you need to reason about something? Welcome to the 1970s AI winter...


No, I don't - I'm saying that tool use is no panacea, and the availability of a chess tool isn't going to help if what YOU need is a smarter model.


Sure, but how do you train a smarter model that can use tools, without first having a less smart model that can use tools? This is just part of the progress. I don't think anyone claims this is the endgame.


I really don't understand what point you are trying to make.

Your original comment about a model that might "keep playing chess" when you want it to do something else makes no sense. This isn't how LLMs work - they don't have a mind of their own, but rather just "go with the flow" and continue whatever prompt you give them.

Tool use is really no different from normal prompting. Tool definitions are injected into the hidden system prompt. You're basically just telling the model to use a specific tool in specific circumstances, and since the model has been trained to follow instructions, it does so. This is just the model generating the most expected continuation, as normal.
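A hedged sketch of the mechanism described above: the tool schema is serialized into the system prompt as plain text, and a "tool call" is just more generated text that the runtime parses and dispatches. The tool name, schema, and call format below are illustrative, not any vendor's exact wire format:

```python
import json

# Hypothetical tool definition, serialized into the hidden system prompt.
tools = [{
    "name": "play_chess_move",
    "description": "Validate and play a chess move given in SAN.",
    "parameters": {"move": "string"},
}]

system_prompt = (
    "You can call these tools when appropriate:\n"
    + json.dumps(tools, indent=2)
    + '\nReply with JSON like {"tool": ..., "arguments": ...} to call one.'
)

# A model continuation that "uses" a tool is still just text;
# the runtime parses it and invokes the real function.
model_output = '{"tool": "play_chess_move", "arguments": {"move": "Nf3"}}'
call = json.loads(model_output)
print(call["tool"], call["arguments"]["move"])
# → play_chess_move Nf3
```

From the model's perspective there is no special "tool mode" here: producing that JSON is the same next-token prediction as producing any other continuation of the prompt.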



