
Yup, I find LLMs are fantastic for surfacing all kinds of "middle of the road" information that is common and well-documented. So, for getting up to speed or extracting particular answers about a field of knowledge with which I'm unfamiliar, LLMs are wonderfully helpful. Even using later ChatGPT versions for tech support on software often works very well.

And the conversational style makes it all look like good reasoning.

But as soon as the conversation wanders off the highways into little-used areas of knowledge (such as wiring for a CNC machine controller board, as opposed to a software package with millions of users' forum posts), even pre-stuffing the context with heaps of specifically relevant documents rapidly reveals that there is zero reasoning happening.

Similarly, the occasional excursions into completely the wrong field, even with a detailed prompt, show that the LLM really does not have a clue what it is 'reasoning' about. Even with thinking, multiple steps, etc., the 'stochastic parrot' moniker remains applicable — a very damn smart parrot, but still.
