
I'm really curious: are you able to demonstrate reasoning, non-reasoning, and the illusion of reasoning in a toy example? I'd like to see what each looks like.



Have you met someone who is full of bullshit? They sound REALLY convincing, except that if you know anything about the subject, their statements are just word salad.


Have you met someone who's good at bullshitting their way out of a tough spot? There may be a word salad involved, but preparing it takes some serious skill and brainpower, and perhaps a decent high-level understanding of a domain. At some point, the word salad stops being a chain of words, and becomes a product of strong reasoning - reasoning on the go, aimed at navigating a sticky situation, but reasoning nonetheless.


The finest bullshitter I knew had serious skill and brainpower, and he BS'd about stuff he was an expert in. It was really a sort of party trick - he could leave his peer experts speechless (more likely, rolling on the floor laughing).

His output was indeed word salad, but he was eloquent. His bullshit wasn't fallacious reasoning; it didn't even have the appearance of reasoning at all. He was just stringing together words and concepts that sounded plausible. It was funny, because his audience knew (and were supposed to know) that it was nonsense.

LLMs are the same, except they're supposed to pretend that it isn't nonsense.


That would be a good test - and by that test, ChatGPT is not reasoning, since it can't get out of sticky situations.

Yeah, I think you've got a good example that improves the analogy.


Are you able to give some examples? I'd like to know what it looks like w.r.t. LLMs.



