How can we be sure humans aren't just a sufficiently advanced version of such a Chinese Room, with a few more hoops and sanity checks along the path (idk, an inner monologue that runs our outputs through our own sanity checkers?), so that our outputs come out saner?
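
A minimal sketch of that generate-then-sanity-check loop, in Python. Everything here is hypothetical and illustrative: generate() stands in for any fluent-but-unreliable text generator, and sanity_check() for the inner critic; neither is a real API.

    import random

    # Toy "inner monologue as sanity filter": draft silently, voice only
    # what survives the checker. All names are made up for illustration.

    CANDIDATES = [
        "Colorless green ideas sleep furiously.",  # grammatical, incoherent
        "Paris is the capital of France.",         # grammatical, coherent
    ]

    def generate(prompt: str) -> str:
        """Produce a fluent draft; fluency says nothing about coherence."""
        return random.choice(CANDIDATES)

    def sanity_check(text: str) -> bool:
        """Stand-in inner critic: here, a trivial whitelist of coherent claims."""
        return "capital" in text

    def speak(prompt: str, max_tries: int = 5) -> str:
        for _ in range(max_tries):
            draft = generate(prompt)
            if sanity_check(draft):
                return draft
        return "I'm not sure."  # nothing coherent survived the checks

    print(speak("Tell me something true."))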

I mean, some delusional humans behave just like this machine, generating statements that are grammatically sound but lack any logical coherence.

We know this machine doesn't "think" in the sense we believe "true" thinking works - but do we know that we do?

Yeah, it seems like if a sufficiently large language model gets you something that appears rational, then adding a "facts about the world" model and some of the other built-in "models" the human brain encodes might start to get you close to actual intelligence. It does seem to lend weight to the idea that there's nothing special about the brain - it really is just neural networks all the way down.
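
The same gating idea extends to the "stack of models" this comment describes. A hypothetical sketch, again in Python: a fluent generator composed with a list of specialized checks (a toy world-facts model, a toy arithmetic model). None of these names correspond to a real library; the list-of-checks shape just makes each "model" independently swappable.

    from typing import Callable

    Check = Callable[[str], bool]

    def knows_world_facts(text: str) -> bool:
        """Toy 'facts about the world' model: veto one known falsehood."""
        return "the sun orbits the earth" not in text.lower()

    def arithmetic_sane(text: str) -> bool:
        """Toy built-in arithmetic model: veto one known bad sum."""
        return "2 + 2 = 5" not in text

    def compose(generate: Callable[[str], str],
                checks: list[Check]) -> Callable[[str], str]:
        # The generator supplies fluency; each check supplies one kind of sanity.
        def answer(prompt: str, max_tries: int = 10) -> str:
            for _ in range(max_tries):
                draft = generate(prompt)
                if all(check(draft) for check in checks):
                    return draft
            return "no draft survived all the checks"
        return answer

    # Usage (hypothetical): answer = compose(some_llm, [knows_world_facts, arithmetic_sane])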
