
> Philosophers have always been misinterpreting AI research's goals, which is why nobody in AI has ever paid them any attention, and which is also why they'll never be relevant to anything. Even if they're right, they're not asking questions that anyone cares about.

That's totally unfair. Serious philosophers object to the misapplication of AI research to answer philosophical questions about the mind (not necessarily even by AI researchers), not to AI in general. It's basically the same complaint you have against Searle being invoked against AI. Don't sink to the level of the person you're replying to with mindless tribalism. Your definition of 'anyone' appears to be AI researchers.

Fwiw, although Searle made it onto the undergraduate phil mind courses, he isn't really taken that seriously by contemporary philosophers.




That's fair - I'm biased by having had too many conversations with self-professed philosophers telling me that any push towards AGI is wasted effort because of X, where X just means that it wouldn't satisfy whatever they think is special about humans.

I don't think AI researchers have anything to offer philosophy. The thing is, AI researchers rarely engage at all except when philosophers pop up and tell them that what they're doing is impossible. AI researchers generally don't give a shit about philosophy, whereas there is a ton of noise coming from the other direction.

You may be right in your implicit suggestion that the people bringing up Searle are really just amateurs, though. I don't recall anyone with bona fide credentials in philosophy ever mentioning the guy as anything more than a sad amusement...


Philosophy often borrows examples from other fields in order to provide concrete examples of quite abstract ideas.

Unfortunately, this often gets misunderstood (both by practitioners of those fields and people with an axe to grind) as being critical of the field. The criticism is usually really directed at another philosophical position.

I think the reason the Chinese Room argument gets so much attention is that it's an argument against a position that was popular in the 1970s --- that mental states are identical (as in, strict identity) to classical computational states --- while being easy to understand and criticise. As you say, it assumes its own conclusion.

To be fair to Searle, I should point out that while the Chinese Room argument isn't taken seriously, he did other unrelated work that is still relevant!



