Hacker News

The situation you describe is exactly the "Chinese room" argument. I don't want to get too far into the weeds here, but the DALLE / Stable Diffusion models are cool because they do what you ask, even if they do so imperfectly. This model from Facebook cannot accurately answer a single thing I've asked it.


I often hear the claim that "AI does not really understand," but when you can ask it to draw an armchair in the shape of an avocado or an astronaut riding a horse on the Moon, and it does it (!!?), it's not like the "Chinese room" had any specific rules on the books for those questions. What more do people need to be convinced?

AIs make many mistakes; humans make many mistakes too. AIs fail to draw hands; so do most humans, with few exceptions. Ask a human to draw a bicycle from memory. https://road.cc/content/blog/90885-science-cycology-can-you-...

Some of us believe in conspiracies rather than science, yet as a species we still think ourselves intelligent.



