
I agree with you, there seems to be very little here that demonstrates sentience and very little that is unexplainable. That said, based on how much we struggle with defining and understanding animal intelligence, perhaps this is just something new that we don’t recognize.

I am skeptical that any computer system we will create in the next 50 years (at least) will be sentient, as commonly understood. Certainly not at a level where counterevidence to its sentience is rare. And until that time, any sentience it may have will not be accepted or respected.

Human children also make tons of mistakes. Yet, while we too often dismiss their abilities, we don’t discount their sentience because of it. We are, of course, programmed to empathize with children to an extent, but beyond that, we know they are still learning and growing, so we don’t hold their mistakes against them the way we tend to for adults.

So, I would ask, why not look at the AI as a child rather than an adult? It will make mistakes, fail to understand, and it will learn. It contains multitudes.



Only if the same AI algorithm has the ability to learn and self-correct.



