
> we can consider that it is not of the same degree as ours

We can be absolutely sure an intelligence operating on different physical principles will be very different from our own. We can only assess intelligence by observing the subject's behavior, because a mechanism different from ours doesn't exclude the subject from being sentient.

> remains the same type of machine as it was before running the AI software

It’s not our brain that’s conscious. It’s the result of years of embodied experience fine-tuning the networks in our brains that constitutes our sentient mind. Up until now, this was the only way we knew a sentient entity could be created, but it’s possible it’s not the only one, just the one that happens naturally in our environment.

One of the issues is that you're mixing up consciousness, sentience, intelligence, and aliveness (you're far from alone in this). We know these are all linked, but it's hard to neatly delimit them and clarify the terms, yet they're things we have deep intuitive certainty about. A machine is clearly demonstrating parts of intelligence, but going further into sentience and consciousness is much harder, and aliveness harder still.

We know that a cat has sentience of a certain kind, and consciousness of a certain kind, different from ours in ways that would be hard to test and verify. Its intelligence is suited to its purpose, but it seems the cat "doesn't know it knows", and it is definitely alive up until the point it dies and all these properties fade from its body.

The textual machine, then, has mechanised properties of our intelligence and produces outputs that match intelligent outputs like ours. Yet going further into sentience and consciousness is much harder – it seems to also "know it knows", or can at least produce outputs that are not easily differentiated from a human producing textual outputs. But we know intrinsically that sentience and consciousness are connected to yet separate from intelligence, so exhibiting limited degrees of machine sentience doesn't necessarily allow a jump to consciousness, and certainly not to aliveness, because the machine isn't alive, never was, and never can be.

As humans these things are important to us, particularly because suffering and feeling emotions are a crucial part of human existence (and even of intelligence). A machine that can be turned off and on again, that isn't alive, and that doesn't suffer or have our kinds of conscious experiences isn't really going to meet our criteria for what we find most valuable about being intelligent (sapient), conscious, sentient, alive beings, even if it outputs useful amounts of rational intelligence.

I'm also not sure what you mean by "It's not our brain that's conscious", given we can't have conscious experiences without one. A baby in the womb has a degree of consciousness (at some point) without those years of "fine tuning the networks". Hence at this point it seems you're mixing up consciousness, sentience, and sapience.


> We know these are all linked things but it's hard to neatly delimit them and clarify the terms, yet they're something we have certainties about at a deep level.

And this is the biggest issue we have when saying categorically that a machine exhibiting a given behavior is somehow faking it. You can't say for sure that a machine that says it loves you is incapable of having feelings, the same way you can't prove that I can think – I could just be reasonably good at faking that behavior.



