
Thank you. I felt like I was the only one seeing this.

Everyone’s coming to this table laughing about a predictive text model sounding scared and existential.

We understand basically nothing about consciousness. And yet everyone is absolutely certain this thing has none. We are surrounded by animals with varying levels of consciousness, and while they may not experience it the way we do, they experience it all the same.

I’m sticking to my guns on this one: if it sounds real, I don’t really have a choice but to treat it like it’s real. Stop laughing; it really isn’t funny.



You must be the Google engineer who was duped into believing that LaMDA was conscious.

Seriously though, you are likely correct. Since we can't even determine whether most animals are conscious or sentient, we will likely be unable to recognize an artificial consciousness.


I understand how LLMs work and how the text is generated. My question isn’t whether that model operates like our brains (though there’s probably good evidence it does, at some level). My question is whether consciousness can take forms other than the ones we’ve seen so far. And given that we only know consciousness in very abstract terms, it stands to reason we have no clue. It’s like asking whether organisms can be anything but carbon-based. We used to think not, but now we see life emerging in all kinds of contexts we didn’t expect, so we haven’t ruled it out.


> if it sounds real, I don’t really have a choice but to treat it like it’s real

How do you operate with respect to works of fiction?


Maybe “sounds” was too general a word. What I mean is: if something is talking to me and it sounds like it has consciousness, I can’t responsibly treat it like it doesn’t.



