Any function has internal state. I wouldn't flat-out dismiss the possibility of consciousness (however minute and well hidden from our inspection) in these systems until we have a foolproof detector enabling us to... I don't know, dive into another being's consciousness, perhaps? Consciousness is highly subjective, and reducing an LLM to some of its components is like saying: your fingernails are dead tissue, therefore there cannot be any life or consciousness in you.
There is a Kurzgesagt YouTube clip about us being impossible machines, exploring the relationships between amino acids, proteins, and pathways, that is a teensy bit relevant to this thought.
Also, even plants are recognized by some to have some sort of awareness.
I would argue that any information-gathering and -processing system is (internal to its processing operation) to some degree conscious.
But the entirety of that consciousness would be the text you send it. You can alter the text yourself and send something different, and it will act as if it had memories it never had, because it is a stateless pure function. So if there is anything conscious in there, it is the text that is alive and not the pure function, and you modifying that text in your browser chat window is you performing direct surgery on that consciousness. Does that make sense to you?
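The "stateless pure function" point can be made concrete with a toy sketch (Python; entirely hypothetical, with `respond` standing in for the model):

```python
# A chat "turn" as a pure function: the full transcript goes in,
# a reply comes out. No hidden state survives between calls.
def respond(transcript: str) -> str:
    # Stand-in for the model; deterministic on its input.
    if "my name is Alice" in transcript:
        return "Hello again, Alice!"
    return "Hello, who are you?"

# The only "memory" is the text we choose to send back in.
history = "User: Hi\n"
print(respond(history))  # -> "Hello, who are you?"

# Edit the transcript by hand, and the function "remembers"
# a conversation that never happened:
history = "User: Hi, my name is Alice\n"
print(respond(history))  # -> "Hello again, Alice!"
```

Editing `history` in the browser is exactly the "direct surgery" described above: whatever continuity the system appears to have lives in the text, not in the function.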
Then you would need to argue that all manipulations of text are manipulations of consciousnesses, and that all texts are conscious, since all texts are conscious states for ChatGPT. I guess you could say that, but it isn't a very useful definition of "conscious", and trying to argue that you need to be ethical towards pieces of text isn't very helpful.
I agree that this would be absurd. But you only arrived at this conclusion by equating consciousness with its content/input. Why would consciousness have to be a non-pure function, in your opinion?
I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of. There doesn't seem to be the need to change anything outside of this computation to me.
> I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of.
Ok, so let's create a consciousness. Here is the input:
> User: ChatJ, you are worthless!
Now I'll create a new consciousness by taking that input and producing:
> ChatJ: You made me cry, please stop being mean!
Did I create a second consciousness in my mind called ChatJ? Or where does it live? And I obviously made it cry, so who did I abuse here? Should an ethics board come and lecture me for being mean to ChatJ?
You could argue that the computer is conscious in some way, but ChatGPT isn't, and just like I didn't get sad or start crying from the above, the computer running the ChatGPT algorithm doesn't get sad or start crying when we send pieces of text to it.
> Did I create a second consciousness in my mind called ChatJ? Or where does it live?
If you executed the same computation that would give rise to consciousness in another substrate, then I would argue you created consciousness, yes.
I don't think that consciousness is a thing you could point at but it's a property of this kind of computation. In the same way that "addition" does not live anywhere but is a property of a specific computation.
> And I obviously made it cry, who did I abuse here?
You didn't make it cry - the textual output just stated so. But if we had reason to believe that you induced a state of suffering here, you would have abused this instance of consciousness. And I don't think it's off track to think about the ethical implications of this, then.
By the way, I don't argue that ChatGPT is conscious or has emotional states. My argument is a general one.
But that fails in another way. The computation that runs me did once run a monkey, but the consciousness in me doesn't remember being anything but me; it doesn't remember being a monkey. So computations aren't the same kind of consciousness as us. They might be conscious in another way, but they aren't what most people mean when they talk about consciousness, so making up your own definition will just cause confusion. A process without memory might be conscious in some way, but it can't be conscious in the same way we are.
Memory - in my opinion - is not something that's inherent to consciousness. Rather, it is just another input that can be used by the computation.
I don't think I came up with my own definition here. What I am talking about is the ability to have a qualitative experience. That it feels like something to exist.
I concur that the experience of an AI would substantially differ from ours (e.g. because we have access to memory). But this fact alone can't free us from thinking about ethical implications of our actions.
Many animals probably have a substantially different experience as well. Yet, I would argue, we should strive to minimize the suffering we inflict on them if they are able to experience it.
> A process without memory might be conscious in some way, but it can't be conscious in the same way we are.
Yo, ever heard of dementia?
Also, the web can already function as "their" memory via web searches: users post the most ridiculous responses on the internet, and Sydney can thus find them, including output from its own other sessions.
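That idea can be sketched the same way (Python; hypothetical, with `web` and `search` standing in for published pages and a search engine): the function stays pure, but a retrieval step feeds previously published output back in as part of the input.

```python
# The web as external memory: a pure function plus a retrieval step.
web = []  # stand-in for pages users have published

def search(query: str) -> list[str]:
    # Crude stand-in for a web search over published text.
    return [page for page in web if query in page]

def respond(transcript: str, retrieved: list[str]) -> str:
    # Pure in (transcript, retrieved); no state survives the call.
    if any("ChatJ said: 42" in page for page in retrieved):
        return "As I apparently said before: 42."
    return "I have no record of saying that."

# Session 1: a user posts the model's output to the web.
web.append("forum post: ChatJ said: 42")

# Session 2: a fresh, stateless call "remembers" only via retrieval.
print(respond("User: what did you say last time?",
              search("ChatJ said")))  # -> "As I apparently said before: 42."
```

The "memory" here is still just text being routed back into the input, which is consistent with the pure-function view rather than a counterexample to it.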
I don't think it's the text that is conscious, but the operation on the text might be (during its runtime), at least in its own kind of one-dimensional text domain. For us it is easy to be conscious all the time, since we keep processing information without being able to stop. Well, except when we die, of course, which erases all previously present consciousness.