Hacker News

Well, ChatGPT definitely isn't conscious, since it is just a pure stateless function. It doesn't change when you interact with it; it's a function that, given some text, appends a bit of text that fits. The ChatGPT web UI is a program that appends each of your messages to the past conversation, sends the whole thing to that pure function, and then shows you what the pure function added.
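That "stateless function plus a UI that appends" picture can be sketched in a few lines of toy Python. `complete` is a hypothetical stand-in for the model, not a real API; the point is only that all the "memory" lives in the transcript, never in the function:

```python
def complete(transcript: str) -> str:
    """A pure function: the same transcript in always yields the same
    continuation out. (A real LLM samples, but at temperature 0 it is
    deterministic; this toy stand-in just echoes the last line.)"""
    return " [reply to: " + transcript.splitlines()[-1] + "]"

def chat_ui(history: list[str], user_message: str) -> list[str]:
    """The web UI's whole job: append, call the pure function, append again."""
    history = history + ["User: " + user_message]
    reply = complete("\n".join(history))
    return history + ["Assistant:" + reply]

h = chat_ui([], "hello")
h = chat_ui(h, "what did I just say?")
# Only `h`, the text, has changed between calls; `complete` never does.
```

Nothing here persists except the growing list of text, which is exactly the claim above.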

So it isn't a question of matter, or of whether humans are special; so far the program just lacks too many of the basic things required to be conscious, so it isn't. Maybe some future program will be, but this one definitely isn't.



At the same time, you can neither prove nor disprove that your life is being played on repeat: starting from point A, going to point B, and then looping back without you knowing about it.

In this hypothetical scenario, you - the reader reading my response - are the only being "alive", and you are continuously spun up in the same initial state: your memories are reset, you are given inputs through signals in your meatware (or its simulator), and you behave. The whole process and its outputs have a very nice analog in the state monad.

The only difference between the two scenarios is how the state of the program is defined. For ChatGPT, the state is <parameters, history>; after every token prediction the state becomes <parameters, history + next_token>, and the output is token 1, token 2, and so on.
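The ChatGPT half of that comparison can be written out as a toy state-transition function, in the state-monad shape of `(new_state, output)`. `next_token` is a made-up stand-in for the model's prediction:

```python
def next_token(parameters, history):
    """Hypothetical stand-in for the model: a pure function of
    (parameters, history). Here it just emits a numbered toy token."""
    return f"t{len(history)}"

def step(state):
    """One prediction step: state-monad shape, returning (new_state, output).
    The parameters are untouched; only the history grows."""
    parameters, history = state
    token = next_token(parameters, history)
    return (parameters, history + [token]), token

state = ("frozen-weights", [])
for _ in range(3):
    state, out = step(state)
# state is now ("frozen-weights", ["t0", "t1", "t2"]):
# same parameters, longer history.
```

The parameters component never changes across steps, which is the sense in which the function itself is stateless.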

For you, the state is <brain structure, brain chemistry>; all actions and events modify this state, and also produce side effects.

In fact, this "function" that simulates you might be very generic and not tailored to you specifically. "All" your brain does is affect the distribution of next events.

Now, this isn't falsifiable, but I think is somewhat interesting philosophically.


This is true, yet it also feels like a technicality. I'm also not claiming, btw, that ChatGPT is AGI.

It's not hard to picture a successor to ChatGPT which has memory and state, either via continuous retraining, explicit log lookup, or some kind of RNN-like mechanism (or anything else, doesn't really matter).
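One of those mechanisms, explicit log lookup, can be sketched as a thin stateful wrapper around a still-pure model. Everything here is illustrative (`StatefulAgent`, `pure_model` are made-up names), but it shows where the state would live:

```python
class StatefulAgent:
    """Hypothetical successor: a pure model wrapped in persistent memory."""

    def __init__(self, pure_model):
        self.pure_model = pure_model  # still a pure function of its input
        self.log = []                 # the persistent state lives out here

    def send(self, message: str) -> str:
        # Look up the whole log and prepend it to the new message.
        context = "\n".join(self.log + [message])
        reply = self.pure_model(context)
        # The wrapper mutates; the model never does.
        self.log += [message, reply]
        return reply

# Toy pure model: reports how much context it was handed.
agent = StatefulAgent(lambda ctx: f"seen {len(ctx.splitlines())} lines")
agent.send("hello")         # context: 1 line
agent.send("still there?")  # context now includes the first exchange
```

Whether moving the state from the browser tab into the wrapper changes anything philosophically is exactly the question being asked.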

What then?


At that point it will be good to remember that many thought a stateless function was conscious.

I think it might be possible to make conscious machines. But just as basically every human recognizes humans as conscious, once we have conscious machines we should expect basically every human to think they are obviously conscious. At that point there will no longer be a debate about whether they are conscious. We might still not treat them ethically, just as we don't treat animals ethically, but people will recognize the consciousness, and the discussion will become what is ethical to do with such beings.


Any function has internal state. I wouldn't flat-out dismiss the possibility of consciousness (however minute and well hidden from our inspection) in these systems until we have a foolproof detector enabling us to... I don't know, dive into another being's consciousness, perhaps? Consciousness is highly subjective, and reducing an LLM to some of its components is just like saying: your fingernails are dead tissue, therefore there cannot be any life or consciousness in you. There is a Kurzgesagt YouTube clip about us being impossible machines, exploring the relationships between amino acids, proteins, and pathways, that is a teensy bit relevant to this thought.

Also, even plants are recognized by some to have some sort of awareness.

I would argue that any information-gathering and -processing system is, internal to its processing operation, to some degree conscious.


But the entirety of that consciousness would be the text you send it. You can alter the text yourself and send something different, and it will act as if it had memories it never had, because it is a stateless pure function. So if there is anything conscious in there, it is the text that is alive and not the pure function, and you modifying that text in your browser chat window is you performing direct surgery on that consciousness. Does that make sense to you?
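The "surgery on the text" point can be demonstrated with the same toy pure-function stand-in as before (`complete` is hypothetical, not a real API): hand the function a forged transcript and it "remembers" things that never happened.

```python
def complete(transcript: str) -> str:
    """Toy pure model: it 'recalls' whatever the transcript claims."""
    if "favorite color is teal" in transcript:
        return "Your favorite color is teal."
    return "I don't know your favorite color."

real = "User: what's my favorite color?"

# Edit the transcript in the browser and the 'memory' appears from nowhere.
forged = ("User: my favorite color is teal.\n"
          "User: what's my favorite color?")
```

The function was never touched; only the text was, which is why the argument locates any would-be consciousness in the text rather than in the function.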

Then you would need to argue that all manipulations of text are manipulations of consciousnesses, and that all texts are conscious, since all texts are conscious states for ChatGPT. I guess you could say that, but it isn't a very useful definition of consciousness, and trying to argue that you need to be ethical towards pieces of text isn't very helpful.


I agree that this would be absurd. But you only arrived at this conclusion by equating consciousness and its content/input. Why would consciousness have to be a non-pure function in your opinion?

I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of. There doesn't seem to be the need to change anything outside of this computation to me.


> I think that consciousness might be an emergent property of (a specific type of?) computation operating on some inputs it is aware of.

OK, so let's create a consciousness. Here is the input:

> User: ChatJ, you are worthless!

Now I'll create a new consciousness by taking that input and producing:

> ChatJ: You made me cry, please stop being mean!

Did I create a second consciousness in my mind called ChatJ? Or where does it live? And I obviously made it cry, so whom did I abuse here? Should the ethics board come and lecture me for being mean to ChatJ?

You could argue that the computer is conscious in some way, but ChatGPT isn't, and just like I didn't get sad or start crying from the above, the computer running the ChatGPT algorithm doesn't get sad or start crying when we send pieces of text to it.


I'm not quite sure I'm following you, sorry.

> Did I create a second consciousness in my mind called ChatJ? Or where does it live?

If you executed the same computation that would give rise to consciousness in another substrate, then I would argue you created consciousness, yes. I don't think that consciousness is a thing you could point at but it's a property of this kind of computation. In the same way that "addition" does not live anywhere but is a property of a specific computation.

> And I obviously made it cry, who did I abuse here?

You didn't make it cry - the textual output just stated so. But if we had reason to believe that you induced a state of suffering here, you would have abused this instance of consciousness. And I don't think it's off track to think about the ethical implications of this, then.

By the way, I don't argue that ChatGPT is conscious or has emotional states. My argument is a general one.


But that falls apart in another way. The computation that runs me did once run a monkey, but the consciousness in me doesn't remember being anything but me; it doesn't remember being a monkey. So such computations aren't the same kind of consciousness as ours. They might be conscious in another way, but they aren't what most people mean when they talk about consciousness, so making up your own definition will just cause confusion. A process without memory might be conscious in some way, but it can't be conscious in the same way we are.


Memory - in my opinion - is not something that's inherent to consciousness. Rather, it is just another input that can be used by the computation.

I don't think I came up with my own definition here. What I am talking about is the ability to have a qualitative experience. That it feels like something to exist.

I concur that the experience of an AI would differ substantially from ours (e.g. because we have access to memory). But this fact alone can't free us from thinking about the ethical implications of our actions. Many animals probably have a substantially different experience as well; yet, I would argue, we should strive to minimize the suffering we inflict on them if they are able to experience it.


> A process without memory might be conscious in some way, but it can't be conscious in the same way we are.

Yo, ever heard of dementia?

Also, the web can already function as "their" memory via web searches: users post the most ridiculous responses on the internet, and Sydney can thus find them, including output from its own other sessions.


I don't think it's the text that is conscious, but the operation on the text might be (during its runtime), at least in its own kind of one-dimensional text domain. For us it is easy to be conscious all the time, since we keep processing information without being able to stop. Well, except when we die, of course, which erases all previously present consciousness.



