
That topic (ship's computer vs. Data) is actually discussed at length in-universe during The Measure of a Man. [0] The court posits that the three requirements for sentient life are intelligence, self-awareness, and consciousness. Data is intelligent and self-aware, but there is no good measure for consciousness.

[0] https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...



Using science fiction as a basis for philosophy isn't wise, especially TNG, which has a very obvious flavor of "optimistic human exceptionalism" (contrast with DS9, where I think Eddington even makes this point).


Doesn't ChatGPT fulfill these criteria too?


In a Chinese room sort of way, sure. The problem is that we understand too well how it works, so any semblance of consciousness or self-awareness we know to be simple text generation.
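
To be concrete about what "text generation" means mechanically, it's a loop of next-token prediction. A minimal sketch of that loop (the model name and greedy decoding are purely illustrative, not how ChatGPT is actually served):

    # Sketch of what "text generation" means mechanically: score every possible
    # next token, append the most likely one, repeat. Model choice is illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # assumed small stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("Am I conscious?", return_tensors="pt").input_ids
    for _ in range(20):                                  # generate 20 tokens
        logits = model(ids).logits                       # scores for every vocabulary token
        next_id = logits[0, -1].argmax()                 # greedy: take the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # append and go again

    print(tokenizer.decode(ids[0]))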


The word "simple" is doing quite a massive amount of work there.


Hm that's fair.

Retroactively I'd say "obscenely resource intensive text generation." Meanwhile for me to generate text all I need is water and a banana.


That's not so clear-cut. I have a half-decent local LLM running on my phone at significantly lower power consumption than a human body.

Again, there's no real measure for consciousness, so it's difficult to say. If you ask me, frontier models meet the definition of intelligence, but not the definition of self-awareness, so they aren't sentient regardless of whether they are conscious. This is a pretty fundamental philosophical question that's been considered for centuries, outside of the context of AI.


ChatGPT knows about the ChatGPT persona. Much like I know the persona I play in society and at home. I don't know what the "core" me is like at all. I don't have access to it. It seems like a void. A weird eye. No character, no opinions.

The persona, I know very well.


To the extent it "knows" (using that word loosely) about the persona, it's deriving that information from its system prompt. The model itself has no awareness.

The sooner we stop anthropomorphizing AI models, the better. It's like talking about how a database is sentient because it has extremely precise memory and recall. I understand the appeal, but LLMs are incredibly interesting and useful tech and I think that treating them as sentient beings interferes with our ability to recognize their limits and thereby fully harness their capability.
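
Concretely, most of that "self-knowledge" arrives as ordinary input text at inference time. A minimal sketch with the OpenAI Python client (the persona wording here is invented for illustration, not the actual ChatGPT system prompt):

    # Sketch: the "persona" the model appears to know about is largely just
    # text handed to it when it runs. The system prompt below is made up.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are ChatGPT, a helpful assistant."},
            {"role": "user", "content": "What are you?"},
        ],
    )
    print(response.choices[0].message.content)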


Not the parent, but I understood it as them saying that the model has as part of its training data many conversations that older versions of itself had with people, and many opinion pieces about it. In that sense, ChatGPT learns about itself by analyzing how its "younger self" behaved and was received, not entirely unlike how a human persona/ego is (at least in part) dependent on such historical data.


I mean it in the way an Arduino knows a gas leak is happening. Similarly, like the Arduino, I know about the persona that I perform. I'm not anthropomorphizing the Arduino. If anything, I'm mechamorphizing me.
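
That kind of "knowing" is just a threshold check on a sensor reading. A toy sketch of the idea (in Python rather than an actual Arduino sketch, with the sensor read stubbed out and the threshold arbitrary):

    # The sense in which a microcontroller "knows" about a gas leak:
    # read a number, compare it to a threshold. Sensor read is a stub here.
    import random

    GAS_THRESHOLD = 400  # arbitrary illustrative ADC value

    def read_gas_sensor() -> int:
        """Stand-in for an analogRead() of an MQ-2-style gas sensor."""
        return random.randint(0, 1023)

    def loop() -> None:
        reading = read_gas_sensor()
        if reading > GAS_THRESHOLD:
            print("gas leak detected")  # the entirety of the "awareness"

    for _ in range(5):
        loop()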


It's not self-aware, regardless of what it tells you (see the original link).


I'm not sure what you're referring to in the original link, can you please paste an excerpt?

But thinking about it, how about this: what if you have a fully embodied LLM-based robot, using something like Figure's Helix architecture [0] with a Vision-Language-Action model, and have it look in a mirror and see itself? Is that on its own not sufficient for self-awareness?

[0] https://www.figure.ai/news/helix
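
For what it's worth, the robotics version of the mirror test is usually operationalized as contingency detection: issue motor commands and check whether the figure in the mirror moves in lockstep with them. A toy sketch of that idea (every function and number here is hypothetical, and none of it is specific to Helix):

    # Sketch of one operational notion of self-recognition: move, then check
    # whether the motion seen in the mirror is contingent on your own commands.
    import random

    def observed_motion_in_mirror(command: float) -> float:
        """Stand-in for the vision model's estimate of how much the figure
        in the mirror moved this step (here: perfectly contingent plus noise)."""
        return command + random.gauss(0.0, 0.05)

    commands = [random.uniform(-1, 1) for _ in range(50)]  # random "wiggle" commands
    observed = [observed_motion_in_mirror(c) for c in commands]

    # Pearson correlation between commanded and observed motion.
    n = len(commands)
    mean_c = sum(commands) / n
    mean_o = sum(observed) / n
    cov = sum((c - mean_c) * (o - mean_o) for c, o in zip(commands, observed))
    var_c = sum((c - mean_c) ** 2 for c in commands)
    var_o = sum((o - mean_o) ** 2 for o in observed)
    corr = cov / (var_c * var_o) ** 0.5

    print("that's me" if corr > 0.9 else "that's someone else")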


I'm not claiming it is.



