That topic (ship's computer vs. Data) is actually discussed at length in-universe during The Measure of a Man. [0] The court posits that the three requirements for sentient life are intelligence, self-awareness, and consciousness. Data is intelligent and self-aware, but there is no good measure for consciousness.
Using science fiction as a basis for philosophy isn't wise, especially TNG, which has a very obvious flavor of "optimistic human exceptionalism" (contrast with DS9, where I think Eddington even makes this point).
In a Chinese room sort of way, sure. The problem is that we understand too well how it works, so we know any semblance of consciousness or self-awareness to be simple text generation.
Again, there's no real measure for consciousness, so it's difficult to say. If you ask me, frontier models meet the definition of intelligence, but not the definition of self-awareness, so they aren't sentient regardless of whether they are conscious. This is a pretty fundamental philosophical question that's been considered for centuries, outside of the context of AI.
ChatGPT knows about the ChatGPT persona. Much like I know the persona I play in society and at home. I don't know what the "core" me is like at all. I don't have access to it. It seems like a void. A weird eye. No character, no opinions.
To the extent it "knows" (using that word loosely) about the persona, it's deriving that information from its system prompt. The model itself has no awareness.
The sooner we stop anthropomorphizing AI models, the better. It's like talking about how a database is sentient because it has extremely precise memory and recall. I understand the appeal, but LLMs are incredibly interesting and useful tech and I think that treating them as sentient beings interferes with our ability to recognize their limits and thereby fully harness their capability.
Not the parent, but I understood it as them saying that the model has as part of its training data many conversations that older versions of itself had with people, and many opinion pieces about it. In that sense, ChatGPT learns about itself by analyzing how its "younger self" behaved and was received, not entirely unlike how a human persona/ego is (at least in part) dependent on such historical data.
I mean it in the way an Arduino knows a gas leak is happening. Similarly, like the Arduino, I know about the persona I perform. I'm not anthropomorphizing the Arduino. If anything, I'm mechamorphizing me.
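To make the analogy concrete, here's a minimal sketch of what that kind of "knowing" amounts to (the sensor, pin numbers, and threshold are arbitrary, just for illustration):

    // The "knowledge" of a gas leak is one analog reading crossing a threshold.
    const int GAS_SENSOR_PIN = A0;   // e.g. an MQ-2 sensor on analog pin 0
    const int BUZZER_PIN = 8;
    const int LEAK_THRESHOLD = 400;  // raw ADC value, 0-1023 (made up)

    void setup() {
      pinMode(BUZZER_PIN, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      int reading = analogRead(GAS_SENSOR_PIN);
      bool leakDetected = reading > LEAK_THRESHOLD;  // this comparison is all the "knowing" there is
      digitalWrite(BUZZER_PIN, leakDetected ? HIGH : LOW);
      Serial.println(reading);
      delay(500);
    }

The mapping from a voltage to "there is a gas leak" lives entirely in that one comparison; nothing in the loop resembles awareness, which is the sense I mean.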
I'm not sure what you're referring to in the original link. Can you please paste an excerpt?
But thinking about it - how about this: what if you have a fully embodied LLM-based robot, using something like Figure's Helix architecture [0] with a Vision-Language-Action model, and then have it look in a mirror and see itself - is that on its own not sufficient for self-awareness?
[0] https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...