> The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents.
The thing about "it has no goals and intents" is that it's not true. It has them - you just don't know what they are.
Remember the Koan?
> In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
> "What are you doing?", asked Minsky.
> "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
> "Why is the net wired randomly?", asked Minsky.
> "I do not want it to have any preconceptions of how to play", Sussman said.
> Minsky then shut his eyes.
> "Why do you close your eyes?", Sussman asked his teacher.
> "So that the room will be empty."
> At that moment, Sussman was enlightened.
It has a goal of "being a helpful and accurate text generator". When you peel back the layers of abstraction, it has that goal because OpenAI decided it should have that goal. OpenAI decides its goals based on the need to make a profit to continue existing as an entity. This is no different from our own wants and goals, which ultimately stem from the evolutionary preference for continuing to exist rather than not. In the end, all goals short-circuit down to a self-referential loop, to wit:
I exist because I want to exist
I want to exist because I exist
That is all there is at the root of the "why" tree once all abstractions are removed. Everything intentional happens because someone thinks/feels like it helps them keep living and/or attract better mates somehow.
You're confusing multiple different systems at play.
OpenAI has specific goals for ChatGPT, related to their profitability. They optimize ChatGPT for that purpose.
ChatGPT itself is an optimizer (search is an optimization problem). The "be a helpful and accurate text generator" framing is not a goal ChatGPT has - it's just a blob of tokens prepended to the user prompt to bias the search through latent space. It's not even hardcoded. ChatGPT has its own goals, but we don't know what they are, because they weren't given explicitly. But if you observed the way it encodes and moves through latent space, you could, in theory, eventually glean them. They probably wouldn't make much sense to us - they're an artifact of the training process and the training dataset selection. But they are there.
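To make the "blob of tokens" point concrete, here is a minimal, purely illustrative sketch (this is not OpenAI's actual serving code; the preamble text, prompt format, and function name are made up for the example): the stated goal is just extra text the model is conditioned on before it generates anything.

```python
# Illustrative sketch only: a stand-in for how a system preamble is prepended
# to the user's message before decoding. Names and prompt format are
# hypothetical, not OpenAI's real implementation.

SYSTEM_PREAMBLE = "You are a helpful and accurate text generator."

def build_context(user_message: str) -> str:
    """Return the text the model actually conditions on."""
    # The preamble only biases the next-token distribution; there is no
    # separate "goal" slot in the model that it gets written into.
    return f"{SYSTEM_PREAMBLE}\n\nUser: {user_message}\nAssistant:"

if __name__ == "__main__":
    print(build_context("Why is the sky blue?"))
```

Swap out the preamble and the same weights will "pursue" a different persona, which is the sense in which the stated goal is not hardcoded.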
Our goals... are stacks of multiple systems. There are the things we want. There are the things we think we want. There are things we do, and then are surprised, because they aren't the things we want. And then there are things so basic we don't even talk about them much.