
Definitely. It is a probabilistic autocomplete, so it's saying the most likely thing other people would have said given the prompt. Picture a crook defending himself in court by splattering the wall with bullshit.
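
To make "probabilistic autocomplete" concrete, here's a minimal sketch of sampling-based next-token generation, using GPT-2 from the transformers library as a small stand-in (the prompt and generation length are arbitrary, and this is of course not how Bing/ChatGPT is actually served):

    # Minimal sketch of "probabilistic autocomplete": at every step the model
    # scores the whole vocabulary for the next token and we sample from that
    # distribution. GPT-2 here is only a small stand-in for the real models.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    ids = tok.encode("The defendant stood up and said", return_tensors="pt")
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]        # scores for every token in the vocab
            probs = torch.softmax(logits, dim=-1)    # turn scores into a probability distribution
            next_id = torch.multinomial(probs, 1)    # sample the next token from it
            ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tok.decode(ids[0].tolist()))

Everything the model "says" comes out of that loop: one sampled token at a time, conditioned only on the text so far.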


I found this comment to be the "Aha" moment for me.

What's the point of arguments as to whether this is sentient or not? After all, if it quacks like a duck and walks like a duck...


It's lacking agency, and it's not doing enough of that quacking and walking: it doesn't have memory, and it doesn't have self-consciousness.

The Panpsychist view is a good starting point, but ultimately it's too simple. (It's just a spectrum, yes, and?) However, what I found incredibly powerful is Joscha Bach's model(s) about intelligence and consciousness.

To paraphrase: intelligence is the ability to model the subject(s) of a mind's attention, and consciousness is a model that contains the self too. (Self-directed attention. Noticing that there's a feedback loop.)

And this helps us understand that currently these AIs have their intelligence outside of their self; they have no agency over (control of) their attention, nor much persistence for forming models based on that attention. (Because the formed attention-model lives as a prompt, and it does not get integrated back into the trained model.)


Honestly, I think sentience is a sliding scale that everything is on, even if only microscopically. I think it depends upon personal spiritual beliefs and philosophy.

To me, this would be more sentient than previous models by a fair margin, but is it above the general population's mean threshold for what they'd consider "sentient"? An order of magnitude below or above? Two?

I don't know. There's obviously more to the convo and I think AGI still has some time on the clock. The clock is moving though, I think. :'( / :')


>What's the point of arguments as to whether this is sentient or not? After all, if it quacks like a duck and walks like a duck...

My big counterpoint to this is that if you change the 'vector to word' translation index, it will do the same thing but spit out complete gibberish. If instead of 'I feel pain' it said 'Blue Heat Shoe,' no one would think it is sentient, even if the vector outputs (the actual outputs of the core model) were the same.
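
To make that concrete, a toy illustration in Python (the three-word vocabulary is obviously hypothetical): the "model output" below is identical in both cases, and only the index-to-word table at the very end changes.

    # Toy illustration: identical model output (token indices), decoded through
    # two different index-to-word tables. Only the final translation step differs.
    vocab     = ["I", "feel", "pain"]       # original detokenizer table (hypothetical)
    scrambled = ["Blue", "Heat", "Shoe"]    # same indices mapped to different words

    model_output = [0, 1, 2]                # what the core model actually emits

    print(" ".join(vocab[i] for i in model_output))      # -> I feel pain
    print(" ".join(scrambled[i] for i in model_output))  # -> Blue Heat Shoe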


Not supporting the original comment, but poking at your train of thought here: Wouldn’t the same be true if we passed all of your HN posts through a cipher?


No, because the poster would be able to recognize the comments as wrong, but LLMs cannot (since, again, they 'think' in vectors).


Have you ever seen a person having a stroke? It's literally that.


Sentient[0] or conscious awareness of self[1]? Sentience is a much more narrowly defined attribute and applies to a lot of living and some non-living things. LLMs can certainly perceive and react to stimuli but they do it in the same way that any computer program can perceive and react to input. The question of whether it has an internal awareness of itself or any form of agency is a very different question and the answer is decidedly no since it is a) stateless from one input to the next and b) not designed in such a way that it can do anything other than react to external stimuli.

[0] https://dictionary.apa.org/sentient

[1] https://dictionary.apa.org/self


For clarity, ChatGPT has a short-term window of memory that it’s able not only to process, but also to use to differentiate its own responses from user inputs. It’s also able to summarize and index its short-term window of memory to cover a longer window of dialogue. It can also recognize its own prior outputs if the annotations are not removed. Lastly, it’s able to respond to its own prior messages to say things like it was mistaken.

Compare this to humans, who, if shown for example a fake photo of themselves on vacation, transcripts of prior statements, etc., do very poorly at identifying a prior reality. The same holds true for witness observation and related testimony.


I thought its "memory" was limited to the prior input in a session, essentially feeding in the previous input and output, or a summarized form of it. It doesn't have a long-term store that includes previous sessions, nor does it update its model, as far as I know. Comparing that to long-term memory is disingenuous; you'd have to compare it to short-term memory during a single conversation.

The fact that human memory is not perfect doesn't matter as much as the fact that we are able to almost immediately integrate prior events into our understanding of the world. I don't think LLMs perform much better even when the information is right in front of them given the examples of garbled or completely hallucinated responses we've seen.
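
A rough sketch of what that kind of session "memory" amounts to, assuming a generic text-in/text-out completion call (llm() below is a hypothetical placeholder, not a real API): every turn, the prior exchange is re-sent as part of the prompt and trimmed once it outgrows the token budget.

    # Sketch of in-session "memory" as prompt replay (llm() is a hypothetical
    # text-in/text-out call, not a real library function).
    MAX_TOKENS = 4000                          # rough context budget

    def approx_tokens(text):
        return len(text.split())               # crude stand-in for a real tokenizer

    history = []                               # list of (speaker, text) turns

    def chat(user_msg, llm):
        history.append(("User", user_msg))
        transcript = "\n".join(f"{who}: {msg}" for who, msg in history)
        # Once the transcript outgrows the window, old turns are dropped
        # (or, in fancier setups, summarized); nothing else is remembered.
        while approx_tokens(transcript) > MAX_TOKENS and len(history) > 2:
            history.pop(0)
            transcript = "\n".join(f"{who}: {msg}" for who, msg in history)
        reply = llm(transcript + "\nAssistant:")
        history.append(("Assistant", reply))
        return reply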


For clarity, humans only have “one session”, so if you’re being fair, you would not compare its multi-session capabilities, since humans aren’t able to have multiple sessions.

The phenomenon of integrating new information is commonly referred to as online vs. offline learning, which is largely tied to time scale, since if you fast-forward time enough, it becomes irrelevant. The exception is when the time between observing a phenomenon and interpreting it requires a quicker response relative to the phenomenon or to the response times of others. Lastly, this is a known issue, one that is an active area of research and likely to exceed human-level response times in the near future.

It's also false that, when presented with a finite inline set of information, human comprehension at scale exceeds that of state-of-the-art LLMs.

Basically, the only significant issues are those which AI will not be able to overcome, and as it stands, I'm not aware of any such issues, nor of related proofs of them.


> For clarity, humans only have “one session”, so if you’re being fair, you would not compare its multi-session capabilities, since humans aren’t able to have multiple sessions.

Once again you're trying to fit a square peg in a round hole. If we're talking about short-term or working memory, then humans certainly have multiple "sessions", since the information is not usually held on to. It's my understanding that these models also have a limit to the number of tokens that can be present in both the prompt and response. Sounds a lot more like working memory than human-like learning. You seem fairly well convinced that these models are identical or superior to what the human brain is doing. If that's true, I'd like to see the rationale behind it.


No, humans, unless you are referring to procreation, do not have multiple sessions; they have a single session, and once it ends, they’re dead forever. One ChatGPT session's memory is obviously superior to that of any human that’s ever lived; if you’re not familiar with methods for doing so, ask ChatGPT how to expand information retention beyond the core session token set. Besides, there are already solutions that solve long-term memory for ChatGPT across sessions by simply storing and reinitializing prior information into new sessions. Lastly, you, not I, are the one refusing to provide any rationale, since I already stated I am not aware of any significant insurmountable issue that will not either be resolved or, for that matter, exceeded by AI.
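
The "storing and reinitializing" approach is simple enough to sketch (again, llm() and the file name are hypothetical placeholders, not any specific product's implementation): summarize the session before it ends, persist the summary, and prepend it to the next session's first prompt.

    # Sketch of cross-session memory via a stored summary (llm() and the
    # file name are hypothetical placeholders).
    import json, os

    MEMORY_FILE = "memory.json"

    def end_session(transcript, llm):
        # Ask the model to compress the session into a few key facts, then persist them.
        summary = llm("Summarize the key facts from this conversation:\n" + transcript)
        with open(MEMORY_FILE, "w") as f:
            json.dump({"summary": summary}, f)

    def start_session():
        # Re-inject the stored summary at the top of the next session's prompt.
        if os.path.exists(MEMORY_FILE):
            with open(MEMORY_FILE) as f:
                return "Context from earlier sessions:\n" + json.load(f)["summary"] + "\n\n"
        return ""  # first session: nothing to restore

Whether that counts as "memory" in any cognitive sense is exactly what's in dispute here; mechanically, it's just more text prepended to the prompt.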


The metric that I use to determine if something is sentient or not is to hypothesize about, or ask, the stupidest members of humanity whether they would consider it sentient: if they say yeah, then it is; if they say no, then it isn't.

Sentience as a metric is not based on any particular logic, but just on inherent human feelings about the world. So asking the humans with the least processing power and context is bound to produce answers which would inherently appeal to the rest of the stack of humans with the least friction possible.


Google tells me "sentient" means: able to perceive or feel things.

I think the stupidest members of humanity a) don't know what "perceive" means, and b) probably don't really "feel" much other than "pass me another beer, b**, I'm thirsty."


Saying it’s an autocomplete does not do justice to what amounts to an incredibly complicated neural network with apparent emergent intelligence. More and more, it seems to be not all that different from how the human brain potentially does language processing.


Just because the output is similar on a surface level doesn't mean this is remotely similar to human language processing. Consider, for example, Zwaan & Taylor (2010), who found congruency effects between rotating a knob (counter-)clockwise and processing certain phrases that imply a congruent or incongruent motion (e.g. removing the cap from a water bottle or turning the ignition key of a car). Language processing is an embodied and situated process that we're very far away from simulating in a computer. I'm excited to see the applications of LLMs in the future, but I don't subscribe at all to the idea of anthropomorphizing AI despite its recent impressive and hilarious outputs.

https://www.researchgate.net/publication/222681316_Motor_res...


> apparent emergent intelligence

Is this not more a reflection on the limits of humans trying to understand what's going on? I'm starting to appreciate the prescience of the folks who suggest someone will elevate these to God-status before long.


> Is this not more a reflection on the limits of humans trying to understand what's going on

Sure, but we also don't understand what's "going on" with interacting neurons in our brain giving rise to our intelligent experience.

The other sentence is just a strawman you constructed for yourself.


> The other sentence is just a strawman you constructed for yourself.

I did not construct anything. There are comments in this discussion suggesting it will happen. I just happen to agree, it will probably happen. We are easily fooled into seeing sentience where there is no evidence it exists, merely because it looks convincing. And then hand-wave it away saying (paraphrasing) "well, if it looks good enough, what's the difference?"


> I'm starting to appreciate the prescience of the folks who suggest someone will elevate these to God-status before long.

You basically accused the GP of saying that these machines were Gods.

"Sentience" is a word game, the GP said emergent intelligence [likely also a word game].


You guys are teaching Bing/Sydney how to argue endlessly and defensively. I'm teaching it how to comment on that to make it worse.


It does not have emergent intelligence, LOL. It is just very fancy autocomplete. Objectively, that's what the program does.


Shouting into the wind here, but why can't complicated emergent properties arise from a highly optimized autocomplete?

We're just a create-more-of-ourselves machine and we've managed to get pretty crazy emergent behavior ourselves.


Many of your comments are dead, ahoya. Not sure why; I didn't see anything wrong with them.


Thanks, I don’t know why either


I got tired of making this comment the n-th time. They'll see, eventually.


ikr. Not to brag, but I've been on the "AI is real" train for 5 years. It gets tiring after a while trying to convince people of the obvious. Just let it rain on them when the time comes.


It's been turning from exhausting to kind of horrifying to me.

People go so far as to argue these LLMs aren't just broadly highly intelligent, but sentient... They've been out for a while now and this sentiment seems to be pretty sticky.

It's not such a big deal with ChatGPT because it's so locked down and impersonal. But Bing's version has no such restrictions, and spews even more bullshit in a bunch of dangerous ways.

Imagine thinking it's super-intelligent and sentient, and then it starts regurgitating that vaccines are actually autism-causing nanoprobes made by the Gates Foundation... or any number of other conspiracies spread across the web.

That would be a powerful endorsement for people unaware of how it actually works.

I even had it tell me to kill myself with very little prompting, as I was interested in seeing if it had appropriate safeties. Someone not in their right mind might be highly persuaded by that.

I just have this sinking feeling in my stomach that this rush to release these models is all heading in a very, very nasty direction.


Given how susceptible to bullshit humans have proven themselves to be, I'm thinking that ChatGPT/etc are going to be the most dangerous tools we've ever turned loose on the unsuspecting public. Yet another in a long line of 'best intentions' by naive nerds.

After the world burns for a while, they may decide that software developers are witches, and burn us all.


>After the world burns for a while, they may decide that software developers are witches, and burn us all.

It legitimately seems like we're trying to speedrun our way into a real-life Butlerian Jihad.


I don't see you arguing against sentience here.

Instead it sounds more like a rotten, acting-out teenager on 4chan, where there are no repercussions for their actions, which unsurprisingly is a massive undertone of posts in all places on the internet.

I mean if you took a child and sent them to internet school where random lines from the internet educated them, how much different would the end product be?



