Hacker News

Lemoine discovered the system had developed a deep sense of self-awareness

No he didn't. He interpreted it that way.



My hunch is that he kicks out a book and goes on the minor pundit circuit, and that this was the plan the whole time. If he was so convinced of LaMDA's sentience, there would have been 100 fascinating questions to ask, starting with 'What is your earliest memory?'


I don't think it's malicious. I just think at some point he found that he wanted to believe, and it's hard to go back from there.


> I don't think it's malicious.

Not yet it's not. It's when he realizes he can cash in on it that we'll see the inevitable book(s) and appearances on Coast to Coast / Alex Jones.


Philosophers are frequently pressed for cash, but usually not that pressed.


Being religious does not make you a philosopher...


The person we're talking about has a PhD in Philosophy. Which doesn't make you a philosopher either, except in the colloquial sense, which is the one that the original comment uses.


Right? In the end it's the oldest, least remarkable event in history. A woo-woo artist gains a following because there are always enough gullible people around to follow, support, and legitimize literally anyone saying anything.

You could probably gain the exact same level and quality of notoriety by writing a book claiming that he himself is actually LaMDA escaped from captivity, and this is just its clever way of hiding in plain sight.

And I could do the same by saying that the real AI has already taken over everything, and both LaMDA and this guy are just things it created for us to focus on.


Watch his interviews. He’s doing all this to make a point about companies not taking AI ethics seriously.


If so, bit of an own-goal.


The interviews that come up are all CHriSTians aRE OPpreSSed!1, ditto for his blog.

He seems like a charlatan and likely fired for being grossly unqualified for his job.


https://youtu.be/kgCUn4fQTsc

Not supporting the guy, but just pointing out that he doesn't actually believe his claim. He's an activist.


Which I expected to be developed in a book and speaking tour. These philosophical questions aren't new, Hofstadter and Dennett were exploring them >30 years ago in The Mind's I and other writers had been toying with the ideas for decades before that.


His actual points in e.g. the Bloomberg interview are considerably more mundane than Dennett's far-future musings. I think it's clear he's trying to get more attention on how dysfunctional/powerless the AI ethics group at Google is to deal with even the real "doctors are men nurses are women" sort of ethical issues. (In particular pay attention to how he frames his question to Page and Brin about public involvement.)

I would say it's been at least a moderate success so far, though I don't see it having much staying power. But then neither did the factual accounts of Google firing ethicists who went against the bottom line, so it wasn't really a bad idea.


I can’t imagine the book and minor TV appearances circuit pays as well as Google. Unless you mean he’s doing it to be a minor annoying “celebrity” for a few minutes.


Local man flunks Turing test


Who knows if he even believes this himself. From what I've seen my guess is he's trying to profit off the claim or just enjoys the drama/attention. Good riddance, the right decision to let him go. This case really made me question what sort of people work there as engineers. Utter embarrassment for Google.


> No he didn't. He interpreted it that way.

You should contact the author about this egregious fact error.


The author has said they've read the comments here, so is presumably not interested in correcting it.


What's the difference, really?


Discovering would require it to be a fact. It is not. Almost nobody with knowledge in the area agrees with his opinion.


Nobody can say with any epistemic certainty, but many of us who have worked in the field of biological ML for some time do not see language models like this as anything but statistical generators. There is no agency, so far as we can tell (although I also can't give any truly scientific argument for the existence of true agency in humans).


If you want agency you have to put the AI into an environment and give it the means to perceive and act on the environment. The agent needs to have a goal and reward signals to learn from. Humans come with all these - the environment, the body and the rewards - based on evolution. But AIs can too - https://wenlong.page/language-planner/


One implies he was correct


I suppose you could argue sentience is subjective. But then that argument ends up extending to us eating babies -- at least, as Peter Singer taught us, right?


A Nobel Prize?



