In retrospect, taking part in this kind of conversation on HN makes me feel like an idiot, so I'm retracting my earlier comment (by overwriting it with this one, since I can no longer delete it) because I don't want to contribute. I was wrong to attempt a serious contribution. There is no seriousness in HN conversations on such matters as "sentience", "intelligence", "understanding", etc.
Every time such a subject comes up, and most times "AI" comes up as well, a majority of users see it as an invitation to say whatever comes to mind, whether it makes any sense or not. I'm not talking about the replies below in particular, but about the majority of this conversation. It's like hearing five-year-old kids debating whether Cheerios are better than Coco Pops (but without the cute kids making it sound funny; it's just cringey). The conversation makes no sense at all, it is not based on any concrete knowledge of the technologies under discussion, the opinions haven't been given five seconds of sensible thought, and the tone is pompous and self-important.
It's the worst kind of HN discussion and I'm really sorry to have commented at all.
I don't know what you wrote earlier, and don't know if I would agree, but I share the current sentiment of your comment. I come to this topic with strong influences from eastern philosophical takes on consciousness, but also with a decent understanding of the current materialist consensus (which I disagree with for various reasons, that would go beyond the scope of a comment). I, too, bite my tongue (clasp my hands?) when I see HN debating this, because here Star Trek references are as valid as Zen Buddhism, Christof Koch, or David Chalmers.
As a counter-argument: if an LLM is sentient, or any other model will be, that model will have been created by some superior being, right? So why shouldn't humans be? After all, we can't fully understand DNA or how our brains work, even on a planet of seven billion people with an army of scientists. How can we fail to understand something that supposedly came from "just random stuff" over millions of years with zero intelligence, meaning rolling dice? That also completely breaks the law of entropy. It turns everything upside down.
Not really. Why would we be able to understand it? It seems implicit in your argument that "rolling dice" (or any series of random events) can't produce the complexity of DNA or the human brain. I disagree with your stance, and will remind you that the landscape for randomness to occur is the entire universe, and that life on Earth took 4-5 billion years to happen, with modern humans only appearing within the last couple hundred thousand years.
Yes, but what about the second law of thermodynamics? I mean the law of entropy. Now, that's not something from the Bible; it's a law accepted by all scientific communities out there, and still it breaks with us being here. In fact, our being here, like you said, billions of years after the Big Bang turns everything upside down, since from that point on only less order and more chaos should emerge, even with billions of years of rolling dice.
Also, I don't think you can create something sentient without understanding it. (And I don't even think we can create something sentient at all.) It would be like building a motor engine without knowing anything about what you're doing, and then going, "Oh wow, I didn't know what I was doing, but here it is, a motor engine." Imagine that, but with sentience. It's too much fantasy for me, honestly: a Hollywood "lightning strikes and somehow life appears" kind of thing.
> Yes but what about the second law of thermodynamics. I mean the law of entropy. […] still it breaks with us being here.
Of course! I mean, entropy can decrease locally – say, over the entire planet – but that would require some kind of… like, unimaginably large, distant fusion reactor blasting Earth with energy for billions of years.
Which means that it can actually decrease without an energy input. There's just a very low probability of it happening, but it CAN happen.
It's a misnomer to call those things laws of thermodynamics. They are not axiomatic. There's a deeper intuition going on here: increasing entropy is just a logical consequence of probability. High-entropy macrostates correspond to vastly more microstates, so they're overwhelmingly more likely, but never the only possibility.
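That statistical point can be illustrated with a toy model (my own illustration, not from the thread): N gas particles in a box, each independently on the left or right half. "All particles on the left" is the low-entropy state; it isn't forbidden, just exponentially unlikely as N grows, which is why macroscopic entropy decreases are never observed in practice.

```python
import random

# Toy model of entropy as probability: each of n_particles sits in
# the left or right half of a box with probability 1/2. The ordered
# "all on the left" state has probability (1/2)^n, so it becomes
# astronomically rare as n grows, yet it is never strictly impossible.

def fraction_all_left(n_particles, trials, rng):
    """Estimate, by simulation, how often all particles land on the left."""
    hits = 0
    for _ in range(trials):
        if all(rng.random() < 0.5 for _ in range(n_particles)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (2, 5, 10):
        observed = fraction_all_left(n, 100_000, rng)
        expected = 0.5 ** n
        print(f"{n:2d} particles: observed {observed:.5f}, expected {expected:.5f}")
```

For just 100 particles the probability is already about 8e-31; for a mole of gas it is unimaginably small, which is all the "law" of increasing entropy really asserts.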
> More remarkably, GPT-3 is showing hints of general intelligence.
Hints, maybe, in the same way that a bush wiggling in the wind hints that a person is hiding inside.
Ask GPT-3 or this AI to remind you to wash your car tomorrow after breakfast, or ask it to write a mathematical proof, or tell it to write you some fiction featuring moral ambiguity, or ask it to draw ASCII art for you. Try to teach it something. It's not intelligent.