
I found a couple errors early on that dissuaded me from continuing to read.

One was the idea that the author could rule out the singularity because he wasn't aware of progress toward self-motivated AI that would be something like our own intelligence. This seems like a limited view to me because we wouldn't need self-motivation at all in AI to hit the singularity. Humans can supply the self-motivation.

Suppose I'm using GPT-4 or GPT-44 trained on the corpus from sci-hub and it recommends experiments to me, or explains physics to me, etc. I could be the self-motivating part and the AI could be the intelligence part, and it seems we'd still hit the technological singularity.

Another problem I had was when the author characterized Elon Musk's "obsession" with the paperclip maximizer and described Tesla as a battery maximizer. It seems like the author kind of misses the point of the paperclip thought experiment, which is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.

Tesla is clearly not a battery maximizer and it is clearly not a different class of intelligence from the humans and corporations existing today (though it may be towards the top of that class). Neither of those things would necessarily be true of an AI.

Given the position of my scrollbar it seems I was only starting to read this piece, but since I was already finding what I think are significant problems as the author sets up the argument, I'm hesitant to spend more time reading.




> It seems like the author kind of misses the point of the paperclip thought experiment, which is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.

Perhaps you should have finished reading. What you suggest the author missed is essentially the core thesis of the piece.


I found an error in the first sentence of your post that dissuaded me from giving the rest of it much credence: saying you didn't read it, but deciding to comment anyway.


Even taking your comment at face value, that's not what the first line of that comment says. It says "continuing to read", indicating a decent chunk of it was read.

The remainder of the comment indicates at least 1/3.


It really is a good essay. It's a shame it so easily alienates transhumanist Musk fanbois.


What error?


That one went right over your head.


No, I don't think so. I think my response went right over yours.

The comment is trying to mirror my criticism - "I read the first part, found an error, and stopped". My response is trying to highlight that, in fact, their response does not mirror mine, because I pointed out actual errors that motivated me to stop reading, whereas that comment did not (apparently).

In other words, if I had actually made substantive errors in my first sentence or so, it might make sense to stop reading. I'd have already demonstrated that my thinking wasn't very clear. If that were the case, though, then it would be an invalid criticism of my reasoning (read a bit, saw an error, stopped) because that comment author would be following the same paradigm. On the other hand, if I didn't actually make any substantive errors in the first sentence or so of my post, then the criticism is still invalid, because, while I actually pointed out substantive errors in the OP, this comment doesn't point out substantive errors in my comment.


It was a joke mostly - but specifically, I find it funny that you read a portion and, instead of just moving on, felt the need to poke at the article without at least finishing it. That is the "error" - who knows, maybe your criticisms were addressed later on? We'll never know :P

(like I said, it was mostly in jest, so don't take it too seriously, please)


> Tesla [...] is clearly not a different class of intelligence from [...] corporations

I believe he just used Tesla as an example of a corporation, not as something new or special.

> the point of the paperclip thought experiment is, broadly, that an AI's interests might not be aligned with our own and that misalignment may cause serious problems.

But (some) corporations clearly do share this aspect to some extent. They maximize profit while deprioritizing environmental concerns.

And he pointed out later that corporations are slightly more complex than maximizers of just one thing: they maximize a range of things, most importantly profit.


Most, if not all, corporations are misaligned with humanity's well-being. That is, they want money, and that's not exactly the same thing as what would be best for humanity.

This is okay though because corporations are the same class and kind of intelligence as the rest of humanity and so they can be (somewhat) predicted and constrained by our laws and norms.

Because corporations are the same class of intelligence, they can't outsmart everyone. We can catch them when they do wrong and punish them and use laws and courts to control them.

Because corporations are the same kind of intelligence, they (usually) aren't going to do things we might consider insane or sociopathic. For example, if corporation X realized they might make a profit by taking out life insurance on workers and then getting them to do very risky things, X Corp probably wouldn't do it, and if they did, people would leak it and laws would be made to stop it.

The paperclip thought experiment is an illustration of a different kind of intelligence that wouldn't be constrained by human morality or norms and would surpass human intelligence and not be constrained by laws or force either.


I think one of the big problems with trying to predict and constrain corporations is that they all basically have OCD. They are obsessed with whatever it is they do and they task all their agents with figuring out how to optimize and achieve that one thing (or multiple related things if they're organized into divisions).

While the corporation might not possess superintelligence, trying to thwart it when its actions are harmful to society is like having a crazy neighbor. You just want to relax when you're home from work; you can't dedicate all your free time and attention to constantly monitoring and pushing back against someone who is on a crusade to achieve a petty, limited goal and seemingly has no other interests to occupy their time.


I think the extent to which corporations are or could be controlled is debatable. What I don't think is debatable, though, is that an AI with superhuman intelligence would not necessarily be as controllable as corporations.


Another point: while not all corporations are profit maximizers in an obsessive, paperclip-maximizer way, the corporations that grow largest and consume others tend to be those that obsess most about profit and growth, simply because that obsession is what tends to make them big.


> Given the position of my scrollbar it seems I was only starting to read this piece

Tesla is just past 1/3 down; the page has almost 800 comments with no pagination.


@ALittleLight: “.. One was the idea that the author could rule out the singularity ..”

Your consciousness uploaded and then downloaded into a synthetic human-like body isn't ever going to be you. As the man said, "transhumanism is a warmed-over Christian heresy".


I find the notion of a coherent self / consciousness to be hogwash anyway. The "self-aware", "conscious" brain is a holistic entity, with what you ate yesterday causing moods that alter your train of thought, etc., with multiple competing interests vying to control the larger organism to their own ends.

The Ship of Theseus is the warmed-over Christian heresy. A brain is an organism with a symbiotic tight coupling to a meat bag, and the brain itself a similar cooperation by accident.

An organism can develop new, tight "synthetic" couplings; it isn't sci-fi. It's partially realized today. It's just that we are so normalized to the interfaces of smartphones, keyboards, monitors, steering wheels, and so on that we didn't notice its arrival.

More exotic, more tightly coupled interfaces also exist today (there exist direct-to-brain interfaces that provide a new sensory feedback loop that your brain can adapt to); they're just not as competitive currently with thousands of years of neural optimization of hands/eyes/nose/touch.

Why would you want to do this? Who knows. Why do we want to do anything? Maybe a person wants to be part machine as a lizard-brain driven reaction to a low-oxygen office environment that sees no viable way out.


Why would it not be you?

If I replace a single one of your neurons with a mechanical replacement, are you still you? What if I replace half? All?

If you walk across a room, are you still you? What if I break you down, and reassemble you at the other side of the room, using the same atoms? What if I only use half the original atoms? What if I freeze you, move you across the room, and then thaw you?


It's the question of what consciousness is.

As far as I (crudely) understand it, we don't know what exactly the cause is, other than it's tied extremely tightly to the brain and seems to be an accident of evolution/an illusion.

My guess is that as long as the neurons keep firing continuously and the illusion is unbroken, the "you" that is you right now will remain you. So you may well be able to go full ship-of-theseus and piece-by-piece completely replace the physical layer without breaking the consciousness, but if you stop it completely with an upload/download* , continuity is broken and the new "you" will be an exact duplicate while the "you" right now will cease to be. Because it's an exact duplicate there would be no way to confirm this, though - the new "you" would think it had continuity due to how complete their memories are.

* I'm thinking of switching to a robotic brain all at once here, instead of piece-by-piece. I'm not going to touch a transporter as suggested in your comment, as that probably depends on technical details of the transporter for what exactly is going on.


What about piece-by-piece? What if we 'upload' you piece-by-piece, by replacing neurons in your head with remote connections to virtual neurons on a computer one-by-one?

I think that conventional notions of identity are probably incoherent, and mean you die from second to second anyway.


We're soaking in a stew of hormones and chemicals that change the functioning of the machine.

I'm not likely to have the same reactions after lunch that I had before lunch, or consistency from yesterday to today.


If we're firmly in the realm of sci-fi (which we are), I find the idea of a neuron-by-neuron replacement of the human brain by nano-sized computational units that function exactly the same way as those neurons to be a more interesting proposition.

Can it be done in a way that doesn't imply the death of the subject?


> .. Can it be done in a way that doesn't imply the death of the subject?

The key word is replacement. Can this replacement vote or inherit property? What happens if there is a clerical error and they make two replacements? Which one is the real you?


Oh absolutely, I think that is the key thing. Personally I'd never get into a transporter such as they are portrayed in Star Trek - what comes out the other end may well be a perfect reconstruction, and think it is me, and behave to the outside world as if it is me. But would there be a continuity of conscious experience? Or am I dead and a facsimile is now in my place?

Incidentally, this is why I can't take Roko's Basilisk at all seriously - a future simulation of me is not me. Well, it's one reason anyway.


I mean, you're not alone. Nobody takes Roko's basilisk seriously.



