
It may be superfluous to point this out, but this seems to be talking about malicious uses of narrow AI, rather than malicious strong AI. The only defense against a malicious strong AI is, of course, a friendly strong AI.



At the moment "strong AI" is science fiction so what could you possibly say about it? How do you even know what the "only defense" is?


A super-virus that has a 100% mortality rate, is incredibly infectious, and stays dormant for 1 year before killing the host. That's fiction (I literally just made it up). Is it impossible to say anything about it?

A bomb that is 1,000 times more powerful than a nuclear weapon. Again, currently science fiction. Can't say anything about that either?

The fact that it doesn't exist right now doesn't mean we can't say anything about it, and the fact that it's potentially dangerous means we should be trying, at least, to think about it. Assuming it really is as dangerous as some people claim, do you really want to wait around until it's no longer science fiction, and only then, after it's too late, start thinking about it?


A "superintelligent" AI that is to a human as a human is to a fly. Is it impossible to say anything about the world after it's been built?

Yes. It's not the fact that it doesn't exist; it's that not only is it unclear whether it's even physically possible, but anything you come up with can just be handwaved away by this mystical property known as "intelligence". Your two examples don't have this problem, so they're not similar.

Most of the literature I've seen about AGI is about theories on how to build it, or theories on how to build it safely (e.g. so it has stable preferences that align with human values). Not stuff on "how to defend yourself against a rogue AGI", because there really is nothing you can say about that.


The problem may not be your point (that we can't reason about strong AI) so much as the way you justified it.

Strong AI is not hard to reason about simply because it is science fiction. As edanm pointed out, it is entirely possible to reason about science fiction. The difference is that the examples edanm gives are extensions or modifications of things that do exist, and that provides a basis from which we can reasonably extrapolate.

We can't reason about strong AI precisely because we don't have as good an existing example from which to extrapolate. The best we have is the human mind. The extrapolation from the human mind is precisely the basis for the argument that "friendly strong AI" is the best/only response to "malicious strong AI", since the thing that keeps malicious humans in check is "good" humans.

We don't even know if or to what degree 'super-intelligence' is possible. For all we know, there may be fundamental trade-offs between the different skills necessary for general intelligence that place functional limits on how superhuman intelligence can be.


> We don't even know if or to what degree 'super-intelligence' is possible. For all we know, there may be fundamental trade-offs between the different skills necessary for general intelligence that place functional limits on how superhuman intelligence can be.

While this is true, it would be extremely surprising if evolution's first stab at higher intelligence (humans) came anywhere near the theoretical limits of what intelligence could be if directly selected for.

Just remove the size and energy consumption constraints imposed on the human brain by what the human body can support, and you should be able to get a significant improvement without changing the base architecture much at all.

Digitize and allow for copy/paste of "brains", and now you've got 30,000,000 copies of Einstein working together on problems but each specializing in their own pursuits, each of which doesn't get tired and lives forever, and can open very high-throughput direct neural communication channels with the other Einsteins whenever they need to share knowledge.

Start tweaking the virtual brain's "chemical" environment, and you can probably supercharge them by doing all sorts of stuff that would kill a real human but is fine when all you care about is making a simulated brain work better (virtual megadoses of Ritalin, massive tweaks to neurotransmitter behaviors, etc.).

These are just a few obvious ideas, but it's hard to imagine that together they couldn't scale intelligence up at least past the point where we could ever hope to keep up, as long as we got it into digital form in the first place. You're correct that there will be an architectural "wall" somewhere, but chances are that human intelligence is more in the "vacuum tubes and punch cards" regime than the "Moore's law is done" one. Whatever is there when that wall is hit would be so far beyond us that it might as well be called super-intelligent, even if it can't improve further.


> We can't reason about strong AI precisely because we don't have as good of an existing example from which we can extrapolate. The best we have is the human mind.

It's actually human minds. Organizations of humans, like major corporations and governments, possess the sort of resources that we imagine a super AI would have.


>Not stuff on "how to defend yourself against a rogue AGI", because there really is nothing you can say about that.

Of course there are at least a couple of things you can say: "You should have built a friendly one first", or "convince it you're harmless". Or, more grimly, "become harmless".


> Is it impossible to say anything about the world after it's been built?

Well, actually no. It's possible to say one thing (and as far as I'm aware, only this thing): You'd better have an even smarter friendly AI on your side, or you're almost certainly boned.


> You'd better have an even smarter friendly AI on your side, or you're almost certainly boned.

Are you sure? Was the world better off with both the US and Russia having nukes? Perhaps, or perhaps we got incredibly lucky that we didn't destroy ourselves. It is quite possible that a world where we have a single malicious AI is better than a world where a malicious AI and a friendly AI are locked in a battle to the death.


You're missing the point that an active, malicious strong AI is a nuke that's already been detonated.


This made me think of a riddle I heard recently from a high school student: "What is sexually transmitted, 100% fatal, and literally everyone you know is infected? ...Life."


Except life isn't sexually transmitted. There is no "receiver" of life in the sexual process.

Life is propagated through sexual reproduction.

There is also no "infection".

The joke, as told properly, is: "What is spread through sex, 100% fatal and literally everyone you know has had it?"


A JDAM through its power supply would be an effective defense against a malicious strong AI.


Lions and Bears: Humans aren't a threat. Sharp teeth through the neck are an effective defense against a malicious human.

If the hypothetical malicious AI gets built, your JDAMs will be about as advanced by comparison as a pointy stick against an M-16.


Whatever. Let me know when someone invents an AI as smart as a lab mouse. Until then this is all just pointless idle speculation.


That's fair, but arguing something isn't possible is very different from arguing that when it happens you can solve it with a bomb. The conversation is about a hypothetical strong AI. You can say strong AI isn't possible, like you can say warp drives aren't possible, and it's still valid to discuss what it might be like if they were possible.


That's exactly like claiming it's still valid to discuss what it might be like if hypothetical aliens invaded the planet. Should we build some big lasers to protect ourselves just in case?

Entertaining perhaps, but ultimately pointless and silly.


Dropping JDAMs on cloud hosting server farms in the US or Europe is pretty fantastic.


You're missing the point by assuming we're talking about strong AI, when the paper is really addressing malevolent uses of weak AI.


Ok, that makes the post I'm replying to a non sequitur. Weak AIs in the short term are going to be hosted on AWS or Google cloud.

Is somebody going to take a JDAM to Google's data center in Atlanta or Amazon's data center in Seattle? Blow up Facebook in Palo Alto? Won't the US Air Force have a problem with that?


I'm loving this JDAM thing, tbh. I will use it going forward.


What if it was distributed like Bitcoin? If I were writing a dystopian sci-fi novel, the evil AI would evolve from a cryptocurrency network and have access to billions of dollars and autonomous corporations, plus the support of many humans who were invested in the currency. Just think of the politicians it could bribe/threaten with tens of billions in anonymous crypto.


A strong AI that limits itself to one machine isn't very strong at all.


> A JDAM through its power supply would be an effective defense against a malicious strong AI.

For a counterpoint (and not even a particularly malicious strong AI), see The Adolescence of P1.


You're confusing SciFi with reality. It's exactly like worrying about an alien invasion. Silly and pointless.


Only if you recognize its malicious intent before it convinces you to implant the fancy new chips it invented right into your brain.


Of course.




