
Reading the comments on this article makes me realize how unseriously most engineers and software developers are taking this.

These threats are real.

The weaponization of everything is actually happening. Since I wrote about it a month ago (Self-Crashing Cars), a number of people have reached out, including people with actual insight into the military aspect of it. Militaries around the world are getting ready for true AI-enabled weapon systems, and they're building deterrence strategies for mass-casualty cyber attacks (including nuclear weapons response); whether it's from hacked industrial plants or cars doesn't matter. They're actually talking about the weaponization of cars at the Munich Security Conference.

We need to stop burying our heads in the sand and write to our politicians about this threat. I know it sounds crazy, but it's real.

As an aside, my main complaint about the people who truly understand this is the inability / unwillingness to accept that the act of subverting systems capable of mass destruction via cyber attack amounts to cyber weapons of mass destruction. We need to assemble all the work / treaties / regulations / research we put into securing nukes into securing AI / robotics and for the identical reasons. We know how this ends otherwise. We need wide-spread government funding and we need to communicate what these things are in language that our governments understand. Refusing to say something that is true just because it sounds weird is counterproductive.




You can trust that people in the US Military are taking this very seriously. I know because I, alongside LtGen Shanahan and Eric Schmidt, briefed Secretary Mattis personally about these issues, and this is part of my ongoing work within the IC/DoD.

You're correct though that most line DL engineers don't have these issues at top of mind. I don't know any ML researcher worth their salt that hasn't thought about them, but the tendency is to brush the concerns off until we're closer to more generalized systems.

That is not a wholly unreasonable position for many reasons, especially given the history of hype around AI, but I'd like to see more discussion happening between junior and mid-level engineers about these things - and especially more work being done on human-machine interfaces in the AI context, into which very little design thought has been put.


Here's the crux of the issue with respect to nuclear WMDs:

In order to prevent citizens and other countries from creating nuclear bombs, our government severely limits access to relevant tools and materials, and actively seeks to censor the knowledge of how to build modern nuclear devices from other parties.

In order to prevent citizens and other countries from creating dangerous rogue AIs, our government _____________

What do you think goes in that blank?


This is the problem. WMDs take a lot of infrastructure and can be tested/inspected.

AIs can be built/modified by anyone in their basement. The genie can't be put back into the bottle. It's like trying to outlaw computer viruses and hoping that will work.

I don't have a solution :(


AI needs compute cycles. These are quite centralized outside of botnets.

I’ve pointed this out before, but the primary commercial use cases of AI right now are malicious: mass population surveillance and behavior manipulation (ads; political and otherwise). My biggest concern is the ML researchers who dismiss malicious use as a distant concern while they work on projects that are malicious right now.


Homomorphic encryption + cloud services + VPN chaining. If there's one thing that won't be in shortage for the foreseeable future, it's compute cycles.
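To make the homomorphic-encryption point concrete, here is a minimal toy sketch of an additively homomorphic scheme (simplified Paillier). The primes, message sizes, and function names are illustrative only - this is nowhere near a secure or production setup - but it shows how an untrusted cloud host could do useful computation on data it never sees in the clear:

    # Toy additively homomorphic encryption (simplified Paillier).
    # Illustrative only: tiny hard-coded primes, no padding, not secure.
    import math
    import random

    p, q = 104729, 104723          # real deployments use ~2048-bit primes
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # valid because g = n + 1

    def encrypt(m):
        """Encrypt an integer m < n under the public key (n, g)."""
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                break
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        """Decrypt with the private key (lam, mu)."""
        L = (pow(c, lam, n2) - 1) // n
        return (L * mu) % n

    # The untrusted host multiplies ciphertexts, which adds the plaintexts,
    # without ever seeing 1234 or 5678:
    c_sum = (encrypt(1234) * encrypt(5678)) % n2
    assert decrypt(c_sum) == 1234 + 5678

Fully homomorphic schemes extend this idea to arbitrary computation at a steep performance cost, which is exactly why abundant compute cycles matter here.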

And I'm with you, the socioeconomic aspects of AI are far more important to curtail, but that doesn't mean we should be ignoring the very real threat of people attacking infrastructure, sewage and power plants. And we shouldn't be waiting until after it's a problem, when it can already be seen as a certainty that it will happen eventually.


Perhaps this is the Great Filter. Nuclear war, societal collapse, supervirus... all these can come from rogue / misused AI


There are really two kinds of attacks: ones that involve subverting existing networked devices to perform malicious acts (hacking), and making one's own malicious devices (building).

Of the two, attacking existing devices sounds scarier because there is a large fleet of vulnerable existing devices that can potentially be exploited. However, of the two problems, it's also the one we know more about: it's basically straight up cyber security. The AI aspect is mostly a tangent, a way to control the system once you have access.

It seems the most straightforward way to defeat these attacks is to increase their cost by promoting better cyber security (sandboxing, use of safer languages, cryptography as appropriate, etc.). That's not to say it'll be easy, but the problems are largely known problems.

On the other hand, anything that involves the attacker actually building something themselves is either going to be much smaller scale, or is going to leave a physical trail that can be traced like we do for any other sort of weapons mitigation.


I think it's a spectrum; or rather, weak and strong AI are force multipliers:

A hacked car is dangerous, many hacked cars are more dangerous, a pack of hacked cars more so. As is a hacked car that can hack other cars, and recruit them for the pack.

Note this isn't hyperbole as such; we already have experience with computer viruses and worms.

It seems quite reasonable that a bot network might work semi-autonomously toward a "goal", in the same sense ants might "hunt" for sugar.
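As a rough illustration of that ant analogy (every name and number here is invented for the example), independent agents following a dumb local rule - move toward stronger "scent", with a bit of noise - will converge on a goal with no central coordination at all:

    # Toy sketch: independent agents with a simple local rule collectively
    # "hunt" for a goal. Purely illustrative.
    import random

    SUGAR = (20, 20)  # the "goal"

    def scent(pos):
        """Stronger (less negative) the closer pos is to the sugar."""
        return -(abs(pos[0] - SUGAR[0]) + abs(pos[1] - SUGAR[1]))

    def step(pos):
        """Move to the neighboring cell with the strongest scent, with some noise."""
        moves = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
        if random.random() < 0.2:        # occasional random exploration
            return random.choice(moves)
        return max(moves, key=scent)

    agents = [(random.randint(0, 40), random.randint(0, 40)) for _ in range(50)]
    for _ in range(200):
        agents = [step(a) for a in agents]

    near = sum(1 for a in agents if -scent(a) <= 2)
    print(near, "of 50 agents ended within 2 cells of the sugar")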


The answer is regulating data - not algorithms or hardware - the data. I'm not saying it should happen, but if you want to regulate AI then you have to regulate the data that it learns from. That means strict rules on the storage, retrieval, scale, use, collection, etc. of images, video, text, metadata, EM emissions - really anything that produces a measurable "signature" of some sort.

It's how we regulate nuclear [1] - the "source material" is the most important part, while the refineries, delivery mechanisms etc are secondary.

Doing this would put serious brakes on the tech industry so I don't see anything close to this happening honestly.

[1] https://www.nrc.gov/materials/srcmaterial.html


Are you serious? That kind of widespread banning of information is fundamentally incompatible with a democratic society.

It's absolutely preposterous to think that I could prevent someone from downloading and using a database of pictures, EM emissions, basically anything that a government deems dangerous, via threat of State violence.

That is as draconian and authoritarian as it gets. How do you even enforce such a law? You would need unrestricted access to every system.

Otherwise, you are only harming legitimate researchers and enthusiasts, not criminals. Criminals are still going to access and download whatever the hell they want.

How do we determine what can "help" AI and what can't? Is it the number of sources you have that should be restricted? Is there a committee that decides this? Are the proceedings open to civilian involvement? Who decides what is okay and what isn't?

Can corporations or government organizations (military) apply for this data? Why can groups of people with lots of money and infrastructure have it, but not me? How does this prevent a further segregation of socioeconomic classes?

Comparing nuclear "source material" to random data like images, video, metadata, etc is downright deceitful. If you aren't being purposefully deceitful, you really need to reflect upon exactly how such a law would be carried out, the logistics of its enforcement, and the limitations. It's a logistical nightmare and is infinitely more complex to enforce than it should be, not to mention downright anti-democratic.

Damn right it shouldn't happen.


Agreed. Again, it would be terrible for society, but if you wanted to regulate AI it would be the more effective way to go about it IMO.

There was a short story based on an implementation of this; I don't recall what it's called.


It's not at all the most effective. It's only the most effective if you severely limit what factors into that.


> We need to assemble all the work / treaties / regulations / research we put into securing nukes into securing AI / robotics and for the identical reasons.

I disagree. I think we need a fundamentally different approach to AI than to nuclear weapons. The proliferation of nuclear weapons was controllable because the components needed to develop a nuclear weapon included specialized, controllable physical goods and fairly recognizable industrial installations.

I do agree that we need international treaties prohibiting the development and use of AI weapons technology to avoid encouraging an arms race.

However, I think that trying to prevent the spread of AI tools and technology will face similar problems to the US's attempts to prevent the spread of encryption tools and technology. It is fundamentally harder to control the spread of information than the spread of physical goods.


> I do agree that we need international treaties prohibiting the development and use of AI weapons technology to avoid encouraging an arms race.

International treaties prohibiting the development and use of missile technology to prevent an arms race in the fifties and sixties would have prevented man from landing on the moon. We called that arms race the Space Race to sell it to the public. Maybe we do need an AI arms race.


I think you are absolutely correct. I think that people who are more entrenched in the field are, even subconsciously, unwilling to accept what is to come, knowing that the unavoidable regulation will limit their creativity and just make their lives more miserable. But they know...


Read the book Daemon if you want to see creative use of this type of tool.



