Reading the comments on this article makes me realize how unseriously most engineers and software developers are taking this.
These threats are real.
The weaponization of everything is actually happening. Since I wrote about it a month ago (Self-Crashing Cars), a number of people have reached out, including people with actual insight into the military aspect of it. Militaries around the world are getting ready for true AI-enabled weapon systems, and they're building deterrence strategies for mass-casualty cyber attacks (including nuclear weapons response); whether it's from hacked industrial plants or cars doesn't matter. They're actually talking about the weaponization of cars at the Munich Security Conference.
We need to stop burying our heads in the sand and write to our politicians about this threat. I know it sounds crazy but it's real.
As an aside, my main complaint about the people who truly understand this is their inability / unwillingness to accept that the act of subverting systems capable of mass destruction via cyber attack amounts to cyber weapons of mass destruction. We need to apply all the work / treaties / regulations / research we put into securing nukes to securing AI / robotics, and for the identical reasons. We know how this ends otherwise. We need wide-spread government funding and we need to communicate what these things are in language that our governments understand. Refusing to say something that is true just because it sounds weird is counterproductive.
You can trust that people in the US Military are taking this very seriously. I know because I, alongside LtGen Shanahan and Eric Schmidt, briefed Secretary Mattis personally about these issues, and this is part of my ongoing work within the IC/DoD.
You're correct though that most line DL engineers don't have these issues top of mind. I don't know any ML researcher worth their salt who hasn't thought about them, but the tendency is to brush the concerns off until we're closer to more generalized systems.
That is not a wholly unreasonable position for many reasons, especially given the history of hype around AI, but I'd like to see more discussion happening between junior and mid-level engineers about these things - and especially more work being done on human-machine interfaces in the AI context, into which very little design thought has been put.
Here's the crux of the issue with respect to nuclear WMDs:
In order to prevent citizens and other countries from creating nuclear bombs, our government severely limits access to relevant tools and materials, and actively seeks to censor the knowledge of how to build modern nuclear devices from other parties.
In order to prevent citizens and other countries from creating dangerous rogue AIs, our government _____________
This is the problem. WMDs take a lot of infrastructure and can be tested/inspected.
AIs can be built/modified by anyone in their basement. The genie can't be put back into the bottle. It's like trying to outlaw computer viruses and hoping that will work.
AI needs compute cycles. These are quite centralized outside of botnets.
I've pointed this out before, but the primary commercial use cases of AI right now are malicious: mass population surveillance and behavior manipulation (ads; political and otherwise). My biggest concern is the ML researchers who dismiss malicious use as a distant concern while they work on projects that are malicious right now.
Homomorphic encryption + cloud services + VPN chaining. If there's one thing that won't be in shortage for the foreseeable future, it's compute cycles.
And I'm with you, the socioeconomic aspects of AI are far more important to curtail, but it doesn't mean we should be ignoring the very real threat of people attacking infrastructure, sewage and power plants. And we shouldn't be waiting until after it's a problem, when it can already be seen as a certainty that it will happen eventually.
There are really two kinds of attacks: ones that involve subverting existing networked devices to perform malicious acts (hacking), and making one's own malicious devices (building).
Of the two, attacking existing devices sounds scarier because there is a large fleet of vulnerable existing devices that can potentially be exploited. However, of the two problems, it's also the one we know more about: it's basically straight up cyber security. The AI aspect is mostly a tangent, a way to control the system once you have access.
It seems the most straightforward way to defeat these attacks is to increase their cost by promoting better cyber security (sandboxing, use of safer languages, cryptography as appropriate, etc.). That's not to say it'll be easy, but the problems are largely known problems.
On the other hand, anything that involves the attacker actually building something themselves is either going to be much smaller scale, or is going to leave a physical trail that can be traced like we do for any other sort of weapons mitigation.
I think it's a spectrum; or rather, weak and strong AI are force multipliers:
A hacked car is dangerous, many hacked cars are more dangerous, a pack of hacked cars more so. As is a hacked car that can hack other cars, and recruit them for the pack.
Note this isn't hyperbole as such; we already have experience with computer viruses and worms.
It seems quite reasonable that a bot network might work semi-autonomously toward a "goal", in the same sense that ants might "hunt" for sugar.
The answer is regulating data - not algorithms or hardware - the data. I'm not saying it should happen, but if you want to regulate AI then you have to regulate the data that it learns from. That means strict rules on storage, retrieval, scale, uses of, collection etc... of images, video, text, metadata, EM emissions, really anything that produces a measurable "signature" of some sort.
It's how we regulate nuclear [1] - the "source material" is the most important part, while the refineries, delivery mechanisms etc are secondary.
Doing this would put serious brakes on the tech industry so I don't see anything close to this happening honestly.
Are you serious? That kind of widespread banning of information is fundamentally incompatible with a democratic society.
It's absolutely preposterous to think that I could prevent someone from downloading and using a database of pictures, EM emissions, basically anything that a government deems dangerous, via threat of State violence.
That is as draconian and authoritarian as it gets. How do you even enforce such a law? You would need unrestricted access to every system.
Otherwise, you are only harming legitimate researchers and enthusiasts, not criminals. Criminals are still going to access and download whatever the hell they want.
How do we determine what can "help" AI and what can't? Is it the number of sources you have that should be restricted? Is there a committee that decides this? Are the proceedings open to civilian involvement? Who decides what is okay and what isn't?
Can corporations or government organizations (military) apply for this data? Why can groups of people with lots of money and infrastructure have it, but not me? How does this prevent a further segregation of socioeconomic classes?
Comparing nuclear "source material" to random data like images, video, metadata, etc is downright deceitful. If you aren't being purposefully deceitful, you really need to reflect upon exactly how such a law would be carried out, the logistics of its enforcement, and the limitations. It's a logistical nightmare and is infinitely more complex to enforce than it should be, not to mention downright anti-democratic.
> We need to apply all the work / treaties / regulations / research we put into securing nukes to securing AI / robotics, and for the identical reasons.
I disagree. I think we need a fundamentally different approach to AI than to nuclear weapons. The proliferation of nuclear weapons was controllable because the components needed to develop a nuclear weapon included specialized, controllable physical goods and fairly recognizable industrial installations.
I do agree that we need international treaties prohibiting the development and use of AI weapons technology to avoid encouraging an arms race.
However, I think that trying to prevent the spread of AI tools and technology will face similar problems to the US's attempts to prevent the spread of encryption tools and technology. It is fundamentally harder to control the spread of information than the spread of physical goods.
> I do agree that we need international treaties prohibiting the development and use of AI weapons technology to avoid encouraging an arms race.
International treaties prohibiting the development and use of missile technology to prevent an arms race in the fifties and sixties would have prevented man from landing on the moon. We called that arms race the Space Race to sell it to the public. Maybe we do need an AI arms race.
I think you are absolutely correct. I think that people who are more entrenched in the field are even subconsciously unwilling to accept what is to come knowing that the unavoidable regulation will limit their creativity and just make their lives more miserable. But they know...
The disturbing thing about this paper to me - flashy though it may be - is what they left out rather than what they kept in.
OpenAI appears to only be thinking of the crimes-against-individuals segment of malicious AI, rather than the crimes-against-humanity type of malicious AI that the surveillance advertising corporations who are supporting OpenAI are building.
I am far, far less worried about an assassin's drones using AI to find a politician in a crowd than I am about Facebook using pictures of me that other people have posted and tagged me in, so that my face is used to track my movements, and the movements of every other human on the planet, everywhere we go, and selling that information to everybody who wants a copy, and giving it away at the request of the local police.
I'm more concerned about Google using AI to mine every conversation I've ever had or my browsing history to classify me as a dissident before I apply for a visa to travel to China or the United States, or as a deadbeat before I apply for a bank loan, or sick before I apply for insurance, or as unrehabilitatable before I apply for parole.
The hackers-on-steroids narrative is a smokescreen for fully automated corporate fascism.
> I'm more concerned about Google using AI to mine every conversation I've ever had or my browsing history to classify me as a dissident before I apply for a visa to travel to China or the United States, or as a deadbeat before I apply for a bank loan, or sick before I apply for insurance, or as unrehabilitatable before I apply for parole.
I'm quite paranoid about this, yet whenever I speak to people about it either people don't care, or already accept it's happening and inevitable.
I think part of the problem is many of us already feel we've lost the battle for privacy. Although, I'm not sure we ever seriously attempted to fight for it. Every street in cities in the UK is full of CCTV cameras. The underground and buses track where you travel. Our internet is monitored and logged. This isn't a future problem that will manifest from greed and advances in AI, it's something we all accept and deal with today.
In fact, a lot of people will say to this, "if you don't do anything wrong you've got nothing to hide". They welcome it.
I've almost given up. I'm pretty much despondent at this point.
There is a booth on my way to work where they give out free Lays chips in exchange for your photo. The gimmick is that it's like a photo booth. I asked the attendant to read the privacy policy, and it's frightening, to say the least. I have no doubt that they are contracting with some AI consultant to be able to identify your face in the crowd and link it to your email address, which of course can be used to identify you freely on the Internet.
People are freely giving away their identities and their rights for a free bag of potato chips.
I always try to make sure my face is covered when I walk by it, but I can't help but despair a little when I realize that I am still surrounded by security cameras. At least one would hope that the MTA police are too busy or understaffed to do anything malicious with my face.
It's the same as the Microsoft "How old are you?" and Google "What painting do you look like?" apps. People don't think about what's going on behind something that seems trivial.
Somebody, probably a political organization, ran an opinion poll on the UN and got anonymized aggregated results. What part of that made you delete the app? If you get a political phone survey, will you unplug your phone?
In Microsoft's case, my guess is that it was just a research experiment. It worked pretty well on a few family photos I threw at it.
I think that the site is not even working anymore. Anyways, they said they were not keeping the images, and I have no reason to doubt it.
It's a pity that Microsoft is going only after the corporate market, Oracle style, and not trying to make money from consumers any more. That leaves Apple as about the only big independent player in that field. We could have benefited from the competition.
Have you ever stood in line for passport/ID control at an airport or anywhere else any time in the past decade? If yes, that government's security and intelligence services (and anyone they share data with) have enough video frames of your face to build a high fidelity facial recognition model. And it's linked to your name, nationality and passport number.
Two wrongs don't make a right. Just because one party is doing it, doesn't mean we want everyone doing it, or even that we wanted it to ever happen at all.
I recently noticed when coming back through UK passport/ID control that there are now boxes hanging from the ceiling in the waiting area, I assume to scoop up IMSIs/WiFi/Bluetooth/NFC MAC addresses from phones and tablets. All tied to passport/facial biometrics, presumably.
I'm speculating, you could try asking GCHQ. With SDRs so compact and cheap these days the chances are high that they are installed in such places as airports and train stations.
It is fairly safe to assume that turning things off should prevent most data leakage, but for the truly paranoid you would have to remove all the sewn-in RFID tags from your clothes and not carry contactless-enabled cards.
Haven't they had your passport photo on file, like, forever? Probably since before digital archives even existed. The USA is a bit different in this regard, but every country with a national ID (i.e. most of the world) has your face linked to your name, nationality and document number, that's how ID works, it gives an ability to identify people.
The "best" part is how good an AI would be at privilege escalation. Go ahead, build out that security camera network, automate your cars, store your money using cryptography. I'm a geek and I love the way technology increases our collective prosperity, but most of our security relies on the attackers being approximately as smart as any other human. To a sufficiently capable intelligence, encrypted machine code is just as understandable as the most elegant source code. Once you fully understand something, anticipated edge cases are easy to find and exploit... every day we expand the capabilities of a future strong AI.
> To a sufficiently capable intelligence, encrypted machine code is just as understandable as the most elegant source code.
Depends on what you mean by "sufficiently" and "encrypted"!
I'm not an expert, but my guess is these are still truly difficult, deep unsolved questions... We don't even know yet if one-way functions exist; and existing modes of analysis increasingly seem insufficient to fully account for the apparent recent successes of this last wave of 'AI' on ostensibly non-convex optimization problems! (Not to mention the debates about the possibility of fundamental physical/computational complexity limits to intelligence explosion, and so forth...)
So who knows at this point; but I think it is still an open question whether a future AI could reasonably 'break' even today's commodity encryption.
The problem is that we don’t know whether our encryption is vulnerable to advanced attacks. Cryptographic hash functions actually have a useful lifespan based on the math and the cost of computing. The more time we spend analyzing the math, the more likely we are to find unexpected weaknesses.
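To put rough numbers on the "cost of computing" side, here's a back-of-the-envelope sketch (my own toy figures, assuming a hypothetical attacker doing 10^18 guesses per second; nothing here is about any specific hash function):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(bits: int, guesses_per_second: float) -> float:
    """Years needed to try every value in a 2**bits search space at the given rate."""
    return (2 ** bits) / guesses_per_second / SECONDS_PER_YEAR

# Hypothetical attacker rate: 10^18 guesses per second.
for bits in (64, 80, 128):
    print(f"{bits}-bit space: ~{years_to_exhaust(bits, 1e18):.2e} years")

# 64 bits falls in seconds, 80 bits in about two weeks, 128 bits in ~1e13 years;
# the margin shrinks as hardware gets cheaper or cryptanalysis shaves bits off.
```

The "useful lifespan" point is that both terms move against you over time: the denominator grows with hardware, and analysis can quietly knock bits off the exponent.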
It depends on the rate of improvement of AI and what the intrinsic strength of defence is against attack. Arguably, security will start incorporating AI techniques, and by the time AI attackers surpass human capability, AI defenders would likely be too strong for humans, and quite possibly strong enough to hold against a higher intelligence.
Why? It's pretty simple, if we computerize everything a future AI is going to have more leverage over the world. If you believe Cory Doctorow when he says that everything is a computer, then everything is vulnerable to future AI.
>I'm quite paranoid about this, yet whenever I speak to people about it either people don't care, or already accept it's happening and inevitable.
Most people have no mental capacity to imagine the consequences of someone misusing their private information until either it happens to them or they see it happening in some specific scenario.
What bothers me is that everyone is being carefully conditioned to misplace the blame. Person A meticulously constructs a system designed to screw you over. They willfully neglect its security for years. Then person B comes along, pushes a button and triggers the system. These days 99% of the blame somehow goes towards person B. (Evil Hackers, or that one employee who forgot to install some update, or whatever.) Mass media is totally on-board with this kind of crap.
By the way, the same applies to the AI "threat". Should we worry about AI taking over all the world's smart toasters and simultaneously overloading them to cause mass fires? Or should we worry about the fact that those toasters somehow have processors, are Internet-enabled and run shit software?
* * *
Also, "the battle for privacy" is a bad mental model. There is no single battle you win or loose. It's a gradient. The more information you leak, the worse off you are in the long run.
> In fact, a lot of people will say to this, "if you don't do anything wrong you've got nothing to hide". They welcome it.
Weird that those people want their poops and bedtime routines live on TV globally. Having something private is not bad and anyone who thinks so hasn't taken a second to think about it.
I think the extra dangerous thing here is propaganda along with the AI to convince everyone it is okay, and I'd say that is already happening, as your comment describes.
Aside from electing responsible people to government in order to pass regulations, what can a regular person do to change the profit incentives at scale of the companies who are building the mass surveillance systems? It seems a dire and impossible situation.
> Aside from electing responsible people to government in order to pass regulations, what can a regular person do to change the profit incentives at scale of the companies who are building the mass surveillance systems? It seems a dire and impossible situation.
There's no need to be so fatalistic. The EU is about to enact the GDPR, a huge privacy-preserving piece of legislation. It seems like the large technology companies are going to give the same protections globally (verification is too much fuss). This is a huge win, and US citizens get it for 'free'.
Here are some ideas for more productive things than despairing on Hacker News:
- Vote with your neurons. Stop using services that you find objectionable, even if you enjoy them.
- Stay abreast of issues via organizations like the EFF, EPIC and the FSF in the US; ORG, Privacy International and FSFE in the UK, etc. Send them your cold hard cash.
- Speak to your representatives, and give them your angle. If they're being paid by the organizations you're worried about, tell them politely what you think of this.
- Socially, as charmingly as possible, let people know what you think about surveillance if it comes up. It's a fun one if framed correctly, as it doesn't divide down left/right lines, and no-one likes to sound authoritarian. Offer help and expertise to people who wish to make privacy-preserving changes (e.g. move off gmail, set up a pseudonym, configure a VPN, etc.)
I agree with pretty much all of the concerns you and many others have cited, and I worry about that stuff too. On the other hand, I think about the past. How much privacy did most humans have in history?
To be clear: I'm nothing like an expert on these topics.
I'm specifically not considering the wandering nomad types. I think that the vast majority of all humans who have existed lived in fairly insular communities.
So you're a member of a small town where most adults and kids work in agriculture in order to survive. The building you live in is crowded, and not just with immediate family. You sleep in the same room as a number of other people.
What privacy do you have? Cross talk/gossip moves very quickly and is pervasive. I believe if any member of this community does anything out of the norm, every member will very quickly find out about it.
I also think that most any deviance from the norm was met with, at best, suspicion.
Are you free to leave and do your own thing? Sure, but isn't that quite risky? Maybe you could join another community...maybe. But the privacy situation remains the same.
Excluding the romantic but minority cases of the hardy frontier settler, and the like, I don't think humans have enjoyed much privacy at all, ever.
I'll say it again: I'm an amateur at best in these topics.
Isn't it basic human nature to, on average, form gossipy, minimally private communities?
I'm not forgetting the enormous differences between what I just described, whether it be accurate or not, and what's happening today with technology.
I'm just thinking that, perhaps, only a small percentage of all humans in history have enjoyed anything like real privacy.
PS: I'd love to receive clarifications/corrections on this from more knowledgeable folk.
One huge difference is that information to a large degree would stay within these communities. Gossip was inherently limited by the bandwidth of the jungle telegraph.
What's scary now, however, is anyone can publish any information about you, tag it to your face, and now everyone walking past you could theoretically face-search you to see any gossip about you.
Before, you could travel someplace else, should you do something that required it. Now you can't go anywhere.
In earlier times you could travel somewhere else and try the same thing again, one side-effect of which was the proliferation of serial scammers.
For a good person, in a sense that covers most people, having your past reputation follow you around is a benefit, the ability to wipe out reputation only benefits those with negative reputation. Furthermore, if there's a general assumption that it will follow you around, it acts as a deterrent to avoid doing immoral things, since you won't be able to easily walk away from that.
The obvious limitation to "anyone can publish any information about you, tag it to your face" is the well-established concept of libel. Publishing false harmful information is already forbidden, but they should be able to "warn the world" if the information is truthful.
I think asymmetry plays a role here. In communities as you describe the amount of information I have is roughly equivalent to the amount of information you have.
The amount of data Google has and the amount of data a government will have gives an order of magnitude advantage to whoever gets to harness it.
If you have to collect info on someone "the old fashioned way" (like being siblings with them or marrying them) the power differential in the relationship is bounded by definition. To me, privacy is almost like a vector with two components—the level of privacy, plus the degree of intimacy of the relationship.
But the consolidation of energy (capital) and influence that allows corporate organisms to drive the "norm" has not been practiced on the scale of billions of users. "privacy" seems to be somewhat synonymous with, "finding a viable existence in parallel with the existing system." With massive-scale ability to automate who is within a norm or outside of it, avoiding an arbitrarily defined norm becomes un-viable.
Lots of people care about this. I think the solution is to produce policy papers about this risk complete with proposals for how to go about curtailing those kinds of abuses. The EU still cares a lot about privacy, and other government bodies are happy to help if you can suggest how.
> ...and other government bodies are happy to help if you can suggest how.
Ah, you mean like the US punishing the company that leaked the Social Security numbers of one out of every two of its citizens? Or China, a country known worldwide for the Great Firewall? Or Russia, where you either hand over data about your customers to the government, or your business fails?
I mean, the EU's actually giving a shit about their citizens' privacy, but they are pretty lonely in that fight. My country (bordering EU) doesn't give a shit, and neither does Serbia, a country that's supposed to join the EU in the next decade (yeah, I know, wishful thinking).
Even if you live in a so-called democracy, chances are that you can have exactly zero effect. And there are all sorts of other problems which have much higher priority. FFS, I can't even run for president of my own country. It's in our constitution that you have to declare yourself a member of one of three ethnicities just to be eligible to run. The country lost three cases in the European Court of Human Rights because of this and similar fuckups in our constitution, the oldest of which will be a decade old next year, and our constitution is still unchanged.
I truly do care about my own privacy and about my fellow citizens' privacy. Yet, if I decided to advocate for certain things inside my own country to make it better, I would have at least three problems much higher on my agenda than online privacy.
You have to be strategic if your country doesn't care, but the first step is to create the intellectual edifice activists in the future will lean against. Also even as someone not in your country, I would still enjoy reading any writing on this.
>In fact, a lot of people will say to this, "if you don't do anything wrong you've got nothing to hide". They welcome it.
The usual rebuttal I see to this mindset is "well, there's so many laws out there you're bound to accidentally incriminate yourself for something minor".
Is that the real concern here? Are people more concerned with autonomous monitoring itself, or the potential outcomes/implications that come with it? I'd love to understand this mindset. (As an aside, I've always advocated for modernizing/reforming laws so that doesn't happen, instead of blaming new tools that help law enforcement do its job.)
I think we can agree there is some amount of potential benefit of this kind of pervasive monitoring (right?). I also think we can all agree there are potential downsides as well. In an ideal world, I would think we'd take an approach that minimizes the downsides while maximizing the upsides, but if people are actually taking concern with the loss of privacy to autonomous programming (i.e. non-humans), I have no idea how to measure how "good" such a system is.
(As an additional aside, there's obviously potential for misuse by human actors, but that seems like a separate issue to tackle as well: everything has potential for misuse, and needs the proper precautions and safeguards put in place to ensure it isn't misused.)
>a lot of people will say to this, "if you don't do anything wrong you've got nothing to hide"
That kind of assumes legally OK is everywhere and always the same as ethical, morally OK. Which is I guess easy to do, until life teaches you personally that there's a wide gulf between the two.
"You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists." - Abbie Hoffman
> I'm quite paranoid about this, yet whenever I speak to people about it either people don't care, or already accept it's happening and inevitable.
I have yet to see much in the way of concrete harms from the supposed privacy issues plaguing online advertising. Everyone has a favorite hypothetical situation where Google dobs you in to the Chinese government, but no evidence that anything of the sort is actually happening.
Here's a simple one that you can try at home. This isn't one of those 'the facebook app listens to me' privacy breaches, it's more concrete and provable. Take a friend you haven't talked to in a couple of years, and start chatting with them over WhatsApp. After some days of this, go to your Facebook feed and watch how their posts now appear prominently, even though they didn't before you started chatting.
This is simple and harmless, but this same information has other uses that are not so harmless. They may not be being used right now, but they may start to be used in the future.
The problem manifests in different ways though. I'm not so concerned about obvious violations of privacy like the one you mentioned, because it's so obvious that they are wrong.
Let's say you tweet some anti-American stuff from time to time and the US government decides they want you to stop. They probably won't put a bullet in your head, but they might dig up information on you and arrest you for something totally unrelated.
This happens already. It doesn't really matter what you think of them, but people like Martin Shkreli in the US and Tommy Robinson in the UK were obviously targeted by the state because it didn't like what they were doing. It dug up information and got them on something unrelated.
AI and tech give governments the ability to take this to new levels. They might crash your self-driving car, or they might cause your smart TV to catch fire while you're in bed, and no one would know. We'd all carry on believing surveillance is in the interest of our safety.
Joe Nacchio [0] is a better example than Shkreli. Convicted of insider trading, allegedly in retaliation for his refusal to give customer data to the NSA.
Martin Shkreli wasn't targeted by the government for prosecution. Shkreli got the max sentence for a crime he committed because he was outspoken about how he was not remorseful for his crimes, nor did he show any capacity to even understand the seriousness of his actions. He also encouraged others to commit crimes for him during his sentencing. Judicial discretion isn't always perfect, but enforcing the max on an obvious danger to society was the correct course of action in his case.
I'm not arguing this. My point is simply that they wouldn't have gone after him if he hadn't surrounded himself in controversy. What he did happens all the time, but unlike many others, his outspokenness exposed the corruption and greed which occur daily in our economic system.
I think that really only becomes an issue in countries where there are laws passed that contradict each other and to get anything meaningful done you have to break laws and/or give bribes. At any time someone can come after you because you had to break some laws in the course of everyday business whether you wanted to or not.
> I have yet to see much in the way of concrete harms from the supposed privacy issues plaguing online advertising.
I haven't heard of nation-state level repercussions but the persistent profiling has already bitten a few people in the ass, such as the case where Target's profiling of a teenaged girl and their subsequent marketing betrayed her pregnancy status to her parents.
Nothing bad so far... For example, there are a lot of homophobic people out in the world, right? Now let's assume this article, https://osf.io/zn79k/, is correct in its conclusions, and boom, you've got machines that could potentially discriminate based on the CCTV footage of the building you entered for a job interview. This is just the start: scientists are actively looking for the genes that make people smart or more prone to disease, and an AI that determines, based on your genes, whether you're even fit for high school is seriously dystopic.
I think this shows that you don't understand what "risk" is. Do you allow random people into your home? They most likely have not done anything bad, until they do.
Let me just call up the director of the CIA and ask how frequently they use facial recognition technology to identify drone-strike targets in crowds.
Institutions have a long life. All the data collected today, too, will have an extremely long life. I am sure German Jews thought the census records, tax returns, synagogue membership lists, parish records that identified them as Jewish were harmless as well - until they weren't.
Suggesting that, amongst numerous possible scenarios of abuse, if there is nothing happening now then it has absolutely no potential for harm is an overwhelmingly ignorant position. Not to mention, the public would have absolutely no idea of such abuses even if they were happening.
But to actually answer your question, here is mobile malware [1] found by the Lookout team that used 3 different vulnerabilities to completely compromise an iPhone, all through a link opened in Safari. This allowed applications that encrypt messages in transit, but not at rest on the device, to be compromised. Foreign governments used this malware to target investigative journalists and activists.
Well, the good news is that fossil fuels will run out soon and then we'll all be too busy dying of looting and starvation while old datacenters are used by local warlords as death mazes for entertainment.
How is your concern different than what they are calling political security (~1/3 of the paper)?
> Political security. The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.
I guess it's the difference between what they're calling "authoritarian states", the mustache-twirling dictators of the 20th century targeting their political opponents (a fetish in papers like this), and the _new_ form of corporate-driven mass surveillance and resulting fascism that is emerging.
There is no need for an evil central power targeting threats to itself; instead we have seen/are seeing the rise of pervasive blanket surveillance and classification of _all_ people in a society, even non-political actors, by many different for-profit companies. This is combined with a new type of society that is entirely dependent on corporate services for almost all functions in life (food, travel, healthcare, communications, work, housing, etc). It's a recipe for disaster.
This kind of state is a _new thing_^tm and we have to be aware of it. We're the ones who are going to be oppressed by it, not just the people living under the Saddam Husseins of the world.
You are right on. To confine this worry to 'authoritarian states' is beyond naive.
Could an A.I. program like you are discussing have prevented the school shooting in Florida by alerting the police of the shooter's state of mind and intentions?
If an A.I. could save kids, is there any way we would not be demanding the A.I. protector be installed on every computer and mobile device today? It would be so easy to see how we would voluntarily give up power to this A.I. protector.
An AI could possibly have alerted the police to the shooter's intentions if it was monitoring the right things. However, a human did alert the FBI, and they failed to act.
With correct follow-through, AI could be a useful tool for narrowing down who is a credible threat, but I agree there's a huge risk in relying on it too heavily and punishing people for pre-crime.
I took a Sociology class in college called "Killing". Not once did we talk about "authoritative" or "legitimate" killings of any sort. It turned out that this class was specifically meant for 'Criminal Justice' majors, people who were going on to be police officers and prosecutors. We never talked about war, the death penalty, genocide, eugenics, abortion, differential reproductive success, famine, none of it. The whole class was about homicide.
The worst part: those people will leave the class thinking they have the whole story.
The part you mention won't ever enter their mind-- it isn't under the heading of "killing" because "killing" only happens to the groups covered in class.
Back when I was an undergrad, I took one sociology class (on a different topic) and the faculty seemed especially sensitive about that sort of manipulation. I can't find a reference at the moment, but it had something to do with the CIA consulting with academics about their activities in South America in the 70s.
It was in 2010 at the University of Minnesota. I actually got a degree in sociology, so I know from experience that the rest of the department was much more critically engaged. In my other classes, we were doing postcolonial studies of globalization, studying race/class/gender and reading Paulo Friere, Marx, etc. It was all solid.
But this "Killing" class is where I discovered that the department was segregated into distinct "tracks". They had one for general sociological studies, and one for people who were going on to be police officers and lawyers: the "criminal justice" track. This "Killing" class was part of the criminal justice track.
Later on, I took a class in CSCL (cultural studies and comparative literature) called "Aliens" and found myself wishing I'd discovered CSCL sooner.
I'm more concerned that AI will become a modern deity, in that it's considered science and ought not to be questioned. Things like predictions of academic success, criminal behavior, etc. that impact the rights of individuals.
Hey, why would you want to think for yourself, when you have a computer do it for you?
A lot of people unfamiliar with Machine Learning are already showing these tendencies. I am by no means an expert in the field, only an interested amateur, but I know about the limitations and I think it's important that people are taught the basics of ML. More people should view ML/AI as "a complex process minimizing a multivariable error function over a multivariable input field", instead of a "computer that thinks". This way, more of these applications of it might be questioned.
I think the only way to stop these tendencies is to teach the fundamental basics and limitations of ML in high school. It might be possible; you don't even have to introduce tensor algebra or even vector spaces to accomplish it at this stage of education.
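For anyone wondering what that phrase cashes out to, here is a minimal toy sketch (my own illustration, nothing more) of "minimizing a multivariable error function over a multivariable input field": plain gradient descent fitting two parameters to noisy data, assuming only NumPy.

```python
import numpy as np

# Toy data with a hidden linear "truth": y is roughly 3.0 * x + 0.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0   # the "multivariable input field" here is just the pair (w, b)
lr = 0.1          # learning rate (step size)

for step in range(500):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # ends up near 3.0 and 0.5 -- no "thinking", just error minimization
```

Scale the same loop up to millions of parameters and a fancier error function and you have most of modern ML; the impressive part is the scale, not any consciousness.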
> More people should view ML/AI as "a complex process minimizing a multivariable error function over a multivariable input field"
I don't know where you're from, but a statement like that here in the United States would surely get you an ass-kicking.
Ok - a bit of hyperbole there, but not far from it; most people would simply hear "wah-wah-wah-wah" and have no clue what you were talking about; others would fall asleep.
Could you teach high schoolers what "minimization" is, and what it means to "minimize" a "multivariable error function over a multivariable input field"? Maybe some of them, but even there I'd bet the majority won't get it, and those that do will only have it for the time it takes to pass a test, then promptly forget about it.
Honestly, there are plenty of other subjects I'd rather see taught in high school (and earlier) to kids, the greatest being critical thinking methods and how to question how you know what you know. Perhaps along with philosophy (general and "of mind"), and a few other similar subjects.
I doubt we'll ever see this any time soon, though, as such knowledge would undermine much of the status-quo, particularly religion.
Yeah well, I kind of know what you are aiming at. I am not a teacher or pedagogue, so I might be aiming too high with this. I don't mean to establish an AI class or in-depth coverage of these concepts, but some fundamentals on this topic, maybe as part of the mathematics curriculum (or philosophy, as you suggested), would be prudent.
>> Could you teach high schoolers what "minimization" is, and what it means to "minimize" a "multivariable error function over a multivariable input field"?
By the way, I phrased it in this particular way because it is mathematically correct but also takes the magic and notions of consciousness out of the topic.
> I am far, far less worried about an assassin's drones using AI to find a politician in a crowd than I am about Facebook using pictures of me that other people have posted and tagged me in, so that my face is used to track my movements, and the movements of every other human on the planet, everywhere we go, and selling that information to everybody who wants a copy, and giving it away at the request of the local police.
Definitely not a lawyer but, as I understand it, depending on jurisdiction and context, some social media postings may be considered "public" information volunteered without "reasonable expectation of privacy" ---in which case, awfully enough, anything goes...
Again, not a lawyer; but I wonder if there should be a right to clear-language, mandatory warnings ---like on cigarettes--- whenever you are about to post something that will not enjoy a "reasonable expectation of privacy" (and hence could be sold or used against you in the future, etc.).
I found out Google has been logging my location since I got an Android in 2014. You can see what Google has on you here: https://www.google.com/maps/timeline
My guess is normal people don't know or care enough to turn it off.
>> The disturbing thing about this paper to me - flashy though it may be - is what they left out rather than what they kept in.
I agree 100%
I would feel a lot more comfortable if we had an objective, think-tank-like organization doing the research, instead of the actual companies who are developing AI and have a vested financial interest in steering the public in the direction they want in order to lower people's concerns.
I've had concerns about this for a long time. Many people simply discounted me as a conspiracy buff when I brought up the dangers of AI. Now? Not so much.
At least Google and FB are big-enough targets that operate out in the open that they can be crushed by governments if they throw their weight around too much.
The things that scare me are actors that don't have flashy names. The invisible marketing companies funded by lobbyists. Astroturfers. Russian propagandists.
I've always found the group's name amusing... "Open" AI has been disconnected from observed reality for eons. Intelligence always involves information asymmetry of some kind.
I don't share your concerns at all. My daily feelings and activities wouldn't change whether Facebook and Google know about them or not. Contrast that to being killed by a drone.
Privacy concerns are rather an interesting intellectual problem, not a real one. If they were a real problem, people would change their use of those tools, but they don't.
You can argue that loss of privacy allows corporations to manipulate us, but I see only trivial effects. Look at how people vote and buy. I see more diversity and conclude less manipulation.
* Your admission, for you or your children, to a school or university is refused because more worthy people have applied
* Your job application is refused,
All because of a fuzzy social score that is mined from your contacts, your activities on social networks, metadata gathered from your smartphone in an opaque way, CCTV cameras, etc.
I mean, if we have a way to rate people, why not use it everywhere!
This is already partially implemented in China. I remember reading an example where, with a low social score, you have to pay a deposit, and with a high social score you don't.
In a way, that sounds efficient and meritocratic, but also ruthless if you end up with a low score.
edit: add a reference with some everyday life examples
Doesn't the vast amount of data we have allow for things like getting a loan without knowing a guy? As for point 2, if someone is more worthy... why is that bad? And why should an employer not use information easily accessible to them to determine the character of a potential employee when they only can have brief correspondence via interviews? I'm not sure if you're talking about future implications or what but it seems like this is the system that we exist in today. I don't think most of these decisions are based on a "fuzzy social score" though. At least not the topics you brought up.
I am talking about a future that is already being built today, piece by piece. I am well aware we are already judged with multiple biases, in an opaque way. But this is not systematic; hopefully Employer A has different biases than Employer B.
For me, the dystopic part would be to have a single, automated source of "truth", like the Sesame score, used everywhere.
It means that everything becomes easy or difficult because of that score. Suddenly you cannot rent, or only at a higher price, your insurance rate increases, it is harder to get a job...
Optimistically, in theory, it could make people "better", but who decides what "better" or "worthy" means?
Hah, I can agree with a single source of truth. A credit score is essentially what you are referring to. I know for sure you cannot rent without a large deposit with a terrible score, and some jobs will not hire you (especially GOVT jobs) when you have large debts and bad credit. Not sure about the insurance rates, but it certainly would not surprise me.
> As for point 2, if someone is more worthy... why is that bad?
Someone is "more worthy" according to an opaque machine learning algorithm. Your line is exactly the danger: people are going to confuse "the computer says X" with "X is true".
I agree that it could be frustrating, and maybe someone will not make that distinction. But either way, these are decisions that have to be made. What is your alternative to how the decisions are made? In the end, you are either leaving it to a human decision who is going to make a choice based on some heuristics and prejudice or an algorithm which is going to make a choice based on heuristics and prejudice. I'm not sure if you have a problem with the lack of distinction between a human and an algorithm or the idea of a lack of fairness. The decisions that require something like a machine learning algorithm were probably opaque to begin with. Fairness doesn't seem so different in either case...
However, there is a difference. If I don't get a job offer because I don't fit a human's biases of what they want, I can go somewhere else, to a human with different biases. The problem with AI is: how many Googles are there? We will likely end up with a handful of companies that rate you. Suddenly it does not matter where you apply for a job. If you did one thing that ruins/lowers these ratings, you may not get a job. The problem is that only a handful of entities decide what good is. If everyone is using these systems and you fail to meet their definition of a good person/hire, your life may suddenly become difficult.
Whereas before, people would have to decide that themselves, even if it's a shallow, cursory judgement. At least people are making their own decisions instead of blindly following a number/evaluation with criteria they do not know.
Well, that's just part of the problem with the connected world. If you do something wrong it doesn't take machine learning or AI to find out, you just have to google it. Someone hiring you for a job will most likely do this anyways... I have had recruiters even call me out on such things. It has nothing to do with an algorithm.
It could be problematic if there were only one source of subjective information, especially if any perceived past transgressions were irredeemable. But if they are factual, do you have a problem with that? E.g., you go to hire somebody and find out that it's not recommended by hiring_heuristic_x, based predominantly on factors X, Y, Z (X being he stole from his past 10 employers, Y being he never held a job for more than 6 months, Z being he commonly posts serious threats about people he doesn't like online).
Also, do you have any idea what actually occurs when you make a decision? I don't. I would love to know someone who does. I still follow it.
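To make the hypothetical hiring_heuristic_x above concrete, here's a toy sketch (purely my own illustration, not any real product) of how such a screener might fold factual factors X, Y, Z into a recommendation:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    thefts_from_employers: int    # factor X
    longest_tenure_months: int    # factor Y
    threatening_posts: int        # factor Z

def hiring_heuristic_x(c: Candidate) -> tuple[bool, list[str]]:
    """Return (recommended, reasons) so a human can see *why*, not just a verdict."""
    reasons = []
    if c.thefts_from_employers > 0:
        reasons.append(f"stole from {c.thefts_from_employers} past employer(s)")
    if c.longest_tenure_months < 6:
        reasons.append("never held a job for more than 6 months")
    if c.threatening_posts > 0:
        reasons.append("posts serious threats about people online")
    return (len(reasons) == 0, reasons)

print(hiring_heuristic_x(Candidate(10, 5, 3)))
# (False, ['stole from 10 past employer(s)', 'never held a job for more than 6 months',
#          'posts serious threats about people online'])
```

Whether real systems surface the reasons like this, or only the final verdict, is exactly the opacity being argued about upthread.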
Did I say I have a problem with people making a decision based off information? The problem is that these AI systems will be making decisions, and if everyone is only using a handful of vendors and their software just says "bad candidate" or gives you a low rating, people are just going to default to that. What it boils down to is a small number of systems making decisions.
It turns into a small number making decisions for many. Whereas without the system, it is decentralized and everyone is making independent decisions. There's also more room for nuance.
Do you think companies are going to develop their own AI and gather datasets on people just to evaluate potential hires? Train it for the requirements they feel are best for their company?
No, they are going to outsource it. So what is likely going to happen is only a small number of systems making these decisions.
Nothing's changed. People have always been unfairly refused loans or denied jobs. Presumably, these metrics wouldn't be used if they weren't better than the status quo.
> Privacy concerns are rather an interesting intellectual problem, not a real one. If they were a real problem, people would change their use of those tools, but they don't.
It is a real problem and plenty of people do change their use of their tools.
I may have been incorrect to use the word 'plenty.' So let me rephrase: people do change their use of those tools. I don't see how those people being insignificant statistically makes a problem any less real.
Well, you didn't say "majority", so the word "plenty" is OK actually.
I agree that it is a real problem for, let's say, millions of people.
But what counts for me is the majority opinion. There are all kinds of contrary (!) opinions among small subgroups of the population, and thus it isn't possible to adjust wide-ranging policies to those opinions.
I'm sorry, but you're naive. The day might come when anyone with an 'X' in their online username is considered a threat. Ridiculous? Sure. But how about the things you've purchased? Schools/churches you've attended? Meetings you've gone to? Friends you have? The problem is when the definition of a threat changes, you can't go back and change your past, but are stuck.
Being a Jew in Germany in 1910 wasn't an issue. They had nothing to hide. In 1940 there was no going back for them.
You are naive and live in a fantasy world if you are afraid of this.
Regarding the Jew example, the problem is not about having stated in public what you are but with societies which discriminate.
You might have a point saying "my life should not be transparent, so as to make it more difficult for a potential discriminatory society to attack me".
But it makes no real sense to worry about it, because it is so minor in comparison to the deep problems such a society will have. I know you disagree :)
> the problem is not about having stated in public what you are but with societies which discriminate.
Doesn't history teach us over and over that societies DO discriminate? I don't believe that basic human trait will ever change. If that could change, maybe someday we could do away with 'evil' altogether, but I don't see that happening either.
Just look at Internet justice vigilantes. If you're on the wrong side of a social issue, you could very well be targeted. I have a few unpopular opinions, and I keep them to myself for that very reason. You don't have to look too far to see what happens if you don't.
It may be superfluous to point this out but this seems to be talking about malicious uses of narrow AI, rather than malicious strong AI. The only defense against a malicious strong AI is, of course, a friendly strong AI.
A super-virus that has a 100% mortality rate, is incredibly infectious, and stays dormant for a year before killing the host. That's fiction (I literally just made it up). Is it impossible to say anything about it?
A bomb that is 1,000 times more powerful than a nuclear weapon. Again, currently science fiction. Can't say anything about that either?
The fact that it doesn't exist right now doesn't mean we can't say anything about it, and the fact that it's potentially dangerous means we should be trying, at least, to think about it. Assuming it really is as dangerous as some people claim, do you really want to wait around until it's no longer science fiction, and only then, after it's too late, start thinking about it?
A "superintelligent" AI that is to a human as a human is to a fly. Is it impossible to say anything about the world after it's been built?
Yes. It's not the fact that it doesn't exist; it's the fact that not only is it unclear whether it's even physically possible, but anything you come up with can just be handwaved away by this mystical property known as "intelligence". Your two examples don't have this problem, so they're not similar.
Most of the literature I've seen about AGI is about theories on how to build it, or theories on how to build it safely (e.g. so it has stable preferences that align with human values). Not stuff on "how to defend yourself against a rogue AGI", because there really is nothing you can say about that.
The problem may not have been your point (that we can't reason about strong AI), but the way you justified it.
Strong AI is not hard to reason about because it is science fiction. As edanm pointed out, it is entirely possible to reason about science fiction. The difference is that the examples edanm points out are extensions or modifications of things that do exist and that provides a basis from which we can reasonably extrapolate.
We can't reason about strong AI precisely because we don't have as good of an existing example from which we can extrapolate. The best we have is the human mind. The extrapolation from the human mind is precisely the basis for the argument that "friendly strong AI" is the best/only response to "malicious strong AI", since the thing that keeps malicious humans in check is "good" humans.
We don't even know if or to what degree 'super-intelligence' is possible. For all we know, there may be fundamental trade offs between the different skills necessary for general intelligence that may provide functional limits to the degree to which intelligence can be super human.
> We don't even know if or to what degree 'super-intelligence' is possible. For all we know, there may be fundamental trade offs between the different skills necessary for general intelligence that may provide functional limits to the degree to which intelligence can be super human.
While this is true, it would be extremely surprising if evolution's first stab at higher intelligence (humans) came anywhere near the theoretical limits of what intelligence could be if directly selected for.
Just remove the size and energy consumption constraints imposed on the human brain by what the human body can support, and you should be able to get a significant improvement without changing the base architecture much at all.
Digitize and allow for copy/paste of "brains", and now you've got 30,000,000 copies of Einstein working together on problems but each specializing in their own pursuits, each of which doesn't get tired and lives forever, and can open very high-throughput direct neural communication channels with the other Einsteins whenever they need to share knowledge.
Start tweaking the virtual brain's "chemical" environment, and you can probably supercharge them by doing all sorts of stuff that would kill a real human but is fine when all you care about is making a simulated brain work better (virtual megadoses of Ritalin, massive tweaks to neurotransmitter behaviors, etc.).
These are just a few obvious ideas, but it's hard to imagine that together they couldn't scale intelligence up at least past the point where we could ever hope to keep up, as long as we got it into digital form in the first place. You're correct that there will be an architectural "wall" somewhere, but chances are that human intelligence is more in the "vacuum tubes and punch cards" regime than the "Moore's law is done" one. Whatever is there when that wall is hit would be so far beyond us that it might as well be called super-intelligent, even if it can't improve further.
> We can't reason about strong AI precisely because we don't have as good of an existing example from which we can extrapolate. The best we have is the human mind.
It's actually human minds. Organizations of humans, like major corporations and governments, possess the sort of resources that we imagine a super AI would have.
>Not stuff on "how to defend yourself against a rogue AGI", because there really is nothing you can say about that.
Of course there's at least a couple things you can say. "You should have built a friendly one first", or "convince it you're harmless". Or more grimly, "become harmless".
> Is it impossible to say anything about the world after it's been built?
Well, actually no. It's possible to say one thing (and as far as I'm aware, only this thing): You'd better have an even smarter friendly AI on your side, or you're almost certainly boned.
> You'd better have an even smarter friendly AI on your side, or you're almost certainly boned.
Are you sure? Was the world better off with both the US and Russia having nukes? Perhaps, or perhaps we got incredibly lucky that we didn't destroy ourselves. It is quite possible that a world where we have a single malicious AI is better than a world where a malicious AI and a friendly AI are locked in a battle to the death.
This made me think of a riddle I heard recently from a high school student. "What is sexually transmitted, 100% fatal, and literally everyone you know is infected? ...Life."
That's fair, but arguing something isn't possible is very different from arguing that when it happens you can solve it with a bomb. The conversation is about a hypothetical strong AI. You can say strong AI isn't possible, like you can say warp drives aren't possible, and it's still valid to discuss what it might be like if they were possible.
That's exactly like claiming it's still valid to discuss what it might be like if hypothetical aliens invaded the planet. Should we build some big lasers to protect ourselves just in case?
Entertaining perhaps, but ultimately pointless and silly.
Ok, that makes the post I'm replying to a non sequitur. Weak AIs in the short term are going to be hosted on AWS or Google cloud.
Is somebody going to take a JDAM to Google's data center in Atlanta or Amazon's data center in Seattle? Blow up Facebook in Palo Alto? Won't the US Air Force have a problem with that?
What if it was distributed like bitcoin? If I was writing a dystopian sci fi novel the evil AI would evolve from a cryptocurrency network and have access to billions of dollars and autonomous corporations. Also the support of many humans who were invested in the currency. Just think of the politicians it could bribe/threaten with tens of billions in anonymous crypto.
I find it quite funny, rather intriguing, that we seem to have gone full circle on trusted sources of information. Historically, a face-to-face meeting was considered the ultimate legitimate and trustworthy way. Not stories or rumors or witness accounts, since the courts say people can be "deceived", "traumatised", etc. Then came microphones, cameras, and CCTV in the 20th century, and they became the ultimate trusted sources of information.
And due to AI and its rapidly increasing misuse by enormous conglomerates, it won't be long before videos are never trusted but rather treated as comedic rumor and folklore, and we will go back again to how it always was.
...until replicants come.
I'm saddened that there are actual "smart" people who waste their days working on these malicious forms of AI, be it Google's almost entire arsenal or anything else. I'm not surprised they do, but it is still sad.
Your comment reminds me of a silly little graph I saw posted to reddit once, basically stating that the prevalence of things like miracles and witchcraft was high throughout human history until the development of the camera, where it stayed low until the development of Photoshop.
Depends on the community. People are gullible enough on Facebook. I think seeing friends and family like or share a thing tends towards herd behavior. Whereas on, say, 4Chan, where users are generally anonymous and usually have no repercussions for their posts, everything remotely worthy of skepticism is “shopped”.
> I'm saddened that there are actual "smart" people who waste their days working on these malicious forms of AI, be it Google's almost entire arsenal or anything else. I'm not surprised they do, but it is still sad.
I'm sure the usual justification, the salve applied to one's conscience for this sort of activity, is the trope that the 'bad guys' will do it anyway, so we need to do it before them to counter them and be the torch-bearer of liberty.
The atom bomb was developed upon that fear and pretext. Compared to that AI is a fairly mild thing.
Trope about bad guys? Nazi Germany and Imperial Japan worked hard to get the atom bomb. If the 'bad guys' had created it first, would the world be a better place?
The bad guys _did_ get it first. If Germany or Japan developed it first, they would be sure to go down as the "good guys" in the history textbooks, and you'd be on hackernews wondering how awful it would be if the demonic United States of America developed it first.
This comment shows either an incredible ignorance of history or a pathological view of right and wrong. Nazi Germany was exterminating whole races in the millions. Imperial Japan was doing the same, averaging 100,000 dead Chinese, Koreans and Vietnamese PER MONTH for over 8 years. Tokyo newspapers regularly published head counts for officers who were in head chopping contests of villagers in areas where the population needed to be suppressed. They would roll into a village and just start lining up people to cut off their heads. The Rape of Nanking by itself stands out as one of the most brutal events of the war.
The US isn't perfect but to say it was the "bad guy" in the war isn't an argument supported by anyone's facts.
Crypto works for establishing trust in computing systems - it can make some claims about the transformations of the captured data or about the integrity of the software handling those transformations, but it fundamentally can't make solid claims about the reality that's supposedly being captured. You can't put crypto between reality and a sensor.
At best, crypto can give you a statement like "something possessing the particular secret X claims to have sensed this data at this point in time", combined with "there's a device with secret X whose software/firmware integrity is verified and signed by entity Y". Crypto can't ensure that the secrets on the device aren't leaked by the manufacturer to enable "verifying" of arbitrary data outside of that device, and there's always the option to simply ensure that the sensor "sees" what you want; a camera and all the crypto on it can't tell whether it's pointed at a real event, at a staged event, or at a sophisticated optical device projecting arbitrary photoshopped data.
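To make that concrete, here's a minimal sketch, assuming the third-party Python `cryptography` package and hypothetical attest/verify helpers, of exactly what such a signature does and does not prove:

    # Stand-in for "secret X" baked into the device; "entity Y" would publish the public key.
    # Assumes: pip install cryptography
    import time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # secret X, sealed in the device
    public_key = device_key.public_key()        # published / vouched for by entity Y

    def attest(sensor_bytes: bytes):
        """The device signs whatever its sensor handed it, plus a timestamp."""
        claim = str(time.time()).encode() + b"|" + sensor_bytes
        return claim, device_key.sign(claim)

    def verify(claim: bytes, signature: bytes) -> bool:
        """Proves only that something holding secret X signed this claim; it
        cannot prove the sensor saw a real scene, or that secret X never leaked."""
        try:
            public_key.verify(signature, claim)
            return True
        except InvalidSignature:
            return False

    claim, sig = attest(b"raw pixel data from the sensor")
    print(verify(claim, sig))  # True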
I would suggest that we shouldn't ever rely on digital archiving of important information. There should always be a copy of the information in analog, that can be dated & verified with analog methods.
This is one feature a blockchain excels at. People have stored the Bitcoin white paper in the blockchain. Anyone can then download it, and verify it is the untouched original.
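The verification side of that is simple to sketch. Assuming a SHA-256 fingerprint was recorded on-chain earlier (the value below is a placeholder, not the white paper's actual digest), checking a downloaded copy is just:

    import hashlib

    # Placeholder; in practice you'd read this fingerprint from the chain.
    RECORDED_FINGERPRINT = "<sha256 hex digest recorded on-chain>"

    def fingerprint(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    if fingerprint("bitcoin.pdf") == RECORDED_FINGERPRINT:
        print("Untouched original: digest matches the recorded fingerprint.")
    else:
        print("Digest differs: altered file, or not the same document.")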
Going fully digital with info and verifying with blockchain is all well and good, until a malicious actor forbids/prevents people from using the blockchain somehow. If you've gone fully digital at that stage, you're in trouble.
Blockchain is still handy to verify that no one tried to rewrite history and pretend the hash was different in the first place. And hashes on larger documents are never entirely safe from collisions.
You can have 1,000 companies that act fairly and don't use AI for malicious purposes, but there's always that one company or community that doesn't... and then someone sends you gay porn with your face in it.
I'm not too worried about that. The moment that type of technology becomes widely available is the moment this type of blackmail loses all edge. You might even actually start doing gay porn IRL and people will assume that it's been "deepfaked".
The corollary is a little more worrying: any kind of incriminating document about a politician or public figure will be dismissed as a fake immediately. I mean, they already do that, but it'll become even harder to figure out what's real and what's not.
That "grab them by the pussy" tape? Obviously fake. I mean, you don't even see the guy talking, just the audio, how gullible can you be?
That girl running away from the napalm bombing? Obviously fake. I mean you're going to tell me that all of her clothes burned but she's still fit enough to run? Everybody around her wears clothes. Come on man, are you new here?
That Chinese guy standing in front of a military tank with groceries? Come on, I can do a more convincing fake in 10 seconds on my smartphone. There, look, I just did.
We have a brave new world ahead of us where you won't be able to trust anything you see or hear through any media, no matter how convincing it seems. That's pretty terrifying IMO.
I remember a while ago stumbling upon a conspiracy theory forum where people were claiming that a video of an interview with Julian Assange was a fake because there were a few strange visual artifacts around his face sometimes. Given that the quality of the video was very good and the oddities were rather minor (possibly encoding artifacts) I dismissed it as the usual tinfoil hattery.
I think in the future I won't be so sure anymore. I'm not sure if the technology to make such a good quality fake already exists but it's probably a matter of years before we get there. If some people with too much time on their hands manage to make somewhat convincing porn montages for free on the internet what can big three letter agencies do? What does the state of the art look like? What will it look like 10 years from now?
I know that "why not blockchain" has become a cliche and I agree that majority of proposed use-cases seem like hammer desperately looking for the nail, but maybe this is an area where it could be indeed useful?
- Create a special "evidence camera" that allows photos taken with this camera to be used as an evidence.
- When you take the photo, the camera posts digital fingerprint of the photo on the blockchain.
- To prove that camera internals have not been tampered with, it also signs the fingerprint with the "camera private key". The private key is destroyed when the camera case is opened: for the sake of the argument let's say that the value of the air pressure inside the tightly locked camera case is the private key.
- The public key of the camera is publicly known, so everyone can verify signatures made with the private key (see the sketch after this list).
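Here's a rough sketch of that flow, with a hypothetical post_to_chain() standing in for whatever ledger would actually record the fingerprint (again assuming the third-party Python `cryptography` package):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()   # sealed inside the tamper-evident case
    camera_pub = camera_key.public_key()        # published so anyone can verify later

    def post_to_chain(record: dict) -> None:
        # Placeholder: in the real scheme this would be a blockchain transaction.
        print("posted:", record)

    def capture(photo_bytes: bytes) -> None:
        # Fingerprint the photo, sign the fingerprint with the camera's key,
        # and publish both so the photo can later be matched against them.
        digest = hashlib.sha256(photo_bytes).hexdigest()
        signature = camera_key.sign(digest.encode())
        post_to_chain({"fingerprint": digest, "signature": signature.hex()})

    capture(b"raw JPEG bytes from the sensor")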
The challenge of your solution is that these situations are those that most deeply need anonymity.
You want proof of the veracity of the audio or visual evidence but the person(s) taking it and getting it out to public view desperately need to not be identified. Or at least they can't be connected to any network while collecting evidence (i.e. video of something unseemly happening in China/Somalia/DRC).
Deepfakes are a very hard problem and solely technical solutions are unlikely. A mix of context, reputation of the recording provider, and technical analysis is far more likely to be the right approach than any flavor of the month technology, be it block*, Erlang, Rust, or capsules.
At best, that setup lets you prove that a certain pattern of light was present on the camera sensor at a certain time. It says nothing about the way the pattern was created, whether the camera was pointed at a real scene, or whether someone projected a faked movie into the camera.
By extension, any scheme to prove the truth of arbitrary measurements (audio, video, anything else) is vulnerable to manipulation of the measured value itself. The only way to be sure that something isn't fake is to experience it yourself (at least until virtual reality improves far enough to make even personal experience unreliable).
> We have a brave new world ahead of us where you won't be able to trust anything you see or hear through any media, no matter how convincing it seems. That's pretty terrifying IMO.
Anyone can correct me if I'm being a bit dramatic but personally, it feels like we are very much already there.
PS:
> 1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
I never knew that research in security was so deeply underfunded that you had to write pleas into a paper. Job-security research - it's really that dangerous.
I really enjoy the example set by ClarifAI; the ability to search terabytes of video with the help of tags is going to be a very nice boon for any totalitarian regime in the future.
Evil startup idea gleaned from paper: Use AI/ML to scour a sales prospect's online persona (social media) and build a 'vulnerability profile' and generate targeted, personalized cold-emails (or even phone calls eventually). Also identifies 'levers' for a particular person that can be used to influence a buying decision.
May even pre-qualify leads for you and tell you when not to waste your time :)
I mean, a good sales person already does a lot of this, but it's time-consuming. Imagine if you could automate this process.
It's not an idea. You are already 5 years too late.
You don't need to scour online personas. Companies such as Acxiom and FullContact are just two of many which house identities of 10s of millions of individuals.
Then all that's needed is to plug into Twitter's firehose and Facebook's stream, and voilà: real-time data based on what they are thinking/doing online.
Oh, and connect to an RTB stream and get, in real time, where they are clicking. With enough data, you could forecast where they are going next.
Finally, have AI compose the content in the sales funnel to secure that conversion.
I can guarantee you, this is actively being worked on as I type.
Within 5 years, the entire marketing aspect will be mostly automated.
Cambridge Analytica already did this for the past U.S. election. It had an app that told canvassers which doors were worth knocking on and what sort of dialogue to use depending on things like the person's estimated neuroticism and other traits. It'll only get more sophisticated.
Still reading the paper and forming an opinion. But my initial thoughts are what exactly is new here that couldn't be done through some other means? I'm sure there will be interesting implications, but right now nothing seems particularly novel.
But is there really anything to report? What's new that isn't doable with low-level automation? I've been able to drive a browser with Python for a while.
> Human-like denial-of-service. Imitating human-like behavior (e.g. through human-speed click patterns and website navigation)
It's already easy to simulate activity; clicking a random link on a page would be basically as good. Are there even DoS defenses that care whether the clicks look human?
> Prioritising targets for cyber attacks using machine learning. Large datasets are used to identify victims more efficiently, e.g. by estimating personal wealth and willingness to pay based on online behavior.
So, like, sum and sort all the spending data?
But then again, maybe policymakers didn't already know what was possible?
I found this to be a very fascinating read. I have heard of the use of ML to detect, for instance, spam or phishing emails, but I've never heard of attackers using ML models to generate phishing emails. How do you differentiate such a message from any other phishing attempt?
Thinking out loud, in the US, we have seen breaches of OPM, travel, healthcare and insurance companies where seemingly the only motive was to exfil data. Many of these attempts are attributed to state sponsored APT groups. Now that someone has all this data, the next potential move seems to be to train models over this data to understand habits and patterns, frequent locations and friends, and predict social and political leanings...
I only have limited knowledge of this subject, but all this sounds plausible, right?
It's nice to see a discussion of AI risk that addresses concrete scenarios. A lot of the forecasted doom (in other reports) resorts to handwavy arguments, but rarely goes into specifics. The examples they've given[0] seem plausible enough to me (except the persuasive ads one).
[0]: Persuasive ads, vulnerability discovery and exploitation, hacking robots (this one is only tangentially AI related), and AI-augmented surveillance.
I am wondering: does anyone have a survey or list of AI exploits or malicious actions carried out against production services or systems? For example, a deliberately misclassified image targeting an image recognition system (such as Clarifai)? I have only seen papers with theoretical attacks so far.
It's out of the bag now - we just have to hope the blue team can defend against regimes where the best maths talent is more likely to end up building military apps than doggy photo filters.
When AI becomes practically indistinguishable from a human, it will get really interesting to find ways and means of stopping it from being used for cons.
I think there's at least some consensus around the idea that Strong AI doesn't necessarily imply consciousness, and without consciousness, it isn't slavery/immoral. Of course, consciousness isn't well-defined or well-understood, which is why we're not really sure about whether that's true, or how to make sure a Strong AI doesn't have consciousness.
According to most humans no, because while it may possess a mind, it does not have a soul and hence can be used and exploited. This is the argument we have given ourselves over millennia to use highly intelligent animals and of course human slavery was based on the myth that Africans were sub-human and slavery was good for them. Why do you think this time will be any different?
In the Roman empire they didn't even bother making a myth that their slaves were sub-human. It was simply a matter of might makes right: we conquered your nation so now we own you.
I think perhaps you're confusing Roman history with some other period of human history, perhaps on a different continent. In the Roman empire there were slaves who taught philosophy, slaves who managed large estates, etcetera, and people could both sell themselves into slavery and be freed from slavery. In fact, selling oneself into slavery was a popular route to becoming a Roman citizen.
If you don't want to read a history book, for which I wouldn't blame you, you might nevertheless enjoy Robert Harris's Cicero trilogy, which gives a fairly accurate impression of Roman society (or so claim many reviewers more competent than me). It's a truly amazing period of human history, when life was so modern in some ways, and yet so different from today in other ways, and the world was small enough for an individual to change the course of history.
I have read some history books. What part do you think I'm confused about? There were slaves in the Roman empire. They weren't considered sub-human and some held positions of responsibility, but they were property. Many slaves came from military conquests.
Because it's 2018 and everyone is looking so cool and we've got our history lessons and we fight inequality and all this stuff? Or, do you mean, this is just another circle of the spiral of human biases?
So a puppy bred to enjoy beatings would be OK to beat? Idk, the ethics seem a bit muddy here if you start from the mindset of a sentient being, rather than a 'robot that feels'.
I don't think these issues have anything to do with AI but 100% with humans.
AI is used everywhere where money is to be made. Let's take ads.
Why do we allow ads to happen all over the place (TV, radio, internet, magazines)? Because they feel stupid, harmless, how else would we sell products, etc.? Well, when an ML system is serving you ads, it's doing so at exactly the right time and targeting you with exactly the right content so that you buy what it needs you to buy. For example, modern ML algos can pick up which mood and mental state you are in (out of 27 different states) by looking at how you read content online, what you read, how many tabs you open, and how related the pages you visit are <yes, all cookie(-pool) based>. I know this because I build these things. It works incredibly well. It finds your weak spot, your passion, your habits, your indulgences, and makes sure you're always tempted. It can tell more about you in a split second than you can figure out about yourself in an hour. The bullshit you tell yourself about who you are doesn't matter, because you are the rat the ML algo keeps baiting until you bite.
The problem is not that the AI sells the diabetic more cupcakes, the alcoholic more booze, and the hipster new gadgets like no other salesman could. The idea of advertising is itself perverted, from skin to bone marrow. How can you allow that to be done to people? And we think this is just how things are? OMFG... AI is just a tool.
WMDs are another dead-stupid idea only humans could come up with. My country builds insanely big weapons that can whack entire cities in a split second just because it can & has more money & we were first & we won the war & no reason at all. It also tries to make sure nobody else can build them (like Iran) so they all kneel and kiss the ring.
AI kind of levels the field. It's just a bigger nuclear bomb. Mine-is-bigger-than-yours taken to the next level. And that's just the beginning.
AI opens the game to more players than we'd like. And it's not just AI (which is just math run by computers); it's mathematics, human genome research, physics, nuclear physics... all STEM subjects. You can come up with a weapon from any mix of these.
There are only two ways out of it.
Either humanity dies, or we learn how to all work together, chill our entitlement, and find our common values regardless of religion, money, skin colour, gender, or part of the world. A fart made in any part of the world can kill people everywhere.
If humanity turns against itself because countries or corporations play a reckless game in AI / pharma / nuclear / fill_in_the_blank... this only accelerates the process a bit, but the result will be the same.
Let's ditch advertising, pharmaceuticals, OTC stuff, bogus pills created without any fundamentals, weapons industries, and all other nonsense industries or acts focused on massive profit at the cost of humanity. Let's ditch all kinds of waste and put people first for once. Let's ditch the Olympics, where we try to prove that my country's is bigger than yours.
Let's invest in education and health, inclusive societies where everyone works hard to solve humanity's tough problems.
We keep building bigger guns, but as a society we haven't grown much in empathy, inclusiveness, or respect for the planet or other countries.
People are killing people, not AI. A machine gun is the safest thing in the world: no machine gun in this universe ever loaded itself and started firing precisely at people or animals.
Probably the best thing that can happen to this planet is to have humans vanish while there are still trees and animals around.
It has at least some to do with the AI itself. Of course humans are the ultimate origin of AI, but when your system is run by algorithms created by an AI, you aren't really making the decisions anymore.
I believe much of this can be dealt with if we start RIGHT NOW to address a serious issue in our systems - the lack of a way to represent Morals and Ethics (the When and the Why) in the systems we are building. This needs to provide important input to, and thus shape, the DIKW(D) pyramid.
I've been doing some work with the IEEE on this - and I'm looking here on ycombinator to get some real-world feedback on what people are thinking and concerned about.
I have some (personal) ideas that might work to address the concerns I'm seeing.
{NOTE: Some of this is taken from a previous post I wrote (but I kinda missed the thread developing; I was late and I don't think anyone read it). It is useful for this thread, so a modification of that post follows.}
First, I think you need a way to define 'ethics' and 'morals', with an 'ontology' and an 'epistemology', to derive a metaphysic for the system (and for my $.02, aesthetics arises from here). Until we can have a bit of rigor surrounding this, it's a challenge to discuss ethics in the context of an AI, and AI in the context of the metaphysical stance it takes towards the world.
This is vital, as we need to define what 'malicious use' IS. This is still an area (as the thread demonstrates) of serious contention.
Take sesame credit (a great primer, and even if you know all about it, it is still great to watch: https://www.youtube.com/watch?v=lHcTKWiZ8sI ). Now here is a question for you:
Is it wrong for the government to create a system that uses social pressure, rather than retributive justice or the reactive use of force, to promote social order and a 'better way of life'?
Now, I'm not arguing for this (nor against it, for the purposes of this missive), but using it as a way to illustrate that different cultures, governmental systems, and societies may have vastly different perspectives on a person's relationship vis-à-vis the state when it comes to something like privacy. I would suggest that transparency in these decisions is a good idea. But right now we have no way to do that.
I think the current way the industry is working - seemingly hell-bent on developing better, faster, more efficient (et al.) ways to engineer epistemic engines and ontologic frameworks in isolation - is the root cause of the problem of malicious use.
Even the analysis of potential threats (from the referenced article, 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation' - I just skimmed it so I can keep up with this thread; please enlighten me if I'm missing something important) only pays lip service to this idea. In the Executive Summary, it says:
'Promoting a culture of responsibility. AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations.'
However, in the 'Areas for Further Research' section, I would point out that the questions are at a higher level of abstraction than the other areas, or discuss the narrative and not the problem. This might be due to the authors not having exposure to this area of research and development (such as the IEEE's work) - but I will concede that the note about the narrative shows that very few are aware of the work we are doing...
This isn't pie-in-the-sky stuff, it has real-world use in areas other than life or death scenarios. To quickly illustrate - let's take race or gender bias (for example the Google '3 white kids' vs. '3 black kids' issue a while back in 2016). I think this is a metaphysical problem (application of Wisdom to determine correct action) that we mistake for an epistemic issue (it came from 'bad' encoding). This is another spin on kypro's concern about the consequences of AI deployment to enable the construction of a panopticon. This is about WISDOM - making wise choices - not about coding a faster epistemic engine or ontologic classifier.
Next, after we get some rigor surrounding the ethical stances you consider 'good' vs. 'bad' (a vital piece that just isn't being discussed or defined) in the context of a metaphysic, you have to consider 'who' is using the system unethically. If it is the AI itself, then we have a different but important issue - I'm going with 'you can use the AI to do wrong' as opposed to 'the AI is doing wrong' (for whatever reason: its own motivations, or perhaps it agrees with the evil or immoral user's goals and acts in concert).
Unless you have clarity here, it becomes extremely easy to befuddle, confuse, or muddle (innocently or not) the questions regarding 'who':
- Who can answer for the 'Scope' or strategic context (CEO, Board of Directors, General Staff, Politburo, etc.)
- Who in 'Organizational Concepts' or 'Planning' (Division Management, Program Management, Field commanders, etc)
- Who in 'Architecture' or 'Schematic' (Project Management, Solution Architecture, Company commanders, etc)
- Who in 'Engineering' or 'Blueprints' (Team Leaders, Chief Engineers, NCO's, etc.)
- Who in 'Tools' or 'Config' (Individual contributors, Programmers, Soldiers, etc.)
that constructed the AI.
Then you need to ask which person, group, or combination (none dare call it conspiracy!) of these actors used the system in an unethical manner? Might 'enabled for use' be culpable as well - and is that a programmer, or an executive, or both?
What I'm getting at here, is that there is both a lack of rigor in such questions (in general in this entire area), a challenge in defining ethical stances in context (which I argue requires a metaphysic), and a lack of clarity in understanding how such systems come to creation ('who' is only one interrogative that needs to be answered, after all).
I would say that until and unless we have some sort of structure and standard to answer these questions, it might be beside the point to even ask...
And not being able to ask leads us to some uncomfortable near-term consequences. If someone does use such a system unethically - can our system of retributive justice determine the particulars of:
- where the crimes were committed (jurisdiction)
- what went wrong
- who to hold accountable
- how it was accomplished (in a manner hopefully understandable by lawmakers, government/corporate/organizational leadership, other implementers, and one would think - the victims)
- why it could be used this way
- when could it happen again
just for starters.
The sum total of ignorance surrounding such a question points to a serious problem in how society overall - and then down to the individuals creating and using such tech - is dealing (or rather, not dealing) with this vital issue.
We need to start talking along these lines in order to stake out the playing field for everyone NOW, so we actually might have time to address these things, before the alien pops right up and runs across the kitchen table.
You need 100-10,000 good ones to get any decent results for now. Maybe once they work GANs into the mix... Well, I guess we'll see with the next gen of NVIDIA cards...
Makes me wonder if snapping a photo of a person without their consent, then immediately going home and making deepfake porn of them, could constitute digital rape.
It can be charged under slander laws IIRC, but without physical assault, not rape. Many harassment laws may also cover this. But if the picture was taken in public, you may not have many rights.
Using 'rape' to mean any unwanted sexual communication is a terrible thing, and seems to be getting more common.
And then there is obscene "research" like this: https://arxiv.org/pdf/1611.04135v1.pdf "Automated Inference on Criminality using Face Images." How does stuff like this get past IRB?
I wonder whether we don't actually need AI. It's not like we have amazing ideas on how to save the planet and realistically reverse global warming. Sure, there _are_ ideas, but the costs seem comically high.
Maybe AI could help us do that, and make cold fusion work while it's at it ;-) [I'm only half joking, actually]