Hacker News
Ask HN: What if someone builds and uses advanced AI systems unethically?
2 points by ratsimihah on Feb 10, 2018 | 9 comments
The fact that a technology can be dangerous doesn't mean it shouldn't be built; I think that's the argument a respected authority once made about nuclear technology. So what if, in the near future, someone manages to build an AI intelligent enough to pose a real threat to society?

I'm not talking Skynet yet, but a more realistic scenario in which a company or individual could, for example, use machine learning to break into most computer, financial, or power-grid systems.

I'm aware that influential companies and individuals have invested in the ethics of machine learning, but how would that help mitigate this kind of unethical use of the technology?

My understanding is that unethical use of nuclear technology between countries is discouraged by deterrence: retaliation ensures that no side ends up winning. But as far as I can tell, that doesn't apply to machine learning, does it?



I suppose the same laws that are in place for human intelligence would either apply or be adapted to apply to these situations. For example, there are people who can already use computers to do the things you describe, and there are consequences if they get caught. If someone programs an AI that subsequently commits these crimes, it's reasonable that the programmer could be held accountable. The gray area comes when someone programs an AI for one purpose and it is later used (or chooses on its own) to pursue another illicit purpose.


I don't think your scenario is realistic.

Are there examples of machine learning used to break into a system? I've heard of other machine-supported techniques which seem more effective, like fuzz testing. But not deep learning.

Wouldn't it be more useful to talk about more realistic examples? Like how deep learning methods end up encoding race or gender bias (eg, https://motherboard.vice.com/en_us/article/nz7798/weve-alrea... or http://randomwalker.info/publications/language-bias.pdf ) because they are trained on data created by humans with human prejudices?
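
To make the mechanism concrete, here's a rough sketch (not code from the paper; the word vectors below are made-up toy values, whereas the linked language-bias paper runs association tests over real word2vec/GloVe embeddings with a permutation test):

    import numpy as np

    # Toy 3-d "embeddings" (hypothetical values, for illustration only).
    vec = {
        "programmer": np.array([0.9, 0.1, 0.2]),
        "nurse":      np.array([0.1, 0.9, 0.3]),
        "he":         np.array([0.8, 0.2, 0.1]),
        "she":        np.array([0.2, 0.8, 0.2]),
    }

    def cos(a, b):
        # Cosine similarity between two vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word, attrs_a, attrs_b):
        # How much closer `word` sits to attribute set A than to set B.
        return (np.mean([cos(vec[word], vec[a]) for a in attrs_a])
                - np.mean([cos(vec[word], vec[b]) for b in attrs_b]))

    for w in ("programmer", "nurse"):
        print(w, round(association(w, ["he"], ["she"]), 3))
    # With vectors learned from biased text, "programmer" scores positive
    # (closer to "he") and "nurse" scores negative (closer to "she").

The model just reproduces whatever statistical associations were in its training text; nobody has to put the bias in deliberately.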

Or using deep learning to keep people "engaged" with a web site or app with in-app purchases, by making it more addictive? (eg, https://boingboing.net/2018/02/07/literal-sociopathy.html )


What about a (theoretical) general AI that could socially engineer passwords and banking info from victims online? Or pretend to be a person to call in a bomb threat?


The OP asked about a realistic scenario in the near future.

Your example of a general AI seems to fall outside those constraints.


"More realistic than Skynet" was the only qualification given, but point taken.


You are right that "more realistic" isn't the same as "realistic". I took it to mean the latter, as otherwise we're in movie-plot territory.

I regard "what if in the near future" as another qualification.


I think we're within 20 years of computers being able to hold a freeform conversation in text or speech and do what I described: chat someone up on Facebook or the equivalent, talk on the phone, and convincingly imitate a real person. For reference, Dr. Sbaitso was created in 1991 [1], and look how far we've come since then with Alexa, Google Assistant, Siri, and Bixby.

1. https://en.m.wikipedia.org/wiki/Dr._Sbaitso


Hmm, what you wrote was "socially engineer passwords and banking info from victims online".

According to https://en.m.wikipedia.org/wiki/Turing_test :

> In the 21st century, versions of these programs (now known as "chatterbots") continue to fool people. "CyberLover", a malware program, preys on Internet users by convincing them to "reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers".[33] The program has emerged as a "Valentine-risk" flirting with people "seeking relationships online in order to collect their personal data".[34]

It may be that a computer with the ability to hold a freeform conversation (an "advanced AI") is not necessary to do the social engineering you mentioned earlier.


I assume you mean AI in the sense of powerful machine learning systems; if we were considering a general AI, the question might be better rephrased along the lines of 'What if an advanced AI behaves unethically?' (if you get my point there...).

First, I think you need a way to define 'ethics' and 'morals', with an 'ontology' and an 'epistemology' from which to derive a metaphysic for the system (and, for my $.02, aesthetics arises from there). Until we have a bit of rigor surrounding this, it's a challenge to discuss ethics in the context of an AI, or AI in the context of the metaphysical stance it takes towards the world.

To quickly illustrate: other comments in this thread discuss encoding race or gender bias (for example). I think this is a metaphysical problem (application of Wisdom to determine correct action) that we mistake for an epistemic issue (it came from 'bad' encoding).

Next, once we have some rigor around which ethical stances you consider 'good' vs. 'bad' in the context of a metaphysic, you have to consider 'who' is using the system unethically.

Unless you have clarity here, it becomes extremely easy (innocently or not) to befuddle, confuse, or mislead on questions regarding 'who':

- Who can answer for the 'Scope' or strategic context (CEO, Board of Directors, General Staff, Politburo, etc.)

- Who in 'Organizational Concepts' or 'Planning' (Division Management, Program Management, Field commanders, etc)

- Who in 'Architecture' or 'Schematic' (Project Management, Solution Architecture, Company commanders, etc)

- Who in 'Engineering' or 'Blueprints' (Team Leaders, Chief Engineers, NCOs, etc.)

- Who in 'Tools' or 'Config' (Individual contributors, Programmers, Soldiers, etc.)

that constructed the AI.

Then you need to ask which person, group, or combination (none dare call it conspiracy!) of these actors used the system in an unethical manner. Might 'enabling it for use' be culpable as well, and is that a programmer, an executive, or both?

What I'm getting at is that there is a lack of rigor in such questions (in this entire area generally; it's not your fault), a challenge in defining ethical stances in context (which I argue requires a metaphysic), and a lack of clarity about how such systems come into being ('who' is only one interrogative that needs to be answered, after all).

I would say that until and unless we have some sort of structure and standard to answer these questions, it might be beside the point to even ask...

And not being able to ask leads to some uncomfortable near-term consequences. If someone does use such a system unethically, how can our system of retributive justice determine whom to hold accountable, what went wrong, how it was accomplished (in a manner hopefully understandable to lawmakers, government/corporate/organizational leadership, other implementers, and, one would think, the victims), why it could be used this way, and whether it could happen again - just for starters.

The sum total of ignorance surrounding such a question points to a serious problem in how society overall - and then down to the individuals creating and using such tech - is dealing (or rather, not dealing) with this vital issue. We need to start talking along these lines in order to stake out the playing field for everyone NOW, so we actually might have time to address these things, before the alien pops right up and runs across the kitchen table.



