
I believe much of this can be dealt with if we start RIGHT NOW to address a serious gap in the systems we are building - the lack of a way to represent Morals and Ethics (the When and the Why). This needs to provide important input to, and thus shape, the DIKW(D) pyramid.
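
To make that concrete - and this is only my own toy sketch, with every name in it hypothetical - here is roughly what I mean by giving the Morals/Ethics layer an explicit seat at each transition of the pyramid, instead of bolting a review on at the end:

    from dataclasses import dataclass, field

    @dataclass
    class EthicsContext:
        """The 'When' and the 'Why': declared values that shape each transition."""
        purpose: str                    # why this processing is happening at all
        permitted_uses: list = field(default_factory=list)
        prohibited_uses: list = field(default_factory=list)

    def data_to_information(data, ctx):
        # The ethics context constrains *which* signals we may extract,
        # not just how efficiently we extract them.
        return [d for d in data if d.get("use") in ctx.permitted_uses]

    def information_to_knowledge(info, ctx):
        # Aggregation is also value-laden: what we count shapes what we 'know'.
        return {"purpose": ctx.purpose, "records": len(info)}

    def knowledge_to_wisdom(knowledge, ctx):
        # 'Wisdom' here is just: should we act on this knowledge, given the declared values?
        if knowledge["records"] == 0:
            return "abstain"
        return "act, but only for: " + ctx.purpose

Crude, obviously - the point is only that the When and the Why become explicit inputs rather than afterthoughts.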

I've been doing some work with the IEEE on this - and I'm looking here on ycombinator to get some real-world feedback on what people are thinking and concerned about.

I have some (personal) ideas that might work to address the concerns I'm seeing.

{NOTE: Some of this is taken from a previous post I wrote (but I kinda missed the thread developing - I was late, and I don't think anyone read it). It is useful for this thread, so a modification of that post follows.}

First, I think you need a way to define 'ethics' and 'morals', with an 'ontology' and an 'epistemology', to derive a metaphysic for the system (and for my $.02, aesthetics arises from here). Until we can have a bit of rigor surrounding this, it's a challenge to discuss ethics in the context of an AI, and AI in the context of the metaphysical stance it takes towards the world.
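
As a toy illustration of the kind of rigor I mean (placeholder terms only, not a proposal), imagine the system's ontology - what kinds of things it takes to exist - and its ethical stance over those things being written down somewhere explicit and inspectable:

    ONTOLOGY = {
        "Person": {"is_a": "MoralPatient"},
        "Government": {"is_a": "Institution"},
        "SurveillanceRecord": {"is_a": "DataAboutPerson"},
    }

    ETHICAL_STANCE = {
        # duty-style rules: prohibitions and obligations over ontology classes
        "prohibited": [("share", "SurveillanceRecord", "without_consent")],
        "obligatory": [("disclose", "DataAboutPerson", "on_request_by_subject")],
    }

    def is_permitted(action, obj_class, condition):
        # The epistemic question ('is this prohibited?') only has a crisp answer
        # once the ontology and the stance are written down where we can argue about them.
        return (action, obj_class, condition) not in ETHICAL_STANCE["prohibited"]

The data structure isn't the point; the point is that until something like this exists, 'malicious use' has nothing to refer to.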

This is vital, as we need to define what 'malicious use' IS. This is still an area (as the thread demonstrates) of serious contention.

Take Sesame Credit (here's a great primer - even if you know all about it, it is still great to watch: https://www.youtube.com/watch?v=lHcTKWiZ8sI ). Now here is a question for you:

Is it wrong for the government to create a system that uses social pressure, rather than retributive justice or the reactive use of force, to promote social order and a 'better way of life'?

Now, I'm not arguing for this (nor against it, for the purposes of this missive), but using it as a way to illustrate that different cultures, governmental systems, and societies may have vastly different perspectives on the idea of a person's relationship vis-a-vis the state when it comes to something like privacy. I would suggest that transparency in these decisions is a good idea. But right now we have no way to do that.

I think the way the industry is currently working - seemingly hell-bent on developing better, faster, more efficient ways to engineer Epistemic engines and Ontologic frameworks in isolation - is the root cause of the problem of malicious use.

Even the analysis of potential threats (from the referenced article, 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation' - I just skimmed it so I can keep up with this thread, please enlighten me if I'm missing something important) only pays lip service to this idea. In the Executive Summary, it says:

'Promoting a culture of responsibility. AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations.'

However, in the 'Areas for Further Research' section, the questions sit at a higher level of abstraction than those in the other areas, or discuss the narrative rather than the problem. This might be due to the authors not having exposure to this area of research and development (such as the IEEE work) - though I will concede that the focus on the narrative shows how few people are aware of the work we are doing...

This isn't pie-in-the-sky stuff; it has real-world use in areas other than life-or-death scenarios. To quickly illustrate, let's take race or gender bias (for example, the Google '3 white kids' vs. '3 black kids' issue back in 2016). I think this is a metaphysical problem (the application of Wisdom to determine correct action) that we mistake for an epistemic issue ('it came from bad encoding'). This is another spin on kypro's concern about AI deployment enabling the construction of a panopticon. This is about WISDOM - making wise choices - not about coding a faster epistemic engine or ontologic classifier.
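
Here's a rough sketch of the distinction I'm drawing (hypothetical functions, nobody's real pipeline): the classifier is the epistemic engine, and the decision about whether and how to act on its output is a separate layer - the layer where something like Wisdom has to live:

    def classify_image(image_bytes):
        # Stand-in for the epistemic engine: returns (label, confidence) pairs.
        # Its failures are epistemic - fixed with better data and models.
        return [("person", 0.97), ("outdoor", 0.83)]

    def decide_presentation(labels, query):
        # Stand-in for the 'wisdom' layer: given what the engine believes and the
        # context of use, what is the right thing to surface? Pretending this
        # judgement lives inside the classifier lets us mislabel moral failures as bugs.
        sensitive = {"race", "gender", "religion"}
        if any(term in query.lower() for term in sensitive):
            return [label for label, conf in labels if conf > 0.9]
        return [label for label, conf in labels]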

Next, after we get some rigor surrounding the ethical stances you consider 'good' vs. 'bad' (a vital piece that just isn't being discussed or defined) in the context of a metaphysic, you have to consider 'who' is using the system unethically. If it is the AI itself, then we have a different, but important, issue - I'm going with 'you can use the AI to do wrong' as opposed to 'the AI is doing wrong' (for whatever reason: its own motivations, or perhaps it agrees with the evil or immoral user's goals and acts in concert).

Unless you have clarity here, it becomes extremely easy to befuddle, confuse, or mislead (innocently or not) on questions regarding 'who':

- Who can answer for the 'Scope' or strategic context (CEO, Board of Directors, General Staff, Politburo, etc.)

- Who in 'Organizational Concepts' or 'Planning' (Division Management, Program Management, Field Commanders, etc.)

- Who in 'Architecture' or 'Schematic' (Project Management, Solution Architecture, Company Commanders, etc.)

- Who in 'Engineering' or 'Blueprints' (Team Leaders, Chief Engineers, NCOs, etc.)

- Who in 'Tools' or 'Config' (Individual Contributors, Programmers, Soldiers, etc.)

that constructed the AI.

Then you need to ask which person, group, or combination (none dare call it conspiracy!) of these actors used the system in an unethical manner. Might 'enabling it for use' be culpable as well - and is that a programmer, an executive, or both?
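
To sketch what 'being able to answer for who' might look like in practice (a hypothetical structure, not any existing standard), imagine a provenance record attached to the deployed system that names an accountable party at each of the layers above:

    from dataclasses import dataclass

    @dataclass
    class AccountabilityRecord:
        scope: str               # strategic context: CEO, board, general staff, ...
        planning: str            # organizational concepts: division/program management
        architecture: str        # schematic: project management, solution architecture
        engineering: str         # blueprints: team leaders, chief engineers, NCOs
        tools: str               # config: individual contributors, programmers
        enabled_for_use_by: str  # who switched it on, and for what purpose

    record = AccountabilityRecord(
        scope="Board of Directors",
        planning="Program Management Office",
        architecture="Solution Architecture Group",
        engineering="Perception Team Lead",
        tools="Individual contributors (see commit history)",
        enabled_for_use_by="VP of Operations",
    )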

What I'm getting at here is that there is both a lack of rigor in such questions (in general, in this entire area), a challenge in defining ethical stances in context (which I argue requires a metaphysic), and a lack of clarity in understanding how such systems come to creation ('who' is only one interrogative that needs to be answered, after all).

I would say that until and unless we have some sort of structure and standard to answer these questions, it might be beside the point to even ask...

And not being able to ask leads us to some uncomfortable near-term consequences. If someone does use such a system unethically, can our system of retributive justice determine the particulars of:

- where the crimes were committed (jurisdiction)

- what went wrong

- who to hold accountable

- how it was accomplished (in a manner hopefully understandable by lawmakers, government/corporate/organizational leadership, other implementers, and - one would think - the victims)

- why it could be used this way

- when it could happen again

just for starters? A rough sketch of such an 'incident record' follows below.
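
Again, hypothetical field names only - but if the answers to these interrogatives had to be captured in a standard structure, cross-referenced to the accountability record sketched earlier, our institutions would at least have something concrete to work from:

    from dataclasses import dataclass

    @dataclass
    class IncidentRecord:
        where: str   # jurisdiction(s) in which the harm occurred
        what: str    # what went wrong, in terms a non-specialist can follow
        who: str     # accountable parties, cross-referenced to the accountability record
        how: str     # the mechanism of misuse, explainable to lawmakers and victims
        why: str     # why the system admitted this use at all
        when: str    # the conditions under which it could happen again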

The sum total of ignorance surrounding such a question points to a serious problem in how society overall - and then down to the individuals creating and using such tech - is dealing (or rather, not dealing) with this vital issue.

We need to start talking along these lines in order to stake out the playing field for everyone NOW, so we actually might have time to address these things, before the alien pops right up and runs across the kitchen table.


I assume you mean AI in the sense of powerful machine learning systems - if we were considering a general AI, you might rephrase the question along the lines of 'What if an advanced AI behaves unethically?' (if you get my point there...)


I suppose this all depends on how you are framing/thinking about the question.

There is a certain amount of bias in being designed/architected/engineered/constructed by humans - the artifact is 'in our image', as it were.

Are you talking about feeding a machine learning system data/information - created by humans - to be turned into information/knowledge by the system? That has our fingers all in it as well.

Cognitive bias? Well, are we talking about what is an 'acceptable' result to a human, or about the underlying process? Take Google's machine vision system that sees the world as dogs (DeepDream) - the cognition is certainly different, and the results are of value to humans as an illustration/visualization of that different approach, as well as having a certain aesthetic, I suppose. The bias is in the entire point of the question and in the specialization of the system to provide an answer. But as to the details of the analysis and results? I don't think that has bias in the sense you may be thinking of.

Can you be more specific about what is meant by bias?

