Not just facts - the designers of the AI also choose the underlying assumptions and models. Even the very idea of using an AI implies a certain set of biases and intentions.
Characteristics that are truly common enough in humans to be safely extracted as a factor are rare. Most of the time we compromise so we can call our differences "close enough". The process of finding and/or creating those compromises is what we call "politics", and while computers can certainly help as a tool, the process needs human involvement by definition.
Attempts to turn over any kind of political or social decision-making to an algorithm are simply a way to disguise the concentration of power. The algorithm's designers ultimately end up with the power, while others are denied it.
The racist tactic known as "redlining"[1] is a pre-AI example. Black people aren't denied housing directly, they simply "don't qualify" for a loan, with the real reasons obscured behind a proprietary "credit worthiness" equation.
Instead of using AI as a decision-maker, a place where AI (and other technology) might actually be useful is as the facilitator and/or part of the "panel of experts" used in Delphi methods[2]. While people tend to jump on bandwagons and make poor decisions when unorganized, we have many examples of a general crowd making very good decisions when they are focused on a specific goal and have enough structure to allow for iterative refinement of ideas.
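To make the "iterative refinement until consensus" idea concrete, here is a minimal sketch of a Delphi-style loop. Everything in it is an illustrative assumption, not part of any real Delphi tool: panelists hold numeric estimates, each anonymized round they revise partway toward the group median, and the process stops once nobody is meaningfully changing their mind.

```python
import statistics

def delphi_round(estimates, weight=0.5):
    # One anonymized feedback round: each panelist revises their
    # estimate partway toward the group median (a stand-in for
    # "seeing the panel's summarized reasoning and reconsidering").
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

def run_delphi(estimates, tolerance=0.01, max_rounds=20):
    # Iterate rounds until everyone stops changing their mind
    # (largest revision smaller than tolerance) or a round cap is hit.
    for _ in range(max_rounds):
        revised = delphi_round(estimates)
        if max(abs(a - b) for a, b in zip(revised, estimates)) < tolerance:
            return revised
        estimates = revised
    return estimates
```

The point of the sketch is that technology's role here is purely organizational: collect, anonymize, summarize, and re-ask. The actual judgments stay with the human panelists.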
By the way - I should mention that the last part, about using modern technology as organizational structure rather than as decision-maker in some sort of Delphi method, is originally from a 2014 interview[1] with James Burke (Connections, The Day The Universe Changed).
A few of his comments relevant to the use of technology in politics and society:
We have these extraordinarily limiting constraints from a past in which we did not have
the tools to have anything other than extraordinarily limiting constraints. But, now we
do have the tools, and the tools are running away with us faster than the social
institutions can keep up.
...
I think countries ought to set up Departments of the Future. [...] We are on the edge of
having the technology to be able to say, let us run a constant, dynamic, updated review
of everything that science and technology is thinking about [...] then let us use the same
techniques to ask the public in general, not politicians, whether they like that idea,
whether they feel that they could live with that idea. And then, like a Delphi technique,
re-run it until everybody stops changing their mind.
...
Collate all [research laboratories and business R&D] together and process them using stuff
like big data to see what the pattern looks like becoming, and then layering on top of that
social media analytics to say, if this was coming, would you like it, and if not, why not?
In other words, to have a sort of 24 hour a day referendum
The other parts of the interview are very interesting as well:
... it’s no longer important to teach people to be chemists or physicists or anything ‘ists
because those jobs are gone, and if they’re not gone today they’re gone tomorrow. And unless
we know the old tools of critical thinking and logic and such, we will not be able to handle
what follows. So, we’re wasting our time training people to be things that will no longer
exist in 10, 15, 20 years time.
...
Every single value structure is meaningless [...] commercial society will be destroyed
at a stroke. The trouble is the transition period [...] how we get from here to there.
The vested interests, I mean, we’re going to have to shoot every one of them – nobody,
nobody is going to give way to this. [...] All cultural values relate to scarcity, ultimately.
[1] http://www.theatlantic.com/magazine/archive/2014/06/the-case...
[2] https://en.wikipedia.org/wiki/Delphi_method