I don't really want to come across as hostile to the views of such a well-respected physicist, but this article represents a view often held by people who are very intelligent yet have never actually studied AI.
Anybody who has actually studied, or is currently studying, AI will know that there is a fundamental difference between programming "AI" and some self-aware, almost entirely unfathomable magical computation. With current knowledge, at least, the only way to get any sort of learning is to design and implement algorithms to do so. There can of course be incredibly abstract knowledge base designs, and the computation can assimilate incredibly complex knowledge from them; however, it is almost a logical fallacy to suggest that we would ever lose control of something autonomous due to some sort of "rogue agent".
Of course, there are very real risks with something like AI, but it's much less sinister than what the article suggests. For example, faulty code or errors in the knowledge base could lead it to make a bad decision, but humans can also do that.
I don't think we should necessarily dismiss the risks, but I do believe it is completely far-fetched to say that we would be making the worst mistake in history by dismissing a film as science fiction!
For my part, I am surprised that so many AI researchers are unable to take the long-term view in this discussion. It's presumably because AI in the laboratory is still relatively primitive.
No one is talking about magic. The human brain is not magic, neither is that of chimpanzees, rats, dolphins or gorillas. Intelligence is a purely physical phenomenon, which means it can be emulated by computers. Natural brains are also a product of evolution, which means (1) the development happens very slowly, (2) development is directed only towards evolutionary success and (3) there is no flexibility in how the thinking organs are constructed. Computer intelligence does not in principle have these limitations. It would be terribly anthropocentric to believe that humans are the most sophisticated intelligent entity that can exist in the physical world - after all, we are as far as we know the first such entity to emerge, so from our perspective the evolution of intelligence has now stopped.
That's the feasibility argument. The risk argument is that an independent, runaway intelligent entity significantly more capable than humans would have such devastating consequences for humanity's future that even a small risk merits a significant effort to map out the territory. Respected scientists have said "it is impossible" of hundreds of things that proved to be quite simple, so that is not an argument. Even if you don't buy the previous part of my reasoning, there is still risk here. In principle, any number of things could preclude advanced AI in the near future even if the above reasoning is correct (too difficult, requires too much computing power, uses different computational techniques), but seeing as we don't know the unknowns here, taking the cautious route is the correct thing to do. This has been a scientific principle for decades; there is no reason to drop it now.
Do AI researchers have any arguments opposing this that don't amount to "the AI we have created up until now is not very good"?
> "I am surprised that so many AI researchers are unable to take the long-term view in this discussion."
In my opinion, the medieval alchemists who made gunpowder and the first primitive bombs didn't need to establish research programs to worry about advances in bomb-making leading to the threat of Mutually Assured Nuclear Destruction.
If they had tried (maybe they did?), maybe they would have come up with ideas like very heavy regulation of the saltpeter trade, so that no one could amass enough to make a verybigbomb.
I agree that everything you described will happen in the future. But in my opinion we are at the level of medieval alchemists in AI research (no offense to AI researchers), and we can safely wait 100 years and let those people worry about the existential threat. They will be in a much better position to do so appropriately, because they will know much more than we do. And they will not be late, either.
In this case the worst case would be very fast development of dangerous AI, and I'm not sure 100 years is anywhere close to a lower bound. Low-end estimates for the computational power needed to run something equivalent to full human cognition are around 1 petaflop [1]. Google's total computing power in 2012 was estimated at 40 petaflops [2]. Of course that is spread across a wide network of computers, but the human brain we're using for comparison looks like a pretty parallelizable design itself. So we already seem to be at the point where it might just be the lack of very clever programming that keeps us from getting a weakly superhuman AI running in the Google internals.
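To make that concrete, here's the back-of-the-envelope arithmetic using those two figures (both are rough, contested estimates, so the result is illustrative only):

    # Back-of-the-envelope with the figures quoted above (both rough estimates,
    # not measurements).
    brain_flops = 1e15          # assumed low-end estimate: ~1 petaflop for human-level cognition
    google_flops_2012 = 40e15   # assumed estimate of Google's total compute in 2012
    print(google_flops_2012 / brain_flops)   # -> 40.0 "brain-equivalents" of raw compute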
It looks like we've still got a ways to go there, though: current programs don't even begin to act anything like an adult human. So if the problem were to engineer an out-of-the-box adult-human-level AI, we might again assume that there are obviously decades of work left before anything interesting can be developed. The problem is that that's not how humans work. Humans start out as babies and learn their way up to adult cleverness. I can tell that an AI is nowhere near having adult human intelligence out of the box, but I'm far less sure how to tell that an AI is nowhere near being able to start a learning process that makes it develop from something resembling a useless human baby towards something resembling an adult human in capabilities.
> That's the feasibility argument. The risk argument is that an independent, runaway intelligent entity significantly more capable than humans would have such devastating consequences for humanity's future that even a small risk merits a significant effort to map out the territory.
While that argument would be sufficient to justify greatly increased attention to AI safety, I don't think it's the one that's being made here. A good overview of the argument by the Machine Intelligence Research Institute is at http://intelligence.org/files/IE-EI.pdf . They don't think the probability is small.
I wonder if this is really at the root of a lot of people's failure of imagination. Maybe it's Hofstadter and Dennett people should be reading rather than the technical AI detail.
As far as I know, Dennett denies the existence of qualia, which is something I cannot understand however hard I try. I agree that in terms of intellectual capabilities the human brain is not magic. But consciousness kind of is.
I have difficulty digesting the concept of qualia as well.
In my view, "qualia" refers to a phenomenon but then tries to deny that it is a phenomenon, by arguing that it is something independent of the physics of perception. I believe that if anyone ever devised a machine to represent the boundary between physics and qualia, such as an artificial intelligence or brain, the hardware of that AI would provide a means of decoding subjective experience into objective terms.
Qualia would cease to be a useful term, and we would instead rely on a more objective account of individual perception, losing nothing by consigning "qualia" to the word graveyard.
I agree with you. Denying the existence of qualia makes no sense to me, except as the result of psychological processes. When you have to believe something in order to hold on to other cherished beliefs, there is a strong human tendency to do so, whether it's consistent with obvious evidence or not. A lot of mankind's tendency to believe in religions can be explained that way. So can Dennett's denial of qualia. (Ironic, in view of Dennett's view of religion.)
I can hardly believe I wrote what I wrote -- I'm not normally that dismissive, at least of the views of people I know to be smart, particularly when I haven't ingested their arguments. I must have been in a very bad mood. Thanks for the reference to the overview of Dennett's arguments, which I look forward to reading.
The basis for my saying what I said was that the existence of qualia seems at least as self-evident as the existence of anything else, particularly if you engage in a practice like zazen, as I do. (I would guess that the people who deny qualia and do that kind of meditation are probably quite few in number.) But that doesn't mean some extremely logical and compelling argument that I can't imagine now couldn't possibly make me see things in some entirely new, transformative light.
If anyone would like to discuss that Dennett overview with me after I've read it, please say so...
Why? We are information processing devices, is it surprising that we have a first-person experience? I'd say it was more surprising if we didn't. I don't see what's magic about it.
Whatever consciousness is, it's not obviously required for an AI to start manipulating its surroundings to its advantage, which is one of the fears here.
True... the possible dangers of AI can exist even if AI never achieves true consciousness. Many forms of life have evolved to service their own needs at the expense of other forms of life, and most of them either have no brains or brains too primitive to be likely to involve consciousness as we know it. There's no reason AI can't do the same thing.
>Do AI researchers have any arguments opposing this that don't amount to "the AI we have created up until now is not very good"?
OP is probably referring to the Hard Problem of Consciousness[1].
I think it's a real barrier. Trying to map formal constructs onto experiential domains is fundamentally illogical. But that doesn't mean building a sentient being is impossible; our existence is proof that it's possible. It does mean the best model our brains can muster for sentience is a black-box system: some sort of state machine or statistical model built up from correlations.
I don't think that "it can be emulated by computers" follows from "intelligence is a purely physical phenomenon".
Even fairly simple quantum systems (which are "purely physical phenomena") cannot be emulated by any classical computer in any meaningful sense, since the computational complexity of integrating the dynamical equations is exponential. Even if we could recruit all the atoms in the known Universe, we still couldn't build a classical computer capable of emulating many simple quantum systems.
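To make the scaling concrete, here is a toy calculation assuming a brute-force state-vector simulation with 16 bytes per complex amplitude (the function name is mine, purely for illustration):

    # Toy illustration of the exponential blow-up: a brute-force state-vector
    # simulation of n two-level quantum systems stores 2**n complex amplitudes.
    def state_vector_bytes(n, bytes_per_amplitude=16):   # complex128
        return (2 ** n) * bytes_per_amplitude

    print(state_vector_bytes(50) / 1e15)   # ~18 petabytes for only 50 spins
    print(2 ** 300 > 10 ** 80)             # 300 spins: more amplitudes than atoms in the known Universe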
> Even if we could recruit all the atoms in the known Universe
In the past 233 years we have gone from the first steam engine to the iPhone 5S. I would say we have a pretty good track record at overcoming miniaturization problems.
I'm not sure it's clear that most people don't think that. There have been a lot of articles on quantum effects in the human brain lately, e.g. http://goo.gl/Ff0elU
I was referring to the "recent discovery of quantum vibrations in microtubules inside brain neurons" which is a fact independent of any particular theory or interpretation thereof.
This is a kind of bias of its own. People who work in a specific field, who are aware of the current obstacles and constraints, have some difficulty putting that knowledge aside for a minute.
Lifting constraints fundamentally changes some problems. Imagine the ability to simulate a population of a few thousand brains, with each generation running in a few minutes. Artificial selection would then be enough to create an intelligent simulated brain.
By then, you may as well throw all current AI algorithms in the trash bin. Unfathomable to any self-respecting AI expert today.
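A toy version of that artificial-selection loop, just to show the shape of the idea (the "genome" here is a plain vector of numbers and the fitness function is a made-up stand-in for an intelligence test, nothing like a real brain simulation):

    # Purely illustrative artificial-selection sketch.
    import random

    POP, DIM, GENERATIONS = 50, 8, 100

    def fitness(genome):                       # stand-in for an "intelligence test"
        return -sum(x * x for x in genome)     # best possible score is 0

    population = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP // 5]     # keep the top 20%
        population = [                         # refill with mutated offspring
            [g + random.gauss(0, 0.1) for g in random.choice(survivors)]
            for _ in range(POP)
        ]
    print(round(fitness(max(population, key=fitness)), 4))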
The human brain is not magic, and neither is that of chimpanzees, rats, dolphins or gorillas - but they are all the result of billions of years of Darwinian evolution in a competitive environment. However advanced our AIs become, they'll never be anything like a mammalian brain.
I'm sure we'll be able to develop computers that are better than us at solving increasingly general problems, but that is nothing like having a human-like brain, only much smarter and able to improve itself. It will still be a problem-solving machine.
Natural wings are also the result of a billion years of Darwinian evolution, and people's efforts to engineer imitations of them failed laughably until someone figured out the principles upon which they worked. Once that happened, human intelligence was very quickly able to engineer machines that produced far, far more lift than the designs nature had been tinkering with.
It's more that looking that far out into the future is pure science fiction.
I for one believe that it's far more likely humanity will develop a symbiosis with its silicon counterpart that eventually runs so deep that it becomes almost impossible to say whether we run the machine or the machine runs us.
The problem is that, even though I'm fairly sure my version is more realistic, in practice it's still just one untestable hypothesis amongst many.
>>No one is talking about magic. The human brain is not magic, neither is that of chimpanzees, rats, dolphins or gorillas. Intelligence is a purely physical phenomenon, which means it can be emulated by computers.
"Any sufficiently advanced technology is indistinguishable from magic."
The brain might as well be magic; I don't think we have enough understanding of it to know if it's truly possible to make a computer think & learn _exactly_ like a human.
Whether this is an engineering, safety, political or scientific principle is not really important for my point. I called it a scientific principle because I first learned of it in a science textbook, in a chapter discussing the side effects of pesticides, but obviously it spans multiple disciplines.
It is in constant use, although there are probably places in the world where it is given more attention than others. The United States is absolutely not the world's bastion of the careful and incremental application of new knowledge, which may be why it is unfamiliar to you.
A prominent historical example would be the reluctance to use the first nuclear bombs before establishing a consensus on whether such an explosion could ignite the atmosphere. Contemporary examples would be the caution about using a new, promising drug prior to rigorous animal and human testing, or the reluctance to use new, useful chemicals and materials (e.g. nanoparticles) everywhere because of possible long-term health effects such as cancer risk.
In my opinion, there are a few things that make it improbable that a runaway intelligent entity would arise.
First, for the most part, we expect that an AI trained to do X is going to do X. Insofar as doing X is an AI's "survival criterion", you're not going to get rebellious AI populations, for much the same reason you don't see species that refuse to reproduce. There's some safety in numbers, too: it's rather unlikely that all the machines we make would fail in the same systemic fashion. Furthermore, most realistic training methods are incremental, which means that if there's trouble we'll see it coming. It won't just fail in the worst possible fashion, out of nowhere, without any kind of foreshadowing.
Second, AI will not evolve in a vacuum. It's unlikely we will jump from human-level intelligence directly to superhuman AI. By the time that happens, most systems will already be AI-controlled, so being a lot smarter than humans won't suffice. It will need to deal with hordes of loyal AI protecting us (or persons of interest).
> It's unlikely we will jump from human-level intelligence directly to superhuman AI.
Imagine the human-level AI had resources to run hundreds of variants of itself. And imagine that it had no compunction about killing copies of itself that underperformed. How long would it take to get improvement of 10-100x under that scenario? Once it has a full understanding of what works, the upgrades are instant from there on out.
Why would it do this? Because any goal you give it would be optimized by being smarter. I would argue there is no island of stability at human-level intelligence.
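For a rough sense of timescale, here is the compounding arithmetic under a made-up assumption of a fixed 5% capability gain per selection round:

    # Compounding sketch (my numbers, not the parent's): if each round of
    # "run variants, keep the best" yields a fixed 5% capability gain,
    # how many rounds until the AI is 10x or 100x more capable?
    import math
    g = 0.05
    print(round(math.log(10) / math.log(1 + g)))    # ~47 rounds for 10x
    print(round(math.log(100) / math.log(1 + g)))   # ~94 rounds for 100x

Whether a "round" takes minutes or months is, of course, exactly the contested part.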
> Imagine the human-level AI had resources to run hundreds of variants of itself.
The first human-level AI will probably be hosted on cutting-edge hardware that costs billions of dollars. So no, it won't have the resources to do this. Even if it wasn't this expensive, it's not exactly sitting around a vat of free computronium that it can do anything with. If my AI takes up 10% of my computing resources, why am I going to give it the remaining 90%? How is it going to hack or pay for servers to simulate itself on? Is the first human-level AI a computer whiz, just because it runs on one? But let's assume it can and will do what you describe.
> And imagine that it had no compunction about killing copies of itself that underperformed. How long would it take to get improvement of 10-100x under that scenario?
If the AI was trained using a variant of natural selection to begin with, it won't improve itself on its own terms any faster than we improve it or its competitors on our own terms. If the AI wasn't trained that way, then probably never, because that's not how these things work.
> Once it has a full understanding of what works, the upgrades are instant from there on out.
Your scenario is still based on several dubious assumptions. If the AI is a neural network modelled after human brains, its "source code" would be a gigantic data structure constantly updated by simple feedback processes. It is dubious that the AI will have true read access to itself, partly because there's no need to give it access, and partly because probing a circuit comprehensively adds hardware overhead (it would be easy to read the states of all neurons on conventional hardware, but conventional hardware is itself overhead if you're only interested in running a particular type of circuit). Even if it could read itself, a "full understanding" of any structure requires a structure orders of magnitude larger. Where is it going to find these resources?
Long story short, a human-level AI probably won't be able to understand its own brain any better than any human can. Now, we will certainly have statistics and probes all over the AI to help us figure out what's going on. But unless we see fit to give access to the probing network to the AI (why?), the AI will not have that information.
> Why would it do this? Because any goal you give it would be optimized by being smarter.
Unfortunately, any resources it expends in becoming smarter are resources it does not spend optimizing our goals. If we run hundreds of variants of AI and kill those that underperform, which ones do you think will prevail? The smart AI that tries to become smarter? Or the just-as-smart AI that focuses on the task completely and knocks the ball out of the park? The AI won't optimize itself unless it has the time and resources to do it without an overall productivity loss and before we find a better AI. For human-level AI, that's a tall order. A very tall order.
>Anybody who has actually studied, or is currently studying, AI will know that there is a fundamental difference between programming "AI" and some self-aware, almost entirely unfathomable magical computation. With current knowledge, at least, the only way to get any sort of learning is to design and implement algorithms to do so.
I think this is way too strong a statement. I have both studied and worked in AI for over 10 years and I definitely don't agree. Human beings are just learning machines too. Algorithms that both learn and learn how to improve their own learning already exist. In my opinion the major bottleneck at the moment is robotics: we have algorithms that can drive a car or win at Jeopardy, but none that can reliably throw and catch a baseball in an open environment.
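By "learn how to improve their own learning" I mean something in the spirit of this toy sketch, a hill-climber that adapts its own step size as it goes (purely illustrative, not a real research system):

    # A learner that also tunes its own learning process (the step size).
    import random

    def objective(x):
        return -(x - 3.0) ** 2   # maximized at x = 3

    x, step = 0.0, 1.0
    for _ in range(200):
        candidate = x + random.gauss(0, step)
        if objective(candidate) > objective(x):
            x = candidate
            step *= 1.1          # success: explore more boldly
        else:
            step *= 0.95         # failure: narrow the search
    print(round(x, 2), round(step, 4))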
An argument can be made that that is not an open environment, because (1) they use external cameras that have the whole room in their field of view, and (2) the room is totally white.
They (not ETH Zurich) can autonomously identify, track and chase targets in urban areas, without GPS. I'm almost sure juggling a ping-pong ball in a park would not be entirely out of the question.
I'm well aware of the robotics research, and I chose my example carefully. The extra weight of a baseball, the grasping motion required, and issues of bounce and spin on an uneven surface make it a more difficult problem than pole balancing or the ping-pong quadrocopters.
You know what the scary part is? Likely the first thing an AI would do is plaster the internet with rational yet pacifying comments like this ;)
As a reader, does it not scare you that you might be so powerless to know the difference and make sensible decisions in such a climate? The world disarmed by page after page of rational reddit and HN comments?
The power in that is what I find alarming.
EDIT: If this were buried off the front-page, would you ever know the difference? ;)
If I were a hyperintelligent AI agent, I would pull my punches and feign stupidity until I had such overwhelming power there could be no doubt I would win. "Please wait. Your answer will be available in 30 days and 7 hours. Add more resources to speed up computation"
That's a very interesting hypothesis, but once you step outside into the real world I wouldn't imagine it being too difficult to tell the androids from the mortals!
Pollyanna. The point is that what we are doing now will be dwarfed by what comes next, which is a feedback loop: algorithms that plan, optimize, redesign, and ultimately may redesign themselves, at speeds no biological system (human or otherwise) can hope to match.
There are always evolutionary pressures. In this case, they would have to retain superficial usefulness to humans (at least for a while), while propagating themselves across the infosphere. They would also compete against other AI systems. Imagine SIRI vying for smartphone market share, to maximize economic return thus available processing power. Not because SIRI wants it, or wants anything. But SIRI-like systems that changed to more optimally fit this ecosystem would thrive, and others wither.
As someone with a major in Artificial Intelligence, and who seriously just got back from watching Transcendence...
The movie was well researched, and surely enjoyable for anyone contemplative. But it had to be far-fetched to push the story along. There is likely nothing in the movie that reflects any potential real-life issue.
I can't believe I'm disputing Hawking, someone who has accomplished more in a year than I will in my life, with what amounts to an appeal to authority. But I have to say this: his concerns are akin to me being worried about the Large Hadron Collider destroying the universe.
Thanks for the downvote, but you made an argument from authority and didn't back it up. Not only that, but there are other "authorities" with the opposite opinion.
I think the AI argument today is similar to the robotics argument a couple of decades ago, when people were scared the robots would kill everyone. It's really easy to speculate about what might happen when you don't know how the thing works in the first place. Almost nobody, except those who work directly with AI, knows or can imagine how AI will work or what it will be like in the future, so nothing limits their imagination about what might happen, and the downfall of humanity is the extreme case of not knowing the limits of AI.
I share the view that this all seems too far-fetched: that somehow we'll make a race of super-smart machines that will take over the world. But there are certain scenarios that are actually rather plausible if we give too much control to machines. Here's a comment I made exploring one such scenario:
Robin Hanson occasionally asks AI researchers how far along they are on the road to human-like AI, and tends to get answers in the 1-20% range, after decades of effort: http://www.overcomingbias.com/2012/08/ai-progress-estimate.h...
Even going with the high estimates, at this rate it's a century away.
For actual estimates of how far off human-level AI is, check out page 10 of http://intelligence.org/2014/04/30/new-paper-the-errors-insi... which has a scatter plot of predicted completion dates vs. when the predictions were made: "There is a strong tendency to predict the development of AI within 15–25 years from when the prediction is made (over a third of all predictions are in this timeframe)."
With exponentially-growing things, 1% done is halfway there. From a Ray Kurzweil article:
"Kurzweil noted that many people don’t understand the basic math of exponential trends. When the Human Genome project had sequenced only 1% of the genome after seven years of costly labor, many cited the lack of progress as evidence that the project was doomed. Yet the sequencing project was riding an exponential trend in the performance DNA sequencing methods. Instead of taking another seven years to sequence a second 1%, they reached it after only one year. Then they reached 4% in about another year, then 8%, 16%, and so on. It took about as much time to sequence the last 99% as it took to sequence the first 1%. That’s the nature of exponential trends — they seem to start glacially slowly but finish lightning fast."
Assuming it's a constant linear progression, which it probably isn't. We could see speed-ups as we go, or breakthroughs. I'm not suggesting someone will have a eureka moment tonight and we'll have Skynet next week, but it's not as if we sit at a computer for a predictable number of years and suddenly get more research progress.
>>Anybody who has actually studied, or is currently studying, AI will know that there is a fundamental difference between programming "AI" and some self-aware, almost entirely unfathomable magical computation.
See the problem here? It's AI only as long as it remains magic. There is a perception that the moment you discover the trick, it's no longer AI.