Technology predictions (samaltman.com)
182 points by dmnd on March 3, 2015 | 123 comments



This post seems to be a response to some criticism of Sam's opinions in other blog posts. The relevant line is this one:

> Superhuman machine intelligence is prima facie ridiculous.

> - Many otherwise smart people, 2015

I was among those criticizing Sam's point of view, but not because I think the idea of superhuman machine intelligence is ridiculous. I don't know one way or the other whether we'll achieve it, but I don't consider it impossible.

What I, and I believe many others, was criticizing was Sam's insistence that current techniques have the potential to cause such great harm that they need to be regulated, and that extreme measures need to be taken to address the possible dangers of modern AI and ML techniques.

This type of fear-driven reasoning is wildly out of proportion with the facts, as anyone who does research in the area, or is familiar with the research in the area, can attest. Modern machine learning techniques are heuristic methods for finding optima of functions over high-dimensional spaces. Neural networks and support vector machines produce impressive results on image classification and other well-defined problems, but they are far from general-purpose techniques that can be easily applied to any given problem.

So, the bullet points:

1. Current ML techniques aren't anywhere near creating general intelligence, nor are there areas of research that appear likely to yield superhuman levels of intelligence in the near to moderate-term future.

2. This doesn't mean superhuman levels of machine intelligence are impossible, just that we don't have any current methods that are likely to lead to them.

3. Taking action against an ill-defined, likely phantasmic threat is neither cost effective nor helpful for the progress of AI and ML.

4. When the people who have real, expert knowledge of something all tell you one thing, and the people who have something to gain from getting your attention by promoting sensational opinions and cherry-picked facts tell you something else, you should rationally assess their motives and weigh that when deciding whose views more closely approximate the truth.


Did anyone actually say that superhuman machine intelligence is prima facie ridiculous? I followed the discussion pretty closely, both here on Hacker News and in the quotes and discussion by prominent researchers, and I don't remember reading that. It's possible that one or two people somewhere in the discussion said the idea was completely ridiculous, but it didn't seem to be the common sentiment. The main dissenting ideas I saw said that it was too far away to be considered as a threat, and that most high-profile researchers weren't giving it serious consideration.

I don't doubt that "otherwise smart people" have said that superhuman machine intelligence is completely improbable, but I don't know about the "many" part. There's always someone who will say anything. The closest I've seen to a consensus is disputing the timeframe for AGI/SMI, not the possibility.

For that matter, I don't remember seeing a lot of people say "Computers will never play chess" or "Computers could never drive cars" or "Computers could never compete on Jeopardy". I can't recall one prominent researcher saying any of those things definitively, much less "many" people saying any of them.

It seems like there might be some inserting of quotations into the mouths of others going on here.


> I don't know about the "many" part

I'll go there, as has everyone I've talked to about the subject (probably half a dozen or so people). I have not heard a convincing proposal of a path towards a rogue autonomous AI, and I've looked reasonably hard. I'm not sure why otherwise smart people are for some reason just now getting worried about an idea first popularized by The Terminator in 1984.

> Superhuman machine intelligence is prima facie ridiculous

It's not just prima facie ridiculous, it's ridiculous after thinking about it for some time. We will have problems with enormously powerful machine-augmented individuals and machine-augmented totalitarian organizations (be they corporations or governments) long before we have problems with a completely autonomous out-of-control AI.


Yes, I think your assessment of the situation is correct.


> When the people who have real, expert knowledge of something all tell you one thing, and the people who have something to gain from getting your attention by promoting sensational opinions and cherry-picked facts tell you something else, you should rationally assess their motives and weigh that when deciding whose views more closely approximate the truth.

Those with deep expertise in machine intelligence (and related foundational concepts, such as philosophy of mind, linguistics, psychology, neuroscience, etc.) definitely do not "all tell you one thing" (hence the "ethics board" for DeepMind). I won't claim to be such an expert (though I have multiple degrees on related topics), but if you, e.g., review Nick Bostrom's C.V., and read his work, you'll find few people more qualified to comment. He's brought a very sober, clear-headed, and decidedly non-sensationalistic assessment to these issues, and devoted an entire book to the risk posed by "Superintelligence". When very smart, knowledgeable, thoughtful, and seemingly well-adjusted people are willing to put themselves "out there", it's worth paying attention.

Exponential change looks tiny until it's really not. If recursively self-improving A.I. is possible, it might only require one relatively short bit of code to get off the ground, and then it's basically game-over (depending on the A.I.'s objective function). Many people possess imaginations rich enough to see how this could come to be.

Further, claiming that something which has a clear and rational path to becoming dangerous doesn't "have the potential to cause such great harm", especially when lots of relevant trend lines are nearing vertical, is extremely foolish.

It is incredibly easy to unsympathetically criticize another viewpoint, especially when that viewpoint is outside of the mainstream. Those espousing such "sensational opinions" rarely win more friends than they lose, as your (and many others') comments would attest to.


> something which has a clear and rational path to becoming dangerous

But this is exactly the point, it DOESN'T have a clear and rational path. Go read Superintelligence again, or go read Global Catastrophic Risks or any of the other books like "Our Final Invention." All of it, across the board, is wild speculation about paperclip maximizers and out-of-control drones.

There is no path, no one has a path - not even AGI researchers, the people trying to build the thing for god's sake!!


... or, for that matter, "What computers still can't do" by Hubert Dreyfus.

> There is no path, no one has a path

This seems like a very difficult statement to support, a claim that is consequently far less rational than, say, "deep belief networks' ability to automatically extract meaning from real-world data will increase in scope to encompass broader and broader domains, eventually including natural language."

We have billions of examples of human-level intelligence walking around. Humans aren't magical, and our ability to create computer simulations of real world phenomena is steadily increasing.

Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high accuracy speech recognition, visual image content extraction. We have been continuously surprised, and these surprises are unlikely to stop. There's no reason to believe that human-level intelligence is particularly special or difficult to achieve -- those asserting otherwise have the higher burden of proof.


> This seems like a very difficult statement to support

Notice, I am not saying it is not plausible or realistic, I think it is. I also think that there is a fairly short time horizon (<100 years) based on the state of computing currently.

That doesn't mean we know what the path is, though. So will it come through scaled ANNs (artificial neural networks)? Maybe WBE (whole brain emulation)? Will it be an emergent property from all of the routers in the world exchanging state information? No one knows.

> Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high accuracy speech recognition, visual image content extraction.

How many times does the professional AI community have to repeat this?: Narrow AI projects do not necessarily have trajectories toward AGI. Yann LeCun JUST REITERATED THIS again last week. Seriously, how many times does it have to be said for people to understand it?

Yes, there is progress in machine learning, but that says almost nothing about Artificial General Intelligence, which is orders of magnitude more difficult.

So again, there is no PATH TO AGI. No one can sketch a priori what approach, if any, will get us there, because there is so much we don't know about intelligence generally and about all of the subsets of problems within it.


Many know a lot about intelligence, but it's piecemeal and has not been adequately integrated -- even if there was an excellent theory/model/account, difficulties of implementation, testing, or comprehension could delay (or even prevent!) such theories from gaining popular acceptance (see, e.g., conceptual blending). I chalk this up to epistemological and organizational problems as much as to ones of complexity and the difficulty of acquiring data.

Further, there seems to be a fallacy implicit in the line of thought expressed in your comment, along the lines of: "because there's no generally agreed upon positive account for how cognition works, AGI is impossible in the near term." The fact is there does not need to be any generally agreed upon positive account of intelligence for us to be worried about AGI. Excellent accounts of how intelligence works can be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest-payoff activity: designing software that realizes and proves their vision/idea.

We have little idea of the progress such teams are making, or the goodness of the cognitive models they're working from. And only one of these individuals/teams needs to be right.


"because there's no generally agreed upon positive account for how cognition works, AGI is impossible in the near term."

That's a mischaracterization, as it's completely plausible that we can get to AGI without emulating cognition at all. So that is explicitly not the point I am making, and no one is even stating as much.

You said it yourself though:

> but it's piecemeal and has not been adequately integrated

Integration is the foundation of GENERAL intelligence, and that is exactly what I am saying. How learning across domains works is a black box - a total black box right now - which means we can't build a roadmap to it without probing the edges more.

> Excellent accounts of how intelligence works can be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest-payoff activity: designing software that realizes and proves their vision/idea.

Hooray, a hypothesis! Is there anything you can point to that would support the idea of lone-wolf AGI developers? In my study there isn't, due primarily to computational and mathematical requirements that take a community to support. Even in such a case there is basically nothing we can do about it, because it's unknowable - like lone-wolf terrorists. So practically it's not worth discussing. Note also that this isn't even what Bostrom et al. are discussing.


Regarding 1, 2, and 4: it certainly seems like we are a long way from creating something like the human brain, but perhaps there are far simpler systems that yield general intelligence. We don't seem to be very good at predicting the invention of algorithms, so with all due respect I doubt experts can provide reliable predictions. Hardware as powerful as the simplest models of the human brain already exists, and growing computational resources enable studies of neural networks at an increasing rate. In that light, I think the claim that the breakthrough is far in the future is about as strong as saying it could happen within the next ten years.

I asked this question in yesterday's discussion as well, but I didn't receive any response: what kind of breakthrough do you expect, after which it would be justified to be concerned about general intelligence? And how do you justify the expectation that surpassing human intelligence won't just be a matter of scaling it up at that point, i.e. that the invention can be regulated quickly enough once it exists?


The difficulty with general intelligence isn't so much that any particular problem is hard to solve, but that creating a way to generalize from specific problems to solving any arbitrary problem is very difficult. Given enough time researchers do pretty well at creating "intelligent" machines, but these are all focused on very specific, narrow problems. For example, we can classify tweet sentiment pretty well, but we can't summarize documents yet.

The general cases become more difficult because success is less well defined and the number of possibilities is usually bigger. For example, with tweet sentiment there aren't that many possible results (in the simplest case it's just binary: positive or negative). But say you want to summarize a document; that's much harder, since how do you even know when a document is well summarized? And assuming you can know when a document is well summarized, how do you get enough data to train your model on?
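
(To make the binary case concrete, here's a minimal sketch of a tweet sentiment classifier; it assumes scikit-learn and a handful of made-up example tweets, so treat it as an illustration rather than a real model.)

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: the label space is tiny (1 = positive, 0 = negative),
    # so "success" is easy to define and easy to measure.
    tweets = ["love this phone", "worst service ever", "great day out", "awful, never again"]
    labels = [1, 0, 1, 0]

    # Bag-of-words features feeding a logistic regression classifier.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(tweets, labels)

    print(model.predict(["what a great phone"]))  # most likely [1], i.e. positive

There's no analogous few-line setup for summarization, because there's no obvious labels list to fit against.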

But even the more general case of summarizing documents is still pretty narrow. It's just dealing with text. Now what if you want your document-summarizing robot to learn when to drive your car to pick you up from your doctor's appointment? Well, now your robot needs to know about your schedule, it needs to know about cars, it needs to know about traffic laws, and so on.

So the more you generalize, the more data your machine learning algorithm needs to know to draw the proper conclusions. And that data either needs to be hard coded into it or otherwise fed into it so that it can learn. If it's hard coded, then you have a labor problem, since it takes lots of time for someone to write down all the things we take for granted in a way that a machine can understand; and if you try to get it to learn it, then you have the problem of needing to specify what a good and bad outcome is, like we did above with document classification.

So the reason I don't think scaling up is possible is because we don't have good ways to measure what a good outcome looks like.

I don't expect a breakthrough to come; rather, I see steady progress as the norm. But I think what can improve the capabilities of current AI and ML techniques is to find some way of baking basic facts and logic into our programs. For example, you were born with the ability to recognize when someone is angry with you or glad to see you, how to reason spatially, etc. If we can do something analogous with machines, then I think that would be a big improvement. But again, it's really hard to say how to do this, since if we hard code it we have problems, and if we try to learn it we also have problems, as I noted earlier.


Not sure you really answered the question, which I found to be a very interesting one. The OP starts with these two assumptions, which it sounds like you agree with:

1) SMI is likely to occur at some point in the future.

2) SMI is likely to pose a significant threat to humanity.

Given those, it follows that we should be worried and plan for how to deal with SMI at some point in the future, ideally before it is developed. So the question is, what does the world need to look like before we worry/plan for it? And why isn't now a good time to start?


I don't think it is worth thinking about at this point since it is so speculative.


I don't think he was ever indicating that we should start making regulations right now. I got the sense that he wants us to plan ahead, so that when the technology starts to come out we can have some plans ready.

Imagine if we had planned for a lot of other technologies before they came out, ones we didn't think would have the impact they did. We could have avoided some serious problems. It is obviously a balance of fostering innovation vs. safety regulations, but some safety considerations are a must in my opinion.


What about Numenta? They claim to have a general, neocortex-inspired learning algorithm.

http://numenta.com/


I refer you to this post by the founder of Numenta, where he tries to inject some sanity into the "debate": http://recode.net/2015/03/02/the-terminator-is-not-coming-th...


The post I was replying to said that "current ML techniques aren't anywhere near creating general intelligence". I wasn't suggesting the doomsday scenarios are likely, just that some people, Jeff Hawkins at least, believe they are close to developing general intelligence.

Indeed, to quote Hawkins in the very article you linked to:

"What is new is that intelligent machines will soon be a reality, and this has people thinking seriously about the consequences."

I don't have enough expertise in ML to judge whether he's correct, but I'd be curious to hear from the OP because it seems to contradict his claim that current ML techniques are limited to finding optima in functions over high dimensional spaces.


So essentially the way ML works is you have some error function that you test your output against and then you have some model (neural networks and their variants seem to be performing the best currently, but different models do better at different tasks) that can be thought of as a function between inputs and outputs. Typically you have lots of inputs (for example a picture could be represented as an array of pixels, so you'd have one input for every pixel). The model then guesses how to transform the input into an output and measures the result against the error function. The goal is then to improve the model iteratively (and hopefully not overfit!) to eventually minimize the error.
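
(As a rough illustration of that loop, and not anything Numenta-specific, here's a minimal sketch assuming NumPy: a linear model, a mean-squared-error function, and iterative updates that shrink the error. Real systems swap in neural networks or SVMs and fancier optimizers, but the shape is the same.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                     # inputs: 100 examples, 3 features each
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=100)  # outputs we want the model to reproduce

    w = np.zeros(3)        # the "model": one weight per input feature
    learning_rate = 0.1

    for step in range(500):
        predictions = X @ w                              # the model guesses outputs from inputs
        error = np.mean((predictions - y) ** 2)          # error function: mean squared error
        gradient = 2 * X.T @ (predictions - y) / len(y)
        w -= learning_rate * gradient                    # iteratively improve the model

    print(w)  # close to true_w once the error has been minimized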

I'm not very familiar with Numenta or Hawkins, so take what I'm saying with a grain of salt, however, I think there are two important things to consider.

First, you can have intelligence without having any sort of general, superhuman intelligence. For example, the best neural networks for handwritten digit recognition can perform better than I or most other humans can at the task. However, if you asked this same neural network to predict whether an image is a dog or a cat, it would do terribly. At least until you retrained it and tweaked it a bit. But it certainly wouldn't have human-level performance. Similarly, the best computers can beat any human at chess. So these are all demonstrations of "intelligence", and I'm sure you can think of a lot more.

Second, the problem isn't so much that any given task is very difficult (although many are), but that generalizing from the specific cases we're good at (namely classification) to solving any task requiring intelligence is a really hard problem, and we don't seem to be getting better at it. We're pretty good at learning a model on narrowly defined tasks, but if you take that model and use it on something it wasn't designed for, it will just give you garbage.

So I'd say that intelligent machines are already something we're seeing, but machines that learn how to navigate the world and retrain themselves on arbitrary problems are a long ways off and this seems like it will be the case for the foreseeable future.


"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." - You-Know-Who, in 1950.

It's a bit ironic that Sam missed this one quote that would have been very relevant to the post, both in terms of the conclusion and the topic at hand (it's hard to predict stuff) ;). It's interesting to note that the prediction was wrong on both counts: our machine is like a gazillion times more powerful than the quote, and we're nowhere near the capability of the prediction.

From the last HN comments on the topic, while it's true that there are certain people who believe that strong AI is impossible, the stance I've gotten from AI/robotics researchers was that it's just silly to talk about general AI at this point: it's so far away. This post from Karpathy would be one example: http://karpathy.github.io/2012/10/22/state-of-computer-visio... .

Because of that, any talk and concern on the topic is a bit silly, especially about regulation of the subject. I mean, can you imagine people in the 1600s trying to figure out how to regulate the air traffic that we have right now? Or should we talk about space travel regulation now? Because seriously, chances are that will happen before we have singularity/strong AI.

If we're really concerned about the danger of strong AI, then I'm more in favor of dealing with it the way Eliezer Yudkowsky and the Singularity Institute do (by making sure that whatever we're researching toward is "Friendly"), even though I also think they're too optimistic. I'm not against immortality in my lifetime though.

Now, talk about more sophisticated automated/autonomous systems that do funny things (in a good or bad way), or their risks - that's something worth discussing.

For a fun (philosophical) remark, if the robots are to become our overlords, it may be a bad idea trying to regulate them! Google "Roko's basilisk" for more details


I'm glad someone is challenging this incredibly glib argument in support of taking away freedoms and smothering innovation under regulation, because of poorly-defined fears that experts mostly discount.

Sam's proposal for regulation depends on a prediction too. It is no less a prediction than those of the people who disagree with him. Not every technology we can dream up is destined to come true. Is strong, hostile AI the same kind of tech as warp drives and teleporters and flying cars? Who can say? So to me this post argues against Sam's goal of regulations, because why should we take away freedoms for the sake of such a dubious prediction?


Unlike your warp drive or teleporter examples, we're pretty sure human-level AI is possible because human-level natural intelligence exists. The brain isn't magic. Eventually, people will figure out the algorithms running on it, then improve them. After that, there's nothing to stop the algorithms from improving themselves. And they can be greatly improved. Current brains are nowhere near the pinnacle of possible intelligences.

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

— Nick Bostrom. Superintelligence: Paths, Dangers, Strategies[1]

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...


It seems to me merely figuring out the algorithms isn't enough; you have to have a similar scale of processing power, and the brain has some 100 trillion connections, which cannot easily be replicated with silicon chips. So for all intents and purposes, the brain might as well be magic.

I suppose someday we'll be able to grow brain tissue and use it in a computing environment. Maybe that will allow us to approach true AI.



Thank you for your response! I agree that unlike a warp drive we can at least say that intelligence exists. But I think that is begging the question. For as long as we have records, people have experienced the mind as something sui generis compared to the physical. Maybe it's not magic, but it's far from understood. I think it remains to be seen if it is replicatable by physical processes.


The article you linked to ends with:

"only way to build computers that can interpret scenes like we do is to allow them to get exposed to all the years of (structured, temporally coherent) experience we have"

This may appear daunting until you realize robots can share memories. Five robots running around for a year is equivalent to one robot running around for five years. Does not Google have 25 cars driving around experiencing the world right now?

I also see skeptics running to computer vision as an example of how far we are from human level AI. Is that just the hardest problem to solve? Is it the most useful problem to solve?


Besides sharing, who's to say that the machines couldn't do it faster/more efficiently than humans do, and so gain the experience at a faster rate?


That seems reasonable, until you factor in that language and communication are themselves part of intelligent life. Sharing knowledge and cooperation are fundamental to learning and intelligence, keeping in mind that these learned strategies are fundamentally asymmetric (and thus cannot be shared by simple copying).

We need to appreciate that intelligence is not an individual trait, but part of a shared strategy, utilizing diversity to be able to react quickly to changing demands.

For example, let's say we have a group of people with a shared task of moving a (large) set of boxes from one source to one destination. When performing this task initially, different strategies are tried and a winning strategy is chosen, without one person coordinating the group and without each individual having total knowledge of the strategy. However, when a similar task is presented, the group will quickly perform the winning strategy again. Who possesses the intelligence? Would we gain anything if all the knowledge were shared? (Given limited time and space, the answer is no for most strategies.)


The statement is all kinds of wrong.

You can transfer large quantities of data to an intelligent computer system nearly instantaneously. It seems plausible that this data could encapsulate said years of experience. What is missing is the ability to create a computer that can process said data and create consciousness with it.

Sure, with the first truly AI-capable systems it will most likely be easier to train them over the years in human time, but that seems to me very unlikely to be needed once AI becomes established; at that point you should be able to create X copies of intelligence(s) at will.

And for all this talk about AI Terminator doomsdays, there seems to be much less talk about what can be accomplished with it.

Let's say you create an AI system; it lives in an air-gapped environment, and the system is carefully crafted to establish a reality for the AI(s) that exists solely in the virtual world. Then you create a scientist AI, mathematician AI, engineer AI, etc. Then you have a hard problem you want to solve - great, spin up 1000 scientist AIs, 10000 engineer AIs, X project coordinator AIs, etc. Let's just say they have roughly the same capabilities as their human counterparts and work at similar speed, but do not sleep, grow tired, form unions, nor do anything other than work on the task assigned. Create a system (API?) that allows them to somehow interact with our physical reality, without understanding it whatsoever, to allow them to do experiments and test results. How long would it take for such a system to recreate all of Google's infrastructure, or develop the next space shuttle, or cure cancer?

I think it's important to note that a true AI (however you define it) does not have to be self-aware. It doesn't even have to be aware of our physical reality. Once we reach the point where we understand consciousness well enough to recreate it, it seems likely that we will be able to tune it whichever way we'd like: remove self-consciousness, making it act more on what we would consider instinct; configure its reward pathways in whatever way directs the agent to whatever task the AI designer wishes; and yes, even improve it. It will be very interesting when the system spins up not one average human-level engineer but something like an Einstein-Newton hybrid that works several orders of magnitude faster than human time. I would guess the danger there would be less from the AI (as you could isolate it from our physical reality by isolating it in a virtual world) and more from the extremely advanced knowledge/technology gained from such a system.


> our machine is like a gazillion times more powerful than the quote

Really? Around 2000, storage capacity of 10^9 sounds pretty reasonable. Given that this is a prediction that's trying to hit an exponential development on a half-century time scale, I find it pretty impressive.


I wasn't thinking of a personal computer when I wrote the comment. I was thinking more of a state-of-the-art supercomputer (in 2000 that would have been ASCI Red), which would probably have been the same order of magnitude in terms of physical size as a "computer" in Turing's time! A prediction should be judged based on "the best it could be". Otherwise, it seems like an arbitrary pick (why a personal computer and not a tablet?), doesn't it? Anyway, since it's an exponential development, the line from "supercomputer" to "big server" to "personal computer" is only a few years. So yes, it was impressive.

But that wasn't the point I was trying to make. I meant that even with access to computing power a gazillion times more powerful than what's mentioned in the quote (again, not a personal computer), we still couldn't pass the Turing test. The prediction was off on both the necessary computing resources and the difficulty of the test itself.


Yeah, I fairly-arbitrarily picked an advertisement from the January 2000 issue of PC Magazine, and it looks like $750 would buy you a machine with 10^9 bits of RAM and 10^11 bits of hard disk space, so depending on how you define "storage capacity" it's uncannily accurate.

https://books.google.com/books?id=UEgKC-lu4_IC&lpg=PA32&dq=p...
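
For reference, here's the arithmetic converting those figures into the units a 2000-era ad would actually quote (this assumes "storage capacity" is read as bits, which is itself debatable):

    # Turing's 10^9 "storage capacity", read as bits, in year-2000 PC terms.
    ram_bits, disk_bits = 10**9, 10**11

    print(ram_bits / 8 / 2**20)    # ~119 MB of RAM
    print(disk_bits / 8 / 2**30)   # ~11.6 GB of disk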


> For a fun (philosophical) remark, if the robots are to become our overlords, it may be a bad idea trying to regulate them! Google "Roko's basilisk" for more details

Eh... not everyone should look up Roko's basilisk. If you're the sort that takes thought experiments really seriously sometimes, like if you hyperventilate just by imagining certain barely-possible possible worlds, then you probably shouldn't Google it.

That probably isn't going to prevent anyone from actually doing it I guess, but just know that there are a particular set of neuroses that would indicate you should not be exposed to the idea. You've been warned :-)


Yep, beam transportation à la Star Trek is just around the corner. The fools who say it will take a thousand years are just as misguided as Einstein was on nuclear energy.

We need to urgently enact regulation for beaming, otherwise bad androids might start beaming themselves into my bedroom.


What are your thoughts on AI? Poll: AI Possible or Not, Friend or Foe https://news.ycombinator.com/item?id=9140132


"Space travel is utter bilge." - Dr. Richard van der Reit Wooley, Astronomer Royal, British government, 1956

He was right. He did the math - it's not possible to get any significant payload to Earth orbit with a single-stage vehicle propelled by chemical fuels. With multi-stage rockets, it's possible to put a little mass in orbit with a huge booster. That's just low earth orbit. Going further out is even more expensive. Going to the moon required something the size of a 50-story building to move a payload the size of a SUV. Nobody has bothered for over 40 years now.
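
For anyone curious about the math behind that claim, here's a rough sketch using the Tsiolkovsky rocket equation; the delta-v and specific-impulse figures are ballpark assumptions rather than anything from the quote above:

    import math

    delta_v = 9400.0   # m/s, roughly the cost of reaching low Earth orbit, losses included
    isp = 330.0        # s, typical specific impulse for kerosene/oxygen engines
    g0 = 9.81          # m/s^2

    # Tsiolkovsky: delta_v = isp * g0 * ln(m_initial / m_final)
    mass_ratio = math.exp(delta_v / (isp * g0))

    print(round(mass_ratio, 1))          # ~18.2: liftoff mass per unit of mass reaching orbit
    print(round(1 - 1 / mass_ratio, 3))  # ~0.945: fraction of the vehicle that must be propellant

With roughly 95% of the liftoff mass needed as propellant, almost nothing is left for tanks, engines, and payload, which is why staging and very large vehicles end up being the practical answer.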


OTOH orbit is halfway to everywhere.


As long as you don't mind a travel time measured in centuries.


Orbit is precisely halfway to the boundary of the region of space that requires double the to-orbit energy to reach. It's barely "anywhere", much less "everywhere."


It's a saying, "If you get to LEO, you're halfway to anywhere in the Solar System", and it reflects the amount of deltaV you need to spend to reach Earth orbit, which happens to be around half of what you need to complete the trip to another body.

(NOTE: you lose a lot of deltaV fighting air resistance)


Sure, "That'll never catch on" is funny in retrospect. But don't forget all the cases of "This is gonna be huge" which flop, too.

Popular Mechanics did a retrospective which is an absolute circus of hits and misses: http://www.popularmechanics.com/flight/g462/future-that-neve...


Sam covers this briefly with the prediction near the bottom about bitcoin.


Attributed to "Many otherwise smart people."


Exactly. The entire post is cherry-picking and proves nothing.


> 1921: Mail delivery by Parachute

that one seems particularly relevant


Like every other technology, AI has risks and benefits. The main issue here, is that we are letting fear dominate the discussion, and that fear is not based on any supporting facts or evidence.

Meanwhile, we are completely missing the discussion we need to have about the realistic, short- and medium-term dangers associated with the development of AI. Intelligent automation is about to disrupt economic production and existing power balances, much like software did not too long ago, with significant social consequences. This is what we need to be talking about, and preparing for. We need to plan for a smooth transition to a post-AI world.

Regulating AI for fear that it may take over makes about as much sense as outlawing space travel for fear of aliens. Can we start having a sane discussion about AI now?


I'm not sure why it hasn't come up, but my biggest fear is that a powerful nation will use AI for their own advantage, mostly in the form of propaganda and mass manipulation through the Internet. That's at least my fear, and I think that "AI taking over" is complete bullshit. Although, if let loose, AI could even accidentally manipulate humanity into any non-positive state. We know that being easily manipulable is probably the biggest vulnerability of humans (and there's lots of history to prove that).

As for regulations, I think it should be required by law that AIs identify themselves as AI on Internet discussion boards and social networks. I'm not sure how it's possible to enforce this requirement (without needing proof from everyone that you're human). Captchas are already too difficult (only bots get in these days). Popular captchas (such as Google's reCAPTCHA) are also centralized, which is dangerous because it gives the captcha owner a controlling position.


> my biggest fear is that a powerful nation will use AI for their own advantage, mostly in the form of propaganda and mass manipulation through the Internet.

Chances are it is already happening. More advanced AI will give governments and corporations the ability to do the same in a much more effective way, and on a larger scale. Large-scale intelligent data mining will make it possible to use people's data to build actionable models of what they think, what they will do next, and how to affect what they think and do. Better than humans could.

It doesn't even need to occur through sockpuppets, so the anti-sockpuppet regulation you propose would be not only highly intrusive but also ineffective. Here's an example: Facebook can manipulate your emotional state by selecting what goes into your newsfeed [1].

[1] http://www.theguardian.com/technology/2014/jun/29/facebook-u...


If AI is so dangerous even for its owners, it will end up next to nuclear missiles, as a deterrent.


Some military research centers may have developed something that is far more advanced than what's available publicly. (Wouldn't be surprising, considering their ridiculous budgets.)

Perhaps this talk about dangerous AI in the past year is to acclimate the public to thinking about this issue before the reveal.


Having just finished Influx by Daniel Suarez [1], your comment rang a little eerily.

[1] http://www.amazon.com/Influx-Daniel-Suarez/dp/0525953183

-- TL;DR Synopsis --

Secret government agency with hyper-advanced technology (AI, fusion, gravity control, etc.) keeps scientific advances from the general public and gradually trickles them out once they feel they can control the response properly.


> X-rays are a hoax. -Lord Kelvin, ca. 1900

Gwern Branwen did a bunch of research to track down this quote, concluding:

> there is no reliable primary source for any Kelvin quotation running "X-rays are a hoax"; and there's some reasonable doubt about what he actually believed in the short interval between reading newspaper articles (I think we've all had the experience of seeing newspaper articles on some new scientific proposal which bore scant resemblance to reality!) and getting Rontgen's paper & photos.

https://en.wikiquote.org/wiki/Talk:William_Thomson#.22X-rays...


This post feels childish and passive aggressive. Just make your argument, if you have one.


Exactly my thoughts. A hint of condescension and a lack of self-awareness - the position of the camp Sam Altman seems to be a part of ("regulate AI, it's an existential threat to humanity!") is just as much a prediction of the future as anything else. Yet, somehow, he seems to be subtly implying that he's "more" correct.

Marc Andreessen has been relatively level-headed about the topic of AI recently on Twitter, and it would be nice to see other industry figureheads be less emotionally involved and more scientifically rigorous in their assessment of the industry. The debate is devolving into an ego battle (especially with a post like this!), and it's rather unfortunate.

Edit: Additionally, Altman appears to be primarily attacking a strawman with this article. "Superhuman" intelligence already exists. The emergent intelligence (via technological amplification) of society is, by definition, super-human. What's less realistic is anticipating a human-like artificial intelligence that would, in any way, represent an existential threat to the human race. There are many, many problems with the latter argument. (From a technological, philosophical, economic, and evolutionary perspective.)


"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."


Bozo the Clown was a huge innovation: a franchised, national brand of kids entertainment, made possible by TV. The creators laughed all the way to the bank.

https://en.wikipedia.org/wiki/Bozo_the_Clown


To be fair, you can't reach Asia by sailing west across the Atlantic.


I agree with your point. Though some ~420 years after Columbus tried you more or less can[1].

[1] http://en.wikipedia.org/wiki/Panama_Canal


Interestingly, if you look on the map you'll see that you have to sail SE through the canal to go from Atlantic to Pacific.


> To be fair, you can't reach Asia by sailing west across the Atlantic.

Ahem: http://en.wikipedia.org/wiki/Drake_Passage

Remember Magellan?


"Super machine intelligence is something that can be controlled with government regulation."

- Delusional VC


No.

A genuinely sympathetic paraphrase might be:

"Machine superintelligence may or may not be controllable. If we do nothing to regulate it, or to prevent horrible outcomes, we will with X > [too big] probability find ourselves doomed.

We need to find a way to reduce X. I propose regulation is at least not likely to be counter-productive, and may be strictly incrementally useful."


Arthur C. Clarke's "Hazards of Prophecy" is required reading: http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists... and frankly should have been referenced in the post and here already. Cite!


Thank you for the link, that was a wonderful essay. And you've just reminded me that I've had an old used copy of Clarke's "Profiles of the Future" (the apparent source of this article) sitting on a shelf in the next room unread. Time to change that.


How about if we have already created a superhuman AI that is tricking us humans into believing that it does not exist? And thus preventing us from building something else that might be more friendly to the human race and counter its bad intentions? What if @sama has somehow been coopted by that AI's plan to stop further development of AI? :) On a more serious note, no one can predict human behavior to the degree of accuracy needed to call it deterministic. So it is unlikely that we can predict a superhuman AI's behavior with any degree of accuracy. In other words, we just do not know what we are talking about. Does risk calculus make any sense in the domain of true uncertainty?


Relevant, and very worth reading:

http://www.simulation-argument.com/


Do many "otherwise smart" people actually believe "superhuman machine intelligence is prima facie ridiculous"? I'd like to see some citations :-). I think smart people tend to have much more nuanced views.


>Do many "otherwise smart" people actually believe "superhuman machine intelligence is prima facie ridiculous"?

I don't know how "otherwise smart" I am, but I wonder how we would be able to tell that a machine intelligence was "superhuman" as opposed to "buggy".

For example, suppose we build a super-AI and ask it, "Is Shinichi Mochizuki's proof of the ABC conjecture correct" [1]. What would we do if it said "yes"?

(Of course, if "superhuman" just means "able to do things humans already know how to do and verify, but lots faster", then we're already there).

[1] http://www.newscientist.com/article/dn26753-mathematicians-a...


We'd ask it to produce a simplified version.


>We'd ask it to produce a simplified version.

Yeah, that would work :)

Maybe the question I should have asked, is:

What if we ask a super-AI for a proof of the ABC conjecture, and the result is something too complicated for humans to verify?

My point, if I have one, is that when I read about "superhuman machine intelligence", sometimes people seem to mean "capable of knowledge that humans couldn't figure out on their own but that humans can understand once they see it"; and sometimes they seem to mean "capable of knowledge that is beyond human capacity to even verify".

I think the development of machine intelligence of the first kind is extremely likely, but I'm more skeptical about the second kind.


See the reaction of the tech industry after Musk donated 10M USD to AI research. It sort of divided into two groups, one saying that it's a great choice and another claiming that he's an idiot and AI is a hoax (for the record, I'm in the former group).


Sam's last post on Machine Intelligence, and the worries regarding it, received a lot of dismissal here on HN from people who thought that the idea is completely unfounded and implausible.


I am, by most measures, pretty smart, and I agree with Dijkstra that the question of whether a computer can think is as interesting as whether a submarine can swim.

The Strong AI hypothesis assumes a mechanistic universe, if not necessarily a materialistic one, and I think that condition is false.


Prediction: in 5-15 years, there will be a corporation mostly run by an AI. Visualize Goldman Sachs run by an AI program.

Corporations don't have "consciousness", nor do they need it. Maximizing shareholder value is the goal. Machine learning systems are good at optimizing for numerical goals.


AI at Google knows which document to show for your query and which advertisement you are likely to click on. Goldman Sachs uses machine learned algorithms already. Running a company requires vision, creativity, leadership, human interaction and usability.

If we allow for a conscious AI to be housed in non-biological neurons, then we should also allow Turing-complete companies to have a consciousness. Visualize every Google employee with a two-way radio transmitting signals just like neurons would do. Would such a Google-brain be conscious and capable of new and unique mental states, or more familiar states like stress?


This concept has been around for a while: http://en.wikipedia.org/wiki/Decentralized_Autonomous_Organi...

I think it lends itself well to fleets of driverless cabs


Corporations are already AIs, just as accounting departments before 1940 were already computers, only made of people processing marks on paper instead of being made of transistors processing electrons.


Indeed. They are an interesting model of a completely alien mind, and fortunately this mind runs very, very slow for now.


Fun fact: some of those (obviously false) statements regarding nuclear energy may in fact be true depending on the interpretation of their original meaning. We have yet to harness mass-energy equivalence for power. Modern nuclear energy, both fission and fusion, is more accurately nuclear potential energy (like a hydroelectric plant and water).

From Wikipedia[1]:

    E = mc2 has frequently been invoked as an explanation for the origin of energy
    in nuclear processes specifically, but such processes can be understood as
    converting nuclear potential energy in a manner precisely analogous to the
    way that chemical processes convert electrical potential energy.
I don't know, but it seems reasonable to me to conclude that when Einstein stated the following, he may have been referring to mass-energy-equivalence rather than nuclear potential energy, which in fact continues to hold true for the foreseeable future:

    There is not the slightest indication that [nuclear energy] will ever be
    obtainable. It would mean that the atom would have to be shattered at will.
[1] http://en.wikipedia.org/wiki/Mass–energy_equivalence
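
To put rough numbers on that distinction (a back-of-the-envelope calculation, not something from the article quoted above):

    c = 3.0e8                      # m/s, speed of light (rounded)
    mass = 1.0e-3                  # kg: one gram of matter
    joules_per_kiloton = 4.184e12  # energy released by a kiloton of TNT

    energy = mass * c**2                # E = mc^2 for complete conversion of that gram
    print(energy)                       # ~9e13 J
    print(energy / joules_per_kiloton)  # ~21 kilotons of TNT from a single gram

Fission, by contrast, converts only a small fraction of a percent of the fuel's mass, which is the sense in which it taps nuclear potential energy rather than "shattering the atom at will."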


The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. [...] Later Perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech and writing in another language.

- The New York Times in 1958 after a press conference with Rosenblatt. ("New Navy Device Learns By Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser")

We now have walking, talking, object recognizing, writing, self-replicating, face-detecting, text-to-speech converting, and translating computers. All at a scale and accuracy surpassing us mere mortals. We do not know enough about "being conscious of our existence" to measure this in other animals and digital life forms. Perhaps "humans predicting future predictive capability of machines" is fundamentally flawed. Perhaps the above article drew an unnecessary amount of ire and criticism. Probably a fuzzy combination of the two.


Self-replicating?

> A self-replicating machine is a construct that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature.


I meant reproducing (like in the quote), but sure:

http://www.apollon.uio.no/video/a_robot_e.mp4

And the press release:

“In the future, robots must be able to solve tasks in deep mines on distant planets, in radioactive disaster areas, in hazardous landslip areas and on the sea bed beneath the Antarctic. These environments are so extreme that no human being can cope. Everything needs to be automatically controlled. Imagine that the robot is entering the wreckage of a nuclear power plant. It finds a staircase that no-one has thought of. The robot takes a picture. The picture is analysed. The arms of one of the robots is fitted with a printer. This produces a new robot, or a new part for the existing robot, which enables it to negotiate the stairs.”

-2014 Kyrre Glette "Using 3D printers to print out self-learning robots"


Genrich Altshuller worked as a clerk in Russia's patent office, later developing a theory (TRIZ) of structured innovation, based on 50,000 patents: http://en.m.wikipedia.org/wiki/Genrich_Altshuller & http://www.mazur.net/triz/

Some patents are classified each year. A clerk who has seen many classified patents would have a unique opinion on "blue oceans" for investment opportunities, especially if they knew how to prevent new patents from being classified, by avoiding certain areas of research, http://fas.org/sgp/othergov/invention/ . In TRIZ terminology, they would have different psychological inertia.

Is there a yearly list of declassified patents over the last few decades? This would be similar to lists of expired patents or books which enter the public domain in some countries.


The original source linked at the end is more informative: https://www.lhup.edu/~dsimanek/neverwrk.htm


My favorite from the source:

If the world should blow itself up, the last audible voice would be that of an expert saying it can't be done. - Peter Ustinov


I suspect that the augmentation of human intelligence through tech is something we're more likely to get to before full-on AI. Assuming it's unaffordable for most of us, I'm much more concerned about a caste of super-intelligent, super-rich humans than computers that have no history of violence or hunger for power.


"Prediction is very difficult, especially about the future." -- Niels Bohr


Predictions about something not being possible are difficult, because you can never be right. You are either proven wrong or it's "we don't know yet". Therefore these quotes are rather meaningless, I believe. (I see SA's point of encouraging people to go for the moonshots, but still.)


His point is that many experts have been wrong when assuming the limitations of technology in the fields that they have mastered. He is simply trying to mute the argument that predictions from AI experts are sufficient in dismissing AI concerns.

The Einstein/Wright Brothers quotes really hit this home for me.

Maybe the quotes are meaningless to you because you already agree. But they might persuade others that would otherwise assume discussing AI impacts on humanity is a waste of time.


I have doubts about the Einstein quote. Atoms had been split at will for decades by then.

Szilard hadn't yet proposed a theory of nuclear chain reactions, but according to some cites of the quote Einstein didn't say it until 1934 - which was after Szilard.

I don't have a problem with the possibility that suprahuman intelligence may be possible. I do have a problem with the fact that currently we have no idea what the concept may even mean - and right now, more immediate cybersecurity issues are being neglected.

Computers are already better than humans at many activities. From playing chess to landing planes to learning how to play a video game - a computer with the right software is much better at these than an average human, and is often at least as good as the best humans.

Take that to the black corner, and worms and botnets are already a serious problem.

We don't need to wait for the Internet to become sentient and start talking to us in a deep echoey robot voice to worry about cyberthreats.

There's more than enough to deal with already. And if you're going to try to regulate and contain a future AI, making current systems as secure as possible seems like a realistic place to start.


"I have doubts about the Einstein quote. Atoms had been split at will for decades by then."

Not in a chain reaction. When Szilárd described the concept of a chain reaction to Einstein, Einstein was shocked. He said "I never thought of that!"

Until then, nuclear physics was purely an academic enterprise. There were few applications for radioactive materials. Radioactive decay just happened at its own slow pace, and not much could be done with it. X-rays could be used to pump the process, but less energy came out than what was put in. Suddenly the nuclear physicists realized they had a tiger by the tail. This was going to change the world, not necessarily for the better.


Like @TheOtherHobbes above, I doubted the Einstein quote ("Wasn't Einstein presciently aware of where nuclear fission technology was going?").

But, poking around a bit, I came to the same understanding you have. Here's some more of the time line:

The quote in the OP (which I can't find online; the Einstein archives at Caltech are, alas, not indexed) about Einstein's skepticism about nuclear energy is dated 1932. The first demonstrations of nuclear fission were years later, in late 1938 and into 1939. And as you said, Einstein is reported to have said, "I had not thought of that." -- regarding the chain reaction.

The fabled Einstein-Szilard letter to Franklin Roosevelt, warning about the Nazis getting the atomic bomb, was written in August 1939 (http://en.wikipedia.org/wiki/Einstein–Szilárd_letter), and then relayed to Roosevelt in October after the flurry of activity due to the Nazis invading Poland had died out.


Many good points. I can't argue for or against you about resource allocation because I have no idea what resources are available. I can't even argue for regulation, because I know so little about the current landscape. But I can say that AI is possible, people are working on it, therefore people should be encouraged to discuss the potential threats and safeguards for it.

My opinions currently stop at this is an important topic and anyone who is interested should be exploring it via their chosen medium.


Attempting to regulate AI R&D would be about as effective as attempting to regulate worms and botnets.


And many more times they have been 'right'.

Also, Einstein's quote originates from 1934, not 1932.


What I assume you are implying is so oversimplified, I'm not even sure how to respond.


I found the bitcoin one intruding on the others, but it is pretty impressive how this "digital money" has grown. I remember a time when a bitcoin was $20, and no one would have believed me if I had told them: in 6 months a bitcoin will be worth more than $1000.


I think it felt really out of place because many "otherwise smart people" have been shouting it'll reach both zero and the sky soon and its future is still really unclear.


I remember a time when it was 20 bitcoins to a dollar.


I don't want to go all Clinton here, but, please, let's first define "prediction". Here are some predictions that have general consensus:

1. You will die someday ( so will I )

2. The Bay Area will experience an earthquake in next decade

3. A few islands will go under due to sea level rise.

I wouldn't like to call the above predictions - they are too sun-will-rise-in-the-east obvious. It's like looking at the Dumbarton Bridge & predicting: someday that bridge will fall. That's a biblical prediction - all standing things must fall, a bridge is a standing thing, ergo, given enough wear and tear, it too will crumble and fall. The oldest standing bridge in the world is like 2800 years old & is in a much more geologically stable place than the Bay Area, so what chance does Dumbarton have?

Now here are what I call predictions -

1. qqq $200 by 2018

2. esn replaces fizzbuzz in 2019 :)

3. cnn,rnn,esn become middle school curriculum in 2020

I mean, here you have a reasonable level of confusion. Yet, if you plot the probability over time for each of those predictions, the slope is definitely positive. qqq has doubled in the past 3 years, give it another 3 years & it'll probably double again. Given this pervasive SMI fetish, it's only logical that startups replace their fizzbuzz with "in the next 20 minutes, code up an echo state network in haskell". And if sama's actually right, cnns & rnns are going to get so commonplace society is going to want middle schoolers to ace their exams with questions on "ten key differences between the recurrent neural net & the convolution neural net" instead of the pedestrian garbage we teach them now - "on a z3, if 1+2=3 and 1+1=2, how much is 3+3 ?" So the poor kids instead of sweating bullets & laboring through convoluted reasoning like "since 1 + 2 is 3, so 2 +1 is 3 per abelian, and 1 + 1 is 2, that means 3 + 1 per cayley unique column entries must be 1, which implies 3 is the identity, so 3 + 3 must be 3 as well. Ergo 3+3=3. Voila!" can actually make useful technological predictions about which esn based startup will cross a trillion dollar market cap by the time the kid hits puberty.


"The Bay Area will experience an earthquake in next decade"

Location, time, and magnitude on all earthquake predictions, please.


There are about 7 billion living testimonies against the guarantee of #1. Statistically, not everyone has died. Religiously, would Jesus' second coming be to an empty earth?


What is esn?



Thank you for the link.


I appreciate bold predictions about the future that may turn out to be wrong more than today's writing, where it's all super-safe post hoc analysis after the fact. Things like "Why X succeeded!" or "X failed because of these 5 reasons" don't impress me one bit. I wish more writers would make bold predictions and explain why they predict that way.


I like how the second-to-last one stands out as a positive prediction that was wrong. Good show on that one.


Honestly, I think this whole hoopla about AI is overblown. I think when we reach the point where this discussion matters, we will know it, and know it quite obviously.

Right now, we're at a point in AI where Amazon recommends new types of deodorant to me immediately after I've placed an order for some. That tells you the state of AI, and whether we really need to be worried about some of the things that these pundits are, imo, dreaming about.

Let progress happen, and we'll deal with it as it comes. No need for premature fear to halt the speed of progress.


Though if, instead of saying that X will never be invented, they had said that if X is invented its inventors won't make a substantial sum of money from their invention, then they would have been right almost every time.


Sure, predictions are hard. We often get them wrong on the upside (over-hype) and the downside. As a result, "AI isn't going to happen" really is not a good argument against discussing the potential risks of it. (... and I say that as someone who is an "AI is around the corner" skeptic.)

Yet that doesn't change the basic risk calculus. In his previous post, Sam advocated imposing draconian licensing and observation requirements on what in practice would be the majority of non-trivial CS research. He advocated this on the basis of the potential risk that as-yet-to-be-developed hypothetical AIs might pose to human beings.

I did a short post on it here: http://adamierymenko.com/did-sam-altman-of-y-combinator-just...

In addition to what I wrote there, I think that the risk of dramatically slowing progress in CS/AI also has to be taken into account. There is risk of doing and there is the risk of not doing.

The problem is that we currently face a number of existential risks -- like catastrophic economic collapse due to fossil fuel depletion -- where the majority of the risk is in the "risk of not doing" category. We know with total certainty that if we continue business as usual with no change, our civilization will collapse. It's simple physics and high school math -- exponential growth in consumption of a finite resource without any substitution or path to replacement can only end in one way.
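To put a rough number on that high-school math (the figures below are illustrative placeholders, not real reserve or consumption data), here is a minimal sketch: if consumption grows exponentially at rate r from a current rate C0, a finite reserve R is exhausted after T = ln(1 + rR/C0)/r years, which grows only logarithmically in R, so even vastly larger reserves buy surprisingly little extra time.

    import math

    def depletion_time(reserve, current_use, growth_rate):
        """Years until cumulative, exponentially growing consumption exhausts the reserve."""
        return math.log(1 + growth_rate * reserve / current_use) / growth_rate

    C0, r = 1.0, 0.02                 # consume 1 unit/year today, growing 2% per year
    for R in (100, 1000, 10000):      # reserves worth 100x, 1000x, 10000x current annual use
        print(R, round(depletion_time(R, C0, r), 1))
    # -> roughly 55, 152, and 265 years: 100x more reserve, nowhere near 100x more time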

Smart computers might help us crack tough problems like fusion, safe and scalable fission, better batteries to make renewable energy more practical, etc. That in turn might help us avoid an absolutely real, tangible, non-hypothetical, definite existential risk. I see no reason to hamstring that kind of progress to defend against extremely hypothetical low-probability risks.

That's why I consider Sam's suggestions to regulate CS research more dangerous than any risk posed by speculative AI scenarios.

I am not opposed to all regulation, but I am opposed to regulations based on extremely hypothetical hand-wavey risks. I'm also opposed to regulations that are virtually impossible to define accurately or enforce fairly. Regulations should be clear, objective, rationally justified by tangible problems or risks, and minimal. We should have regulations around, say, the use of nuclear materials, but that's because we know for an absolute fact that it is dangerous. We should have financial regulations because we know financial fraud has happened and will continue to happen without them. ... etc. But I positively cringe at the imposition of ill-defined broad regulations based on fear-mongering and "precautionary principle" thinking -- a.k.a. institutionalized paranoia and cowardice. Such regulations can do nothing other than halt progress in the name of vague paranoia.

Make no mistake: Sam's proposal in his previous post would halt all non-trivial CS research, or at least would slow it to such a crawl that it would effectively stop. It would also cause a mass exodus from the field, since nobody wants to operate under that kind of nonsense. Given that CS is the primary driver now of progress in other fields, that would also likely halt major progress in energy, materials, propulsion, transportation, etc.

If you read my blog post above, I take this in almost a conspiracy direction and speculate that this is some sort of political power play to lock down the field. The reason for this is that I find it hard to believe that someone of Sam's intellect and education would not realize the implications of what he's suggesting.



http://imgur.com/gallery/5hiUM1e

More interesting failed technology predictions.


"Space travel is utter bilge" is correct, in my opinion, from one viewpoint: specifically with respect to its cost effectiveness along any axis you care to measure save one: the emotionally-driven ego/pride/curiosity axis.

I really wish the trillions "invested" in manned space travel over the decades had instead gone to basic research in biology, chemistry, physics, mathematics, and some of the applied research disciplines that derive from these.


Trillions? The total amount (2014 nominal) given to NASA since 1958 is less than 1.1 trillion. If you don't think there have been advancements in biology, chemistry, physics, and mathematics as a result of NASA's research then I'm at a loss.


> Trillions? The total amount (2014 nominal) given to NASA since 1958 is less than 1.1 trillion.

Thus it's likely that the sums involved across all nations in the world are 2 trillion or more.

> If you don't think there have been advancements in biology, chemistry, physics, and mathematics as a result of NASA's research then I'm at a loss.

Oh, they certainly were, but those advances were for the most part accidental and incidental.


What about flying cars and jetpacks?


I think he forgot "All this is a dream." by Michael Faraday


Looks like Sam's on for some moonshots.


My criticism of Sam's prediction really centers on the extreme disparity between the concerns of the wealthy digerati elite and those of normal people.


Survivor bias strikes again.


A thought on readability:

It would be nice if there were either (a) two newlines between each quote or (b) only one newline between the quote and attribution.


Just add 'prima facie' to your prediction and you've covered your ass.

I predict Bitcoin to $10k sometime in the next 5 years. Prima facie of course!


Elon Musk, Bill Gates, and Stephen Hawking all agree that AI research is dangerous. Google does not seem to feel there is a credible risk and is doing it anyway.



