
Thing is, we have good evidence that all those things actually happen (except maybe the nuclear winter), and that they are/were increasing.

Meanwhile, all we have from the AGI doomsday people is sci-fi stories, and actual programs that still can barely distinguish a cat from a mole even after looking at thousands of pictures.




It would be really useful if there were some historical example of a sudden increase in intelligence leading to a new class of agents taking over the Earth and determining its future.

Or maybe if there were any evidence that AI capabilities are increasing, like programs becoming dominant at Chess and Go, progress on driverless cars, or the slew of recent papers on transfer learning.

Maybe if one of the co-authors of the leading AI textbook, Stuart Russell, voiced concerns, we could count that as evidence.

But you're right, it's better to wait until we know for a fact that someone's built an agent smart enough to end civilization before we commit any resources to the problem.


Driverless cars are a great example. Thirty years after the Navlab¹, and twenty after it drove from Washington to San Diego mostly by itself², we still don't have a viable model that can safely navigate even a highway. But "human-level AI" is ten years away?

To me, all these discussions sound like someone who read From the Earth to the Moon in 1865 and immediately started working out how to keep the passengers from being harmed by the explosion in the barrel.

¹ https://www.youtube.com/watch?v=ntIczNQKfjQ

² https://www.youtube.com/watch?v=bdQ5rsVgPuk


Nobody is saying human-level AI is ten years away; people are saying there's some chance of it within ten years. The median estimate is quite a bit longer. But a 10% chance just means you divide the impact by ten, and with an event like this that still leaves you with a really damn big number.
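
To put that expected-value arithmetic in concrete terms, here is a minimal sketch in Python. The probability, the impact figure, and the units are made-up placeholders for illustration, not anyone's actual estimates:

    # Hypothetical expected-value calculation for a low-probability, high-impact event.
    # All numbers below are illustrative placeholders only.
    p_agi_in_10_years = 0.10   # assumed 10% chance within a decade
    impact = 1.0e9             # assumed impact in arbitrary units (e.g. lives or dollars)

    expected_loss = p_agi_in_10_years * impact
    print(expected_loss)       # 1e8 -- dividing the impact by ten still leaves a big number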


I get that, but I still find the numbers completely unrealistic. They're just setting things up for another AI winter.


If an AI winter happens again, it won't come from the AGI side but from over-hyped deep learning failing to provide much business value at the enterprise level (where it's only starting to be exploited by the big players), at least compared to simple linear regression.

The AlphaGo situation is an interesting one. Did you have any predictions for when an AI would beat a top pro at Go? I didn't really learn to play until sometime in 2015, but I was amazed AIs still hadn't dominated; they weren't even close. Still, I saw that with the single trick of MCTS, AIs had improved a lot and seemed to show steady improvement year after year for a while. It wouldn't have been unreasonable to predict that at some point, with X amount of computing power, an AI could be made to win.

Then later that year I saw a paper reporting that a deep-learning-based bot was beating all the MCTS bots. Immediately it seemed clear that the first person to fuse deep learning with MCTS would create a very strong Go bot, but would it beat pros? Maybe with a year of effort by a big company using custom hardware like IBM's Deep Blue, or more likely these days GPU clusters, but would it happen soon? Not for a year at least. Turns out it was already happening (the AlphaGo team started in 2014, and Facebook had a project going too), and the announcement of the Fan Hui matches took a lot of people by surprise. Some were surprised because their predictions were ignorant of advances in MCTS or deep learning (or both), so they still expected many years before computers would win. I was more surprised it was done without anyone hearing about it sooner. Even so, it wasn't clear AlphaGo could beat Lee Sedol a few months later, since there's another big gap between lower pros and higher pros, but it did.
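
The "fusion" described above (a policy/value network guiding tree search) is roughly what an AlphaGo-style PUCT loop looks like. Here is a heavily simplified sketch in Python; the toy game, the fake network, and the constants are all placeholders of mine, not anything from DeepMind's actual system:

    # Minimal sketch: MCTS guided by a (fake) policy/value network via the PUCT rule.
    import math
    import random

    def legal_moves(state):
        # Toy game: a state is a list of chosen digits; the game ends after 3 moves.
        return [] if len(state) >= 3 else [0, 1, 2]

    def fake_policy_value(state):
        # Stand-in for the neural network: uniform move priors and a noisy value estimate.
        moves = legal_moves(state)
        priors = {m: 1.0 / len(moves) for m in moves} if moves else {}
        value = sum(state) / 6.0 + random.uniform(-0.1, 0.1)
        return priors, value

    class Node:
        def __init__(self, prior):
            self.prior = prior      # P(s, a) from the policy network
            self.visits = 0         # N(s, a)
            self.value_sum = 0.0    # W(s, a)
            self.children = {}      # move -> Node

        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # PUCT: maximize Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
        total = sum(child.visits for child in node.children.values())
        def score(item):
            move, child = item
            return child.q() + c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return max(node.children.items(), key=score)

    def mcts(root_state, simulations=200):
        root = Node(prior=1.0)
        for _ in range(simulations):
            node, state, path = root, list(root_state), []
            # Selection: walk down existing children using PUCT.
            while node.children:
                move, node = select_child(node)
                state.append(move)
                path.append(node)
            # Expansion + evaluation: ask the "network" for priors and a value.
            priors, value = fake_policy_value(state)
            for move, p in priors.items():
                node.children[move] = Node(prior=p)
            # Backup: propagate the value estimate along the visited path.
            for visited in [root] + path:
                visited.visits += 1
                visited.value_sum += value
        # Play the most-visited move, as AlphaGo-style agents typically do.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

    print("chosen move:", mcts([]))

The point of the sketch is only that the network's priors steer which branches the search visits, which is why plugging deep learning into an already-decent MCTS bot produced such a sudden jump in strength.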

A couple lessons to take from AlphaGo: we don't necessarily know what's actively being worked on around the world nor how far along it is, and problems that seem insurmountable with current computer hardware can suddenly become solved with the right fusion of existing ideas that haven't yet been fused.

Black swans are hard to predict, and disagreements over the predictions are totally normal and fine. The harder disagreement is getting people to accept that, barring something that specifically prevents it (an extinction event from an asteroid or disease, or successfully enforced bans on AI research), AGI is inevitable at some point in humanity's future. There probably should be some research done into its safety, and since the prediction problem is so uncertain, there's no pressing reason not to start now, or indeed ten years ago. "You could have used that money for feeding Africans!" is a non-argument.



