I firmly believe that some parts of the AI community vastly overestimate their own capabilities.
AGI is not 50 years away - it is unknowably far in the future. It may be a century, it may be a millennium. We just know far too little about the human mind to make any firm prediction - we're like people in Ancient Greece or 1800s England trying to predict how long it would take to develop the technology to reach the moon.
We are still at the level where we can't understand, in terms of computation, how the nervous system of a worm works. Emulating a human-level mind is so far beyond our capabilities that it is absurd to even imagine we know how to do it and "just need a little more time".
And yet most scientists would have claimed that there was no plausible, practical way to achieve a controlled fission reaction, with many flat-out stating it was impossible, at the same time that the first fission reactor was ticking away in Chicago in 1942.
I agree it's not possible to have a good idea of the answer to this question, unless you happen to be involved with a development group that is the first to figure out the critical remaining insights, but it's more in the league of "don't know if it's 5 years or 50" than "100 years or 1000".
Predictions are easy when there are no consequences, but I wouldn't make a bet with serious consequences on critical developments for AGI not happening in the next decade. Low probability, probably, but based on history, not impossible.
My reasoning is based on two things. For one, what we know about brains in general and the amount of time it has taken to learn these things (relatively little - nothing concrete about memory or computation). For the other, the obvious limitations in all publicly shown AI models, despite their ever-increasing sizes, and the limited nature of the problems they are trying to solve.
It seems to me extremely clear that we are attacking the problem from two directions - neuroscience to try to understand how the only example of general intelligence works, and machine learning to try to engineer our way from solving specific problems to creating a generalized problem solver. Both directions are producing some results, but slowly, and with no ability to collaborate for now (no one is taking inspiration from actual neural networks in ML, despite the naming; and there is no insight from ML that could be applicable in formulating hypotheses about living brains).
So I can't imagine how anyone really believes that we are close to AGI. The only way I can see that happening is if the problem turns out to be much, much simpler than we believe - if it turns out that you can actually find a simple mathematical model that works more or less as well as the entire human brain.
I wouldn't hold my breath for this, since evolution has had almost a billion years to arrive at complex brains, while basic computation started with the first unicellular organisms (even organelles inside the cell implement simple algorithms to digest and reproduce, and even unicellular organisms tend to have some amount of directed movement and environmental awareness).
This is all not to mention that we have no way right now of tackling the problem of teaching an AI the vast amounts of human common sense knowledge that is likely baked into our genes, and it's hard to tell how much that will impact true AGI.
And even then, we shouldn't forget that there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine. There isn't any fundamental reason to assume that it is even possible to be significantly more intelligent than a human in a general sort of way (there is also no reason to assume that you can't be!).
I think mixing in the human bit confuses the issue: you could have a goal-oriented AGI that isn't human-like and still causes problems (the paperclip maximizer).
Which shows some generality: the best way to accurately predict an arithmetic answer is to deduce how the mathematical rules work, and that paper shows some evidence of that.
> evolution has had almost a billion years to arrive at complex brains
There are brains everywhere and evolution is extremely slow. Maybe the large computational cost of training models amounts to speeding that computation up?
> there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine.
It's worth reading more about the topic. It's less that we'll have some human-comparable AI and then be stuck with it - more that things will continue to scale. Stopping at human level might be the harder task (or even getting something that's human-like at all).
> This is all not to mention that we have no way right now of tackling the problem of teaching an AI the vast amounts of human common sense knowledge that is likely baked into our genes, and it's hard to tell how much that will impact true AGI.
This is a good point and is basically the 'goal alignment' or 'friendly AI' problem. It's the main reason for the risk, since you're more likely to get a powerful AGI without these 'common sense' human intuitions about things. I think your mistake is treating goal alignment as a prerequisite for AGI - the risk comes precisely from the fact that it isn't. Also, humans aren't entirely goal-aligned either, but that's a different issue.
I understand the skepticism - I was skeptical too - but if you read more about it (not pop-sci, but the books from the people working on the stuff), it's more solid than you probably think, and your positions on it won't hold up.
GPT-3's performance on arithmetic is exactly one of the examples of how limited it is, and of how little the creators have tried to understand it. They don't even know if it has some (bad) model of arithmetic, or if it's essentially just guessing. I find it very hard to believe that it has an arithmetic model that works well for numbers up to the thousands but fails on larger numbers. More likely it has memorised some partial multiplication tables.
Getting back to the human bit, I'm using 'human' just as a kind of intelligence level, indicating the only intelligence we know about that can do much of anything in the world.
The paperclip maximizer idea still assumes that the AI has an extremely intricate understanding of the physical and human worlds - much better than any human's. My point was that there is no way at the moment to know if this is possible or not. The whole exercise assumes that the AI, in addition to understanding the world so well that it can take over all of our technology, could be so alien in its thinking that it would pursue a single goal to this utmost extent. I find this combination of assumptions unconvincing.
Thankfully, the small amount of knowledge we have about high-level cognition means that I'm confident in saying that I don't know significantly less than, say, Andrew Ng about how to achieve it (though I probably know far less than him about almost any other subject).
I'm not claiming that AGI risk in some far future won't be a real problem. My claim is that it is as silly for us to worry about it as it would have been for Socrates to worry about the effects of 5G antennas.
> More likely it has memorised some partial multiplication tables.
Did you read what I linked? (I don't intend this to be hostile, but the paper explicitly discusses this.) They control for memorization, and the errors are off by one, which suggests it is doing arithmetic poorly (which is pretty nuts for a model designed only to predict the next character).
(pg. 23): “To spot-check whether the model is simply memorizing specific arithmetic problems, we took the 3-digit arithmetic problems in our test set and searched for them in our training data in both the forms "<NUM1> + <NUM2> =" and "<NUM1> plus <NUM2>". Out of 2,000 addition problems we found only 17 matches (0.8%) and out of 2,000 subtraction problems we found only 2 matches (0.1%), suggesting that only a trivial fraction of the correct answers could have been memorized. In addition, inspection of incorrect answers reveals that the model often makes mistakes such as not carrying a “1”, suggesting it is actually attempting to perform the relevant computation rather than memorizing a table.”
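For concreteness, the check they describe boils down to a substring search over the training text for two surface forms of each test problem. A minimal sketch of that idea, with made-up inputs (the corpus path and problem list are placeholders, not the paper's actual tooling):

```python
# Rough sketch of the spot-check described in the quoted passage (not the
# paper's code): count how many test problems appear in the training text
# under the forms "<NUM1> + <NUM2> =" or "<NUM1> plus <NUM2>".
import random

def count_possible_memorizations(training_text: str, problems: list[tuple[int, int]]) -> int:
    hits = 0
    for a, b in problems:
        forms = (f"{a} + {b} =", f"{a} plus {b}")
        if any(form in training_text for form in forms):
            hits += 1
    return hits

# Hypothetical usage with 2,000 random 3-digit addition problems, mirroring the quote.
random.seed(0)
problems = [(random.randint(100, 999), random.randint(100, 999)) for _ in range(2000)]
training_text = open("training_corpus.txt").read()  # placeholder path
print(count_possible_memorizations(training_text, problems))
```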
> The paperclip maximizer idea still assumes that the AI has an extremely intricate understanding of the physical and human worlds - much better than any human's. My point was that there is no way at the moment to know if this is possible or not.
It seems less likely to me that biological intelligence (which is bounded by things like head size, energy constraints, and other selective pressures) would happen to be the theoretical max. The paperclip idea is that if you can figure out AGI and it has goals it can scale up in pursuit of those goals.
> I'm not claiming that AGI risk in some far future won't be a real problem. My claim is that it is as silly for us to worry about it as it would have been for Socrates to worry about the effects of 5G antennas.
I think this is a hard claim to make confidently. Maybe it's right, but maybe it's like the people saying heavier-than-air flight was impossible two years after the Wright brothers flew. I think it's really hard to be confident in this prediction either way; people are famously bad at this.
Would you have predicted GPT-3's kind of success ten years ago? I wouldn't have. Is GPT-3 what you'd expect to see in a world where AGI progress is failing? What would you expect to see?
I do agree that, given the lack of clarity about what should be done, it makes sense for a core group of people to keep working on it. If it ends up being 100 years out or more, we'll probably need whatever technology is developed in that time to help.
> Did you read what I linked? (I don't intend this to be hostile, but the paper explicitly discusses this.) They control for memorization, and the errors are off by one, which suggests it is doing arithmetic poorly (which is pretty nuts for a model designed only to predict the next character).
I read about this before. I must confess that I had incorrectly remembered that they had only checked a few of their computations for their presence in the corpus, not all of them. Still, they only check for two possible representations, so it's possible the model picked up other examples (e.g. "adding 10 with 11 results in 21" would not be caught - though it's still somewhat impressive if it recognizes that as 10 + 11 = 21).
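To make that concrete, here is a small sketch of the kinds of extra surface forms a search limited to those two patterns would miss; the phrasings are hypothetical examples, not strings known to occur in the corpus:

```python
# Sketch: other natural-language renderings of a + b that would evade a search
# for "<a> + <b> =" and "<a> plus <b>".

def extra_surface_forms(a: int, b: int) -> list[str]:
    c = a + b
    return [
        f"adding {a} with {b} results in {c}",
        f"the sum of {a} and {b} is {c}",
        f"{a} and {b} make {c}",
    ]

print(extra_surface_forms(10, 11))
# ['adding 10 with 11 results in 21', 'the sum of 10 and 11 is 21', '10 and 11 make 21']
```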
> It seems less likely to me that biological intelligence (which is bounded by things like head size, energy constraints, and other selective pressures) would happen to be the theoretical max. The paperclip idea is that if you can figure out AGI and it has goals it can scale up in pursuit of those goals.
Well, intelligence doesn't seem to be so clearly correlated with some of those things - for example, crows seem to have significantly more advanced capabilities than elephants, whales or lions (tool use, memorization of human faces). Regardless, I agree that it is unlikely that humans are the theoretical maximum. However, I also believe that the relationship between animal intelligence and brain size may suggest that intelligence is not simply dependent on the amount of computing power available, but on other properties of the computing system. So perhaps "scaling up" won't produce a massive growth in intelligence - perhaps you need entirely different architectures for that.
> Would you have predicted GPT-3's kind of success ten years ago? I wouldn't have. Is GPT-3 what you'd expect to see in a world where AGI progress is failing? What would you expect to see?
I don't think GPT-3 is particularly impressive. I can't claim that I would have predicted it specifically, but the idea that we could ape human writing significantly better wouldn't have seemed that alien to me, I think. GPT-3 is still extremely limited in what it can actually "say"; I'm even curious whether it will find any real uses beyond tasks we already outsource as brain-dead jobs (such as writing fluff pieces).
And yes, I do agree that this is a problem worth pursuing, don't get me wrong. I don't think a lot of AI research is necessarily going in the right direction, but some is, and some neuroscience is also making advances in this area.
I don't understand why it's relevant to watch a video of someone who is not Musk as a comparison to Milton.
I'm not claiming that the Head of AI Research at Tesla has said stupid things (I don't think he has). I'm saying that Musk has said stupid and nonsensical things that don't make semantic or scientific sense, and that in comparison to Milton, his hucksterism differs only in degree.
You can watch that talk and see the approach they're taking. Maybe you're more skeptical than they are about the near-term possibility, but you can see that the work and progress are real.
The AGI risk is real too.
What Milton said doesn't make semantic sense; it's not a question of timelines.
https://intelligence.org/2017/10/13/fire-alarm/