I thought about this article in the context of the recently reposted piece "There’s No Fire Alarm for Artificial General Intelligence" [1]. While it may be true that we haven't been able to automate new kinds of jobs, maybe we are just moments away from having an AGI that would make that kind of automation easy. That would contradict this article's claim that the narrative around an incoming automation revolution is bullshit. Just as likely, though, is that we actually are decades out from AGI and that kind of automation.
> maybe we are just moments away from having an AGI that would make that kind of automation easy
What evidence do we have for being closer to AGI today than we were twenty years ago?
My point isn't that AGI is useless. But it's currently an article of faith. Faith is fine. Faith is essential. But it isn't a good basis for making huge allocations of resources.
His assertion that we could always be one discovery away isn't false; it's just irrelevant. The article notes that there's no evidence of any past effect of "increasing automation" and assumes the near future will look like the recent past. Just inventing AGI wouldn't actually change much the next day; it would just be a new paradigm, and things would eventually change... so I agree with TFA.
(Also, AGI would not be so useful. Humans are notoriously difficult to work with, and we seem to have General Intelligence. It would have to have a strange concoction of motivations to be immediately, or even eventually, useful for anything that we would want to automate).
I definitely don't mean it as in "AGI is invented; humans need not apply", but in the sense of the point you've covered:
> The article notes that there's no evidence of any past effect of "increasing automation" and assumes the near future will look like the recent past.
My point was that predictions of AGI being so far away may be false, as explained in the article I linked. If they were false and we ended up with AGI, I don't think the article's assumption about the near future would apply. This does not need to happen overnight; it can take time, but the kinds of jobs we could automate would change drastically over time (and the time scale in the article is two decades long!)
On your second point, though: part of why humans are difficult to work with is that we have our own emotions and motivations, and sometimes decide not to do things other people want. An extremely intelligent AI that did only what it was told to do would probably not be as difficult to work with?
The idea that intelligence is disconnected from the "baser" instincts - emotion, the need to eat, the need to reproduce, the need for social recognition, etc. - is probably just false, akin to the idea that there's a useful measure, IQ, that predicts the ability to solve the world's problems (hint: nope). Our story-telling mind can construct all kinds of intelligent hypotheses, but it probably evolved to make us appear rational to our fellow people and to attribute agency where possible. Our wander-through-the-woods mind can visualize and hypothesize about spatial relations and transformations, etc.
There's much to do for AGI, but I believe that motivation-engineering will be the hardest part. Morality is intrinsically connected to our role as sorta-hive-minded monkeys.
Caveat: all of the above is poorly presented opinion drawn from the following resources:
- Learning How to Learn on Coursera
- Buddhism and Modern Psychology on Coursera
- The Righteous Mind by Haidt
- The Happiness Hypothesis by Haidt
- Bullshit as an honest indicator of intelligence
Building a machine using our evolutionary history as a prior design is the only way we know how to produce general intelligence, but all the strange, varied, "emotional" baggage that goes with it means we never would want to. Why would a computer be afraid of snakes? If what you want is a computer that can come up with solutions you wouldn't have imagined, then you need clever search, problem specification, and significant computation, not general intelligence. If you want to automate something, you may need learning, but you don't need intelligence.
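To make the "clever search plus problem specification" point concrete, here's a minimal sketch in Python (the objective function, step size, and restart count are all made up for illustration; nothing here comes from the article): random-restart hill climbing over a bumpy function. The search only ever sees the problem specification and a compute budget, and there's no general intelligence anywhere in the loop, yet it can still surface optima nobody hand-picked.

```python
# Minimal sketch: "problem specification + search + compute", no intelligence.
# The objective and all parameters below are illustrative placeholders.
import math
import random

def objective(x: float) -> float:
    # A bumpy "problem specification" with many local optima and one
    # global optimum the search has to stumble onto.
    return math.sin(5 * x) + 0.3 * math.cos(17 * x) - 0.05 * (x - 2.0) ** 2

def hill_climb(start: float, step: float = 0.05, iters: int = 5000) -> tuple[float, float]:
    x, best = start, objective(start)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        value = objective(candidate)
        if value > best:  # keep only strictly improving moves
            x, best = candidate, value
    return x, best

def search(restarts: int = 200) -> tuple[float, float]:
    # Random restarts trade raw compute for coverage of the search space.
    return max((hill_climb(random.uniform(-10, 10)) for _ in range(restarts)),
               key=lambda r: r[1])

if __name__ == "__main__":
    x, value = search()
    print(f"best x = {x:.3f}, objective = {value:.3f}")
```

Real systems swap in smarter search (evolutionary methods, gradient-based optimizers, tree search), but the division of labor is the same: a human specifies the problem, the machine spends compute.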
From my perspective, there are essentially two directions from which we can reach AGI.
One is that we may gain a sufficient understanding of how minds work and become able to implement one. IMHO it's a reasonable assumption that we currently have the hardware capability to provide the computing power for a mind with human-equivalent intelligence, if only we knew how that mind software should be structured - but we simply don't really know how minds work, how they should work, or how to build them. There's a lot of work being done there and some knowledge is being gained, but it seems that incremental advances won't be sufficient and progress will happen only with a major breakthrough, which we can't predict or expect any time soon - or, possibly, ever; there certainly are people arguing that.
The second direction, on the other hand, does not presume a theoretical understanding of how intelligent minds truly work at a high level, but relies on brute force and/or simulation of the low-level constructs in human brains, which we can understand and implement without needing a theoretical breakthrough. This approach requires immense computational power far beyond our current capability, so it's completely unrealistic to attempt in the near future - but twenty years of Moore's law has brought us twenty years closer to a brute-force solution to AGI. If we look at the estimates made in 2000 or earlier (I seem to recall reading brain-brute-force estimates from the 1980s, but I can't find them now. Perhaps Kurzweil was writing something like this already back then?) about the expected requirements for computing power and the expected progress, then we're pretty much on track; the computing power lines cross at something like 2050-2060. Any hopes of "AGI in 2020" were based on the first approach, which requires a breakthrough in understanding.
So from this perspective AGI is inevitable even without any progress whatsoever in "true intelligence" research, as long as our physics research and engineering keep delivering improvements to raw compute power. We'll reach that point someday, likely within my lifetime, even without a breakthrough. But if we do start to understand how minds should be properly constructed, that could easily shave many orders of magnitude off of the computing power requirements and accelerate the arrival of AGI by decades.
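To put rough numbers on that "lines cross" argument, here's a back-of-the-envelope sketch in Python. Every figure in it is an assumption I'm making up for illustration - published brute-force brain-emulation estimates span many orders of magnitude, and sustained doubling is itself uncertain - with placeholders picked only so the output lands near the 2050-2060 crossing mentioned above.

```python
# Back-of-the-envelope sketch of the "compute lines cross" argument.
# Every number below is an assumption chosen for illustration only:
# brute-force brain-emulation estimates vary by many orders of magnitude,
# and a steady Moore's-law doubling is itself far from guaranteed.
import math

BRAIN_FLOPS_NEEDED = 1e20     # assumed brute-force emulation requirement
AFFORDABLE_FLOPS_NOW = 1e15   # assumed compute available today at a "project" budget
DOUBLING_YEARS = 2.0          # assumed effective doubling time
THIS_YEAR = 2020

def years_until(target: float, current: float, doubling: float = DOUBLING_YEARS) -> float:
    """Years until `current` compute grows to `target` at the assumed doubling rate."""
    return doubling * math.log2(target / current)

# When do the lines cross with no algorithmic progress at all?
crossing = THIS_YEAR + years_until(BRAIN_FLOPS_NEEDED, AFFORDABLE_FLOPS_NOW)
print(f"crossing year (no breakthroughs): ~{crossing:.0f}")

# How much does a breakthrough that shaves three orders of magnitude
# off the requirement move that date?
saved = years_until(BRAIN_FLOPS_NEEDED, BRAIN_FLOPS_NEEDED / 1e3)
print(f"a 1000x algorithmic saving buys ~{saved:.0f} years")
```

With these made-up inputs the crossing lands around 2053, and a 1000x saving from better understanding pulls it in by roughly 20 years, which is all the "decades" claim above needs.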
> What evidence do we have for being closer to AGI today than we were twenty years ago?
None; I agree. But we have no evidence that we are 20 years away from it either. The article I linked gives reasoning for why we shouldn't assume the latter, and should prepare for the former.
Resources are scarce; decisions have to be made about where to use them.
The likelier severe risks are nuclear war, pandemic, volcanism, earthquakes and tsunamis, multinational infrastructure cyberattack, bioweapons, nonlinear climate change (e.g. a step change in strength and duration of heatwaves making most of North India and Pakistan uninhabitable), and multiple food supply system collapse (e.g. widespread outbreaks of severe wheat, rice, potato and corn diseases).
In contrast to those risks, which we understand quite well, humanity has no idea how an AGI might be made. Since a putative AGI will be made by humans, this is a big stumbling block for anyone who proposes we need to divert resources to AGI risk.
[1]: https://intelligence.org/2017/10/13/fire-alarm/ (recent HN discussion: https://news.ycombinator.com/item?id=23401328)