His assertion that we could always be one discovery away is not false; it's just irrelevant. The article presents no evidence of any past effect of "increasing automation" and assumes the near future will look like the recent past. Just inventing AGI wouldn't actually change much the next day; it would just be a new paradigm, and things would change eventually... so I agree with TFA.
(Also, AGI would not be that useful. Humans are notoriously difficult to work with, and we seem to have General Intelligence. An AGI would have to have a strange concoction of motivations to be immediately --- or even eventually --- useful for anything we would actually want to automate.)
I definitely don't mean it as in "AGI is invented; humans need not apply", but in the sense of the point you've covered:
> The article presents no evidence of any past effect of "increasing automation" and assumes the near future will look like the recent past.
My point was that predictions of AGI being so far away may be false, as explained in the article I linked. If they turn out to be false and we end up with AGI, I don't think the article's assumption about the near future will hold. This doesn't need to happen overnight; it can take time. But the kinds of jobs we could automate would change drastically over time (and the time scale in the article is two decades long!)
On your second point, though: part of why humans are difficult to work with is that we have our own emotions and motivations and sometimes decide not to do things other people want. An extremely intelligent AI that did only what it was told would probably not be as difficult to work with?
The idea that intelligence is disconnected from the "baser" instincts (emotion, the need to eat, the need to reproduce, the need for social recognition, etc.) is probably just false. It's akin to believing there's a single useful measure, IQ, that predicts the ability to solve the world's problems (hint: nope). Our story-telling mind can construct all kinds of intelligent hypotheses, but it probably evolved to make us appear rational to our fellow people and to attribute agency wherever possible. Our wander-through-the-woods mind can visualize and hypothesize about spatial relations and transformations, and so on.
There's much still to do before AGI, but I believe that motivation-engineering will be the hardest part. Morality is intrinsically connected to our role as sorta-hive-minded monkeys.
Caveat: all of the above is my poorly presented opinion, drawn from the following resources:
- Learning How to Learn (Coursera)
- Buddhism and Modern Psychology (Coursera)
- The Righteous Mind by Haidt
- The Happiness Hypothesis by Haidt
- Bullshit as an honest indicator of intelligence
Building a machine using our evolutionary history as a prior design is the only way we know of to produce general intelligence, but all the strange, varied, "emotional" baggage that comes with it means we probably never would want to. Why would a computer be afraid of snakes? If what you want is a computer that can come up with solutions you wouldn't have imagined, then you need clever search, a problem specification, and significant computation, not general intelligence. If you want to automate something, you may need learning, but you don't need intelligence.
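To make the "search, not intelligence" point concrete, here's a toy sketch (my own illustration, nothing from the article; the problem, names, and numbers are all made up): a knapsack-style problem specification plus dumb random-restart hill climbing will happily find decent answers nobody sat down and imagined, with no motivations to manage.

```python
# Toy sketch: "clever search over a problem specification" with zero general
# intelligence. Problem: pick a subset of items maximizing value under a
# weight budget, solved by random-restart hill climbing.
import random

ITEMS = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(30)]  # (weight, value)
BUDGET = 50

def score(selection):
    """Objective from the problem specification: total value, or -1 if over budget."""
    weight = sum(w for (w, _), chosen in zip(ITEMS, selection) if chosen)
    value = sum(v for (_, v), chosen in zip(ITEMS, selection) if chosen)
    return value if weight <= BUDGET else -1

def hill_climb(restarts=20, steps=2000):
    """Dumb-but-effective search: flip one item at a time, keep non-worsening moves."""
    best, best_score = None, -1
    for _ in range(restarts):
        current = [random.random() < 0.3 for _ in ITEMS]
        current_score = score(current)
        for _ in range(steps):
            i = random.randrange(len(ITEMS))
            candidate = current[:]
            candidate[i] = not candidate[i]
            candidate_score = score(candidate)
            if candidate_score >= current_score:
                current, current_score = candidate, candidate_score
        if current_score > best_score:
            best, best_score = current, current_score
    return best, best_score

if __name__ == "__main__":
    _, value = hill_climb()
    print("best value found:", value)
```

Scale the specification and the compute up and you get "surprising" solutions, still with no emotions, no fear of snakes, nothing that needs motivating.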