“If you believe that AI will literally automate all jobs, literally, then it makes sense for a company that builds such technology to … not be an absolute profit maximizer.
It's relevant precisely because these things will happen at some point.”
Can someone explain how that's possible? AI will plant/harvest crops and be a butcher? I don't follow.
A job could still be performed by a human and yet be automated, in the sense that virtually all decision-making happens elsewhere. That would turn pretty much every form of labor into assembly-line work, with human supervisors present only to wear a badge of authority, affirm the computer's decisions, and formally transact hiring and firing.
As long as humans represent the lowest-cost actuators, perhaps we'll have an economic role suited to manual labour. But that's not necessarily a situation that will last very long.
Nor one that will enable particularly humane jobs. Manual labor tends to be hard on health, and competing only on cost will be a race to the bottom.
Even the rough path after the singularity isn't fathomable to our lower intelligence. Once you have something that can self-improve, getting smarter and smarter on the order of seconds, there's no way of telling how it achieves what it wants to achieve. It's the power imbalance of an ant trying to run to safety while there's an F-16 overhead.
Humans are so used to being top of the food chain, there is scant understanding in our culture of what it means to be in the middle of it, like most organisms.
we are reading 'panic' the same way :) fight panic all you want, but AI rendering the vast majority of humans redundant is a far more pressing problem to fight, I would say.
A great many people disagree and do not think that is a likely or even remote outcome. If they are correct, then you are shouting "FIRE" in a crowd for no reason.
If there's a lot of smoke coming, people are running out of the building and you can see an ominous red glow in the windows, shouting "FIRE" is the right thing to do even if we are not going to be engulfed in flames this very second or the next. The potential costs given the evidence we all have are simply not comparable.
What smoke? So far the predictions of the AI x-risk folk haven't panned out the way they said they would. In fact, the opposite has happened. What smoke are you referring to?
Apparently, OpenAI's board decides when AGI has been achieved. When it has, all bets are off with Microsoft (and presumably other sponsors?). Any exclusive IP or deals no longer apply.
Indeed, but it's probably kind of relevant, as Ilya is one of the few left at the helm of OpenAI, so it's useful to know what his thoughts are, since the board didn't get rid of him.
tl;dr: the author doesn't think AGI is possible (or somehow thinks it won't affect anything) but Ilya does think it's possible, and the author is concerned by this for some reason.