We're a long, long, long way from machines taking over white collar work to any reasonable degree. Even extremely basic commands are routinely misunderstood by voice assistants.
What is going to continue to happen is what is already happening now: less work wasted on bullshit, more potentially interesting findings surfaced for humans to examine. Same type of thing I was working on in 2011 (helping lawyers with discovery, edit: no, I confused things. Back then I was helping companies scan internal communication for automatic skill mapping), only better.
We’re a long way away in technology time but not in society time. We might be as far away from this world as we are from the creation of the Web — but society isn’t ready to have 10x the number of unemployed people in 25 years. Political/economic/social systems will not survive — the hope is that they get replaced nonviolently rather than in a bloodbath.
I agree. Mass unemployment is not a concern we can start to address yet, because we do not know the nature of the coming unemployment; we're talking about something 20 or 40 years away.
Meanwhile, we have real fucking problems right now:
1. Sky high housing prices.
2. Extreme wealth disparity.
3. Environmental degradation.
4. Cybersecurity failures everywhere, just as everything goes autonomous or otherwise cyberphysical.
5. Information operations threaten both democracy and the openness of the internet.
You can still make $100k a year editing English. The demand for talent is sky high; it's investments in education and society that will help us, not fretting over how AI will take away jobs long before we can do anything about it.
> You can still make $100k a year editing English.
The only employers paying that much are old-school publishers that are a bit of a hold-out but will eventually give way. As a translator often doing full-length books, I have witnessed some big publishing names severely slashing the amount they pay for editing (by doing things like outsourcing the job to India), and they don’t worry about any drop in quality because it is felt that in a web-heavy world, the public no longer cares too much.
It is depressing as fuck for me as a translator, because I put a lot of effort into my translations and I wish they would get the same level of love at the next stage of the publishing process, but this is the trend of the future.
Also, I feel lucky to still have work as a translator because many clients today are running the text through Google Translate and then just paying a native speaker to clean up the text for less money than they would have to pay a translator. Obviously that is not common in the mainstream publishing world, but it is increasingly happening with the marketing materials and technical manuals that are a translator’s daily bread and butter. “AI” is already hitting businesses based on human language hard.
Perhaps mass unemployment and extreme wealth disparity are the same problem. It's known that the productivity gains we've had from computers have mostly accumulated at the top. Workers are more productive, but their pay does not reflect that.
Wealth is moving around less because we don't need to spend (i.e. distribute) as much as we have in the past to generate more wealth. So I think wealth disparity is a strong predictor for unemployment. I also think unemployment is the wrong word because it paints a false dichotomy. There's not a huge lifestyle difference between making $20,000 a year and $0 a year (since the government steps in), compared to the difference between $20,000 and $80,000 a year, or between $80,000 and $200,000.
One reason housing prices are so high right now is that people (mistakenly, IMO) view houses as an investment. No one wants to lose money on a sale. Another reason (for some areas) is probably the productivity gains: since we need to distribute less to make more wealth, we've further concentrated our economy into smaller pockets of land (i.e. cities).
I agreed with everything in your comment until you mentioned the investment in education. To handle education well, we do need to do everything we can to understand the changing demand in the upcoming decades.
Most government attempts to provide targeted education based on expected future demand haven't worked. Educators and bureaucrats are terrible at predicting the future (and HN users aren't any better).
> Meanwhile we have real non-imaginary problems to focus on.
Do you really think it's not worth thinking about the future and trying to predict problems that might occur, and preparing for them beforehand? Isn't not doing this one of the reasons we have real problems now? (Only some real problems, of course, others have nothing to do with this).
No one's talking about infinite problems except you.
Unemployment is a pretty bloody obvious problem that might occur if automation increases.
We also have historical data on jobs that have disappeared, with varying levels of success in retraining. This time there are legitimate and serious doubts that the employees affected by automation will be able to retrain at all, because several of the low-skill jobs that used to be retraining possibilities are also targeted by automation.
Automation is at the highest level in history. Unemployment is at record lows in modern peacetime history (with exceptions like Southern Europe which are not hotbeds of automation). If automation inevitably destroys jobs, how can these facts be reconciled?
Unemployment has been around as long as there has been employment. It's not at all obvious that structural unemployment will increase in the future. You could make an equally plausible argument that it will decline. At this point we're all just guessing.
> The number of problems that might occur is infinite. But our resources and time are finite. So we have to prioritize.
I completely agree!
The reason I think this is an issue worth thinking about is:
1. Logic and common sense make a good argument for automation eventually replacing all jobs.
2. Most arguments against this amount to "historically this never happened even when people thought it would". This is a weak argument for several reasons (the period we're talking about is short, there's no reason to think history has to repeat itself here, etc.).
3. Many smart people think this is a problem, including economists and people from other fields.
If this doesn't make this problem worth thinking about, what does?
10x is a hyperbolic figure (imo), but I disagree that it is pure speculation to think about job displacement from increasingly sophisticated ML systems.
We definitely do have real, non-imaginary problems to focus on, but we shouldn't just ignore future problems. That's hyperbolic discounting at its worst.
> society isn’t ready to have 10x the number of unemployed people in 25 years
Nobody can predict those numbers; any number about unemployment is essentially made up, and fallacious as well, according to the lump of labour fallacy.
Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social system.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social system.
Hasn't it? It completely killed off mercantilism and mostly killed off monarchism.
It wasn’t the industrial revolution that killed monarchism. The home of the industrial revolution was and is still the United Kingdom, after all.
Advances in politics made monarchies less relevant, not industrialization by itself, with France, the US and later Russia being the key examples. The US and Russia industrialized later than France and still-monarchies like the UK, Belgium, Denmark etc. Industrial giant Germany lost its king by losing a World War: despite industry, not because of it.
I think this time is different because the rate at which technology can improve keeps accelerating, while society's and people's ability to adapt is not increasing at a commensurate rate.
The change is also changing. The first waves of automation reduced demand for brute and repetitive physical labor - almost everyone impacted was eligible for the less physical jobs made available by automation.
This move towards automating thinking will not follow the same trend.
You can't extrapolate the rate of technology improvement based on current trends. It may slow down in the future. For example we're already hitting serious obstacles in basic physics when it comes to semiconductors.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social system.
Communism was a direct response to the wealth inequality of the industrial revolution. That was one of the most disruptive political/economic/social upheavals of all time.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social system.
It not only killed off most pre-capitalist systems, it also mostly killed off the original system named “capitalism” (mostly in favor of modern mixed economies.) It also both spurred and then killed off Leninist Communism.
Very much agreed with this comment. When people look at today's technological displacement of people and say "yeah, but it happened in the industrial revolution, people will as always find new work", I truly think this time is VERY VERY different because of the pace at which technological gains are increasing.
I do not think a society in which people have, until recently (and in many cases still today), worked in one job function (often at one company) for 30+ years is ready for the fast-paced change of needing to switch jobs every few years.
Look at the rate at which technologies and frameworks are displaced within tech. Try being an Angular developer for 20 years... I think people in software have come to expect rapid change because it's the nature of the game, but that rate of change is an unreasonable thing to ask every industry everywhere to endure.
> I truly think this time is VERY VERY different because of the pace at which technological gains are increasing.
The pace is not increasing. There have been changes just as radical throughout the last few hundred years.
Two "poster children" of technological advancement (AI and nuclear) are always just around the corner, but never here. And even if they were here, there would be no catastrophe, just as there wasn't one with green energy or transistors.
> Look at the rate at which technologies and frameworks are displaced within tech. Try being an Angular developer for 20 years...
This is irrelevant; Angular or not, the software engineering market is evergreen, even with poorly skilled software engineers (I do know some).
This example is actually a counterargument to the thesis: displacement of technologies in this field did not lead to displacement of jobs.
In the last 25 years (i.e. since the late 1990s), technology has made disappointing progress even in well-defined areas like speech recognition. We may be decades away from a breakthrough; we may be centuries away.
Exactly! It all seems so special now, but IBM was selling voice-processing software that supposedly helped you write a Word document. That still isn't possible even today, and voice recognition is one of the biggest obstacles AI has to overcome before being really effective.
For example, can Siri or Alexa delete a specific phone number from a contact? Something like "OK Google, delete the home number from X" is not possible right now.
Even extremely basic commands are routinely misunderstood by humans, too.
With a human you can have a conversation that will hopefully avoid similar confusion in the future, which eases the frustration for many. How long before machines allow us to do the same? My bet is months, a year or two at the outside. After all, we already have conversational interactions. Building on those and tying an intent interpretation to a clarifying correction provides high-information training data. There's already a swath of old, well-understood ML techniques -- mean subtraction, singular value decomposition, boosting, etc. -- aimed at differentiating information from data. Instantaneous classification of responses as errors, followed by information on a better response? That sounds like training gold.
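To make "differentiating information from data" a bit more concrete, here is a minimal, self-contained sketch (a toy example of my own, not anything from an actual assistant's pipeline) of two of the techniques mentioned above, mean subtraction and singular value decomposition, applied to a synthetic matrix of "utterance embeddings"; the sizes, the noise level, and the three underlying intents are all made up for illustration.

    # A toy sketch, not anyone's production pipeline: mean subtraction plus
    # truncated SVD used to split the dominant structure ("information")
    # in a data matrix from the noise around it.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: 200 utterance embeddings of dimension 50, built as a
    # rank-3 signal (pretend there are 3 underlying intents) plus small noise.
    n_samples, n_features, rank = 200, 50, 3
    signal = rng.normal(size=(n_samples, rank)) @ rng.normal(size=(rank, n_features))
    X = signal + 0.1 * rng.normal(size=(n_samples, n_features))

    # Mean subtraction: remove the per-feature average so the decomposition
    # models variation around the mean rather than the mean itself.
    X_centered = X - X.mean(axis=0)

    # Singular value decomposition: the leading singular vectors carry most of
    # the structured variation; the trailing ones are mostly noise.
    U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)

    # Keep only the top-k components, a crude "information vs. data" split.
    k = rank
    X_denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"top {k} components explain {explained:.1%} of the total variance")

On this synthetic data the top few components end up explaining nearly all of the variance, which is the point: keep the structure, discard the rest, and the clarifying corrections described above are exactly the kind of labels that tell a supervised learner which structure matters.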
"Even extremely basic commands are routinely misunderstood by voice assistants"
I think the three most popular voice assistants (Siri, Google & Alexa) do for the most part get the basic commands right and even some complex ones.
For me, I think the struggle with using voice assistants is the human interaction. When I am in a social setting or walking outside on the street, I can't see people using it. When I am alone, or maybe with a significant other, I can see it being used a lot. I rarely see people actually using Siri on a train or when they are out and about.
I tried "put the world cup on" with Google this morning. It didn't work. I have YouTube TV and a Chromecast, so it has access to everything it needs; it just didn't know what I meant.
There's room to deskill white collar work, and to a magnitude people may find surprising. I think science and marketing are most at risk from automatic causal modelling.