We've been automating away office jobs for a lot longer than we've been putting ML in robots to automate factory work, though.
For example, the way business mail used to work was that the bureaucrat in question would record their message onto a tape, and then send that tape off to a special department full of typists to actually turn that voice recording into a letter. That whole concept is not only gone, but it's such a foreign idea that it sounds like something you'd write for a dieselpunk novel. The moment we started putting computers on people's desks, we expected everyone to know how to type. Same thing goes for a lot of other office tasks, which are now comfortably managed by software suites we literally call "Office".
That being said, the new wave of machine-learning-powered automation scares me. Not because I'm worried that my job will be taken by software, but because said software will barely work. For factory jobs, the risks are obvious; that's why we put these robots in cages[0]. However, office jobs still involve making critical decisions, and those decisions will increasingly be handled by automation. We already know how much having to deal with Google sucks, and they are pretty much addicted to automating away all their support staff. In your manuscript example, the ML model could just start burying specific genres of books, or books with specific types of characters in them, for stupid reasons.
[0] Or if you're Amazon, you put the workers in cages, because Dread Pirate Bezos hates them.
> Not because I'm worried that my job will be taken by software, but because said software will barely work.
If said automation works like most corporate initiatives I've been a part of, it'll require 5 employees to implement, update, and maintain for every 1 that it saves, while costing millions of dollars per year to some vendor for a support license. Some workers might be let go, but they were on the chopping block anyway. A few years later the whole thing is scrapped and the cycle starts over again.
One of the most interesting automations I saw was a system intended to let the company handle more calls with fewer people by steering callers away from talking to a human.
Except the reason they had so many calls in the first place was largely because every other business process was fucked, which kinda meant you needed a human.
I call this automating feeding coal into a dumpster fire.
"Broken process vs broken execution?" is one of the first questions everyone should keep in mind during automation discovery. But it's (usually) a nuanced call.
This is why automation to cut costs in a lax labor market is much less interesting than automation in a tight and tightening labor market.
The fear of rising wages and scarce workers, not just an idle "gee, more profits would be nice", is what makes automation actually go, vs. floundering out as a B2B steak-dinner grift among middle managers.
<<Not because I'm worried that my job will be taken by software, but because said software will barely work. For factory jobs, the risks are obvious; that's why we put these robots in cages[0]. However, these office jobs are still making critical decisions that will increasingly be handled by automation.
Just to build on this a little: even if they do work, the general population will have little to no understanding of how they work. They will be little black boxes that govern our daily lives, with little to no way to correct them if things go awry. As much as I am amazed by what ML can do already, we need some basic customer-facing documentation on how it is supposed to work.
Not just general population, but actual employees!
We've already seen this with legacy code. A manual process featured someone you could have a conversation with; how many 10+-year-old apps are there with some quirk that nobody remembers or understands?
Automation is going to make that worse, because it goes from "that process 3 people know about" to "that process no one has thought about in 5 years."
This is one of the incoming dystopian realities. Automation will be extremely pervasive in daily life, especially in urban environments. But because it will be dumb, and there will be no profit in making it smarter or more respectful in its interactions with humans, we will be the ones corralled to give the automation free rein. Whatever noble intentions the small minority working on this might have, it will mainly be another gut punch to human dignity in the name of capitalism.
I know this is a bit of a low-effort argument, but every ML-"enhanced" product I use at my job is a variation on "you entered data xyz - here are some more examples where people entered xyz, and this is the result they got: ..." or "you are entering data at time ab:cd - here is what other people searched for at similar times: ...".
“For example, the way business mail used to work was that the bureaucrat in question would record their message onto a tape, and then send that tape off to a special department full of typists to actually turn that voice recording into a letter. That whole concept is not only gone, but it's such a foreign idea that it sounds like something you'd write for a dieselpunk novel. The moment we started putting computers on people's desks, we expected everyone to know how to type.”
It’s true that executives now do their own typing, but that is not automation. It’s actually a rare case of modern work becoming less specialized, with a whole category of highly specialized workers (typists) ceasing to exist.
If the executive uses voice-to-text technology, then that would be a case of automation.
If you try using a vintage typewriter you will reconsider. It was a ton more work than now.
(1) Typewriter keyboards were actually physically strenuous... find an old manual typewriter, type a couple of pages on it, and see whether you feel like typing a dozen more. This was rectified beginning in the 60s, I think, but manual models were still around for a while due to cost.
(2) Even once electric typewriters were invented, dealing with minor typos, let alone more major textual surgery, remained a huge hassle until the personal computer / full word processor came about, allowing on-screen editing. Just imagine typing most of a page only to realize you forgot a sentence near the middle, or even made a small typo. That is around when typists stopped being a thing, because then, and only then, had typing become so automated that a reasonably skilled bureaucrat wasn't really saving much time (and was losing latency) by using the typing pool, especially for brief, urgent memoranda.
It is a case of automation working so well, you don't even notice it at all and thus think nothing was automated, until you think about it in more detail.
The automation with a word processor is not the input (the typing), it's the page layout and reflow, and being able to edit and get WYSIWYG before committing to the printed page.
My best typing experience, typewriters manual or electric, or computer keyboards, was the IBM Selectric. Second best was on an AT&T 3270 (!?!) with a mechanical clicking keyboard, similar to original IBM PC, but even better. I mean using a 3270 isn't great fun but the typing was great.
To extend your excellent summary of office automation, you can think of most government functions as a manually operated AI. There are piles of rules and regulations to administer, and that is ripe for automation. However, can you imagine the horror of, say, a machine-efficient IRS? The only thing that makes a lot of the regulatory regime survivable is the inefficiency of the bureaucracy. A hyper-efficient bureaucracy would be suffocating.