The problem I have with this idea is that ML systems usually only repeat patterns they've seen before. Sure, you can get quirky output by going to unused portions of the latent space, but that's also more likely to yield degenerate results (things that look 40% shirt, 40% pants, and 20% "other"). While these types of results are usually the most interesting, they're also the most likely to require professional "cleaning"; removing artifacts in a way that produces good results still requires skilled labor. I would expect an ML system to be able to do fine adjustments like spacing and repetition of minor motifs, coarse adjustments like combining motifs in novel ways, or even simple block patterns like the one in the article, but it's unlikely you'll find novel motifs solely via the latent space. Even for the most interesting potential use case, novel combinations of existing motifs, you're likely going to need human discretion as a final pass.
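To make the latent-space point concrete, here is roughly what "combining motifs" via interpolation looks like. Everything below is a stand-in (an untrained toy decoder and random latent codes), not an actual fashion model, but the alpha > 1 step is where the degenerate "40% shirt, 40% pants" blends tend to show up:

    import torch
    import torch.nn as nn

    latent_dim = 64
    # Stand-in decoder: latent vector -> 32x32 "design". A real system would be a
    # GAN or VAE decoder trained on garment images.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 32 * 32), nn.Sigmoid(),
    )

    z_shirt = torch.randn(latent_dim)   # latent code of an existing "shirt" design
    z_pants = torch.randn(latent_dim)   # latent code of an existing "pants" design

    # Walk between the two designs, then past them into less-used latent space.
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0, 1.5):
        z = (1 - alpha) * z_shirt + alpha * z_pants
        design = generator(z).reshape(32, 32)
        print(f"alpha={alpha:.2f}  mean pixel={design.mean().item():.3f}")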
When a human designs something, there's often intent involved; there are design constraints and social context involved. I don't expect statistical ML (which is good at interpolation) to cross these gaps without integration with symbolic ML (which is good at extrapolation).
Though maybe I'm biased since I work in a symbolic AI lab.
Humans often only repeat patterns they've seen before as well, and quirky output comes from wild guesses and requires significant skilled labor to clean up.
For every new design you see, there may be 100 or 1000 that were thrown away. All of those discarded designs cost money, regardless of whether they were used or not.
Sure, an algorithm can pick the low-hanging fruit you describe, but what we really care about is computers competing with the best that humans can make.
I mean, it's going to be a long long time before we get an AI on the level of Iris van Herpen or Chalayan. It seems to be mostly about colour and minor variations on extremely basic articles of clothing, barely even "fashion" in the sense of innovation.
I agree that statistical ML is only good for interpolation, but I also feel that the set of human behaviors that are reducible to interpolation is larger than we realize. Even high order human behaviors like academic research or art criticism contain huge chunks of pattern recognition.
From an art/fashion perspective, this could be interesting in a playful way if the industry saw reason to 'mechanically play' with machine designs in the same way it does with its own history and the history of art. This would make production and consumption function more like a bidirectional stream of information, rather than just a unidirectional, cyclic aggregation of historical data.
Yeah, the “rotate your device to landscape” definitely shows a lot of polish. /s
A simple fix would just be to inline all the content, like all the little ol’ two-column responsive sites do. Instead of that obvious solution, they put a modal on it and compromised their content.
Perhaps the risk isn’t that technology will actually take over white collar jobs, but that managers will use minimally viable examples of technology to layoff massive numbers of people for short-term gains.
Of course, when the companies then implode because the technology can in no way make up for the resulting skill drain, said managers will have moved on to a new position, having sold the layoffs/technology as evidence of their “superior business acumen.”
There’s plenty of uninspiring (not sure about high skilled) white collar work...I would not be surprised if a lot of what early career lawyers, investment bankers, accountants, consultants do could be automated. I’m also bullish on radiology being the first area of medical practice that can be largely automated by machines.
In the future (and even now), careers are going to be defined by how well you can form relationships (and therefore sell), i.e. the things that will be hardest for machines to do.
I’m pretty bullish on radiology going first too. I keep telling my radiologist friend but she refuses to acknowledge that the computer is already better and faster than her.
But she also makes a good point — even if the computer is better, in today’s lawsuit happy America, it will be a long time before anyone will accept a result that wasn’t at least reviewed by a human.
Absolutely agreed on radiology. Earlier this year 3 different radiologists read my c-spine MRI and said I have slight bulging at C5-6 and nothing more. 5 different neurosurgeons* said sure, the C5-6 is bad, but the real issue is the C6-7 herniated disc impinging on my nerves. I actually asked 2 of those 3 radiologists to re-read the MRI and look for the C6-7 herniation, and they couldn't find anything. All of the surgeons picked it up immediately.
I was told by a surgeon that this happens because radiologists are generalists (looking for strong evidence of different types of issues all over the body) while surgeons are trained to know the specific issues that happen in few parts, even if they don't show up clearly in MRI/x-rays.
AI should be able to take data from all the specialists to make a better generalist than human-trained radiologists. An integrated AI system should immediately read an MRI/x-ray/ultrasound and spit out possible issues. I can imagine an x-ray or ultrasound video feed hooked to the cloud that shows possible diagnoses in real time and highlights the areas of concern. Ultrasounds are safe, so this could even be a consumer device. Just as 3D ultrasounds and 23andMe are for 'entertainment' and not medical solutions, ultrasound-with-AI could be a good tool for at-home what-ifs. It could be a great prenatal monitoring device.
* I know a lot of surgeons personally. Didn't cause a trillion dollar insurance claim.
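For what it's worth, the "highlight the areas of concern" part doesn't need anything exotic; a crude sketch using occlusion sensitivity is below. The predict function here is a toy stand-in, not a medical model - in practice it would be a classifier trained on labelled scans.

    import numpy as np

    def occlusion_heatmap(image, predict, patch=16, stride=8):
        """Slide a blanked-out patch over the image and record how much the
        model's score drops; big drops mark the regions the model relies on."""
        h, w = image.shape
        base = predict(image)
        heat = np.zeros((h, w))
        counts = np.zeros((h, w))
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                masked = image.copy()
                masked[y:y + patch, x:x + patch] = image.mean()  # occlude this patch
                drop = base - predict(masked)
                heat[y:y + patch, x:x + patch] += drop
                counts[y:y + patch, x:x + patch] += 1
        return heat / np.maximum(counts, 1)

    # Toy stand-in "model": scores a frame by the brightness of its centre crop.
    def toy_predict(img):
        h, w = img.shape
        return float(img[h // 4:-h // 4, w // 4:-w // 4].mean())

    frame = np.random.rand(64, 64)               # placeholder for one ultrasound frame
    heat = occlusion_heatmap(frame, toy_predict)
    print(heat.shape, round(float(heat.max()), 4))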
As a doctor (not a radiologist), I believe your example shows the opposite; that is, it shows how hard it will be to automate radiology.
Radiology requires a "theory of the body", so to speak. You can't just look at the image in isolation. You often need detailed knowledge of the patient's clinical situation, and some actual reasoning. My guess is that that's why the surgeons got it right in this case (they are more familiar with the complaints of the patient and with the "live" anatomy of that region).
This doesn't mean that radiology can't be automated. It just means that to be a good radiologist, you might need to be a general artificial intelligence, capable of graduating from medical school.
This is different from something like classifying moles into benign, malignant, and high-risk. That's something that can be determined from the pixels of a picture (even by human dermatologists, through experience or by following certain simple algorithms), and has no relationship to the rest of the patient. This means that automating mole classification is kinda like automating chess. Automating radiology looks more like automating the command chain for WW2. (There's a rough sketch of that pixels-only setup at the end of this comment.)
On the other hand, pathology (looking at tissue samples through the microscope) seems much easier to automate. It relies heavily on pattern recognition and, IMO (I'm not a pathologist either, although I've spent time in a pathology lab), it's less dependent on the clinical data of the patient. It's almost as if the doctor were looking at the image and nothing else, and that kind of pattern recognition is something that might be automated. This is of course a simplification, and sometimes clinical judgement is important even in pathology.
None of this means that medicine can't be automated. I'm just trying to convey some of the difficulties you might have in automating radiology, as opposed to other areas of medicine.
And in any case, my criterion for difficulty of automating is "does it seem to require a general artificial intelligence or not?". If you have a general artificial intelligence completely indistinguishable from a human, then all bets are off.
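Here's that pixels-only mole setup as a minimal sketch - not a medical tool, just the shape of the problem: a generic pretrained image backbone with a new 3-way head (benign / malignant / high-risk), trained here on a dummy batch instead of a real labelled dermoscopy dataset. The torchvision weights API assumed below is the recent one.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Generic pretrained backbone; the final layer is replaced with a 3-class head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 3)   # benign, malignant, high-risk

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune only the head

    # One illustrative step on random tensors; a real setup would iterate over a
    # labelled dataset with augmentation and a held-out validation split.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 3, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))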
That's great to hear! Did you have the disc replaced? I have similar pain at L5-S1 due to a disc protrusion affecting the nerves in that region (either via impingement or, more likely, inflammation). Unfortunately, the surgeon I consulted said that surgery is rarely performed that far down the spine unless there are really serious symptoms.
In the meantime, I keep monitoring these studies on mesenchymal stem cells for disc regeneration, hoping one of them makes it to clinical trials :(
I had disc replacement as well as fusion, since C5-6 is likely to need surgery in the next few years. I got lucky because the C-spine is much easier to operate on than the lumbar spine. They go in from the front (through the neck) and do not need to touch the spinal cord. For lumbar, they have to.
People need to be able to believe they can still make the leap of awareness in their career - even if the machine produces a better diagnosis, this is, on one hand, no different from referencing a book. People need to be able to learn with the machine in order to build skill that exceeds what AI can presently do. Unfortunately, as I'm finding, lots of that stuff tests my own knowledge. My IDE gives me a complexity score on some of the functions I write. It's easy to focus on lowering that number; it's hard to actually know whether that metric is helping me write better code for my specific environment.
A metric like that is a piece of data. So is anything a machine is going to produce; it's just going to come in a different form. Oracle machines in some ways seem to be an actual thing these days, in that I have to ask myself why: why do all the words I search for line up in this specific way? Why does that produce some of the thoughts I have? How do I know what I know? How can I test that?
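If the score in question is plain cyclomatic complexity (an assumption - IDEs vary), it's easy to pull the same number out of the machine and poke at it yourself, e.g. with the radon package:

    # pip install radon
    from radon.complexity import cc_visit

    source = '''
    def classify(order_total, is_vip, rush):
        if order_total > 100:
            if is_vip:
                return "priority"
            return "standard"
        elif rush:
            return "rush"
        return "standard"
    '''

    for block in cc_visit(source):
        # Cyclomatic complexity = decision points + 1; here 3 branches -> 4.
        print(block.name, block.complexity)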
I think humans can adapt to anything. I think we retain that flexibility as long as we are up for the challenge. That may seem like common sense or folk wisdom, but there are probably good reasons stuff like that sticks around.
Telling your radiologist friend that the computer is flat out better and faster than she is puts the entire security of her future - everything she has built - on a coin toss. Of course she's going to react defensively. If you punch your hand into someone's chest and hold their heart out in front of them, yes, they will likely have difficulty thinking objectively.
People can adapt. It is often very challenging. But it's often also worth changing, if the question of progress versus stagnation is the thing at stake.
People who go into medicine want to save lives, so work with that foundation. Rather than who is going to get sued, direct the conversation towards how many more lives can be saved. It's the same argument as self driving cars. The problem is as we age, we think we have control over permanent stuff.
If we crash the car, that's our fault. If the car crashes the car, that's something we have no control over. But sometimes we might have a random seizure. We don't think about those probabilities when it comes to us driving the car versus the car driving the car, because we become accustomed to a context. But that's just an illusion until shit changes. I'd rather be aware of the easy and obvious changes in a conservative fashion than totally ignorant of the hard ones until a catastrophe I didn't see coming happens.
The interesting challenges will come when trying to explain to a jury just why a sufficiently esoteric algorithm (AI, ML, DL) chose the action it took (surgery vs. “watchful waiting”, braking vs. “lane following”, buy vs. sell, etc.).
All of this stuff is going to call ethics into question. People who do mental health care for truly sick people (people who want to hurt other people) understand very clearly how many factors need to align to produce such a person. That changes our definitions of autonomy; that changes belief.
These are things that are core to people who don't understand computation, and they are core to ego - what makes the lives we live better than the lives we compare ourselves to? That's the lion inside of us, that doesn't give a shit who gets ripped to shreds (or simply can't afford to think about it). I know I am a good person because I have hurt less than all the others. But that's not true. I tell myself this, but is this a thing I can prove?
There are profound arguments to be made about why a machine can do a better calculation than a human does. It has access to more information. If people can't believe that, that's their own ego.
Create a job called computer science lawyer, make sure the judge understands computer science, explain the computation to a jury in a way that explains how the algorithm was designed, align that with present understanding of psychology. Checks and balances.
That's why I don't see it coming in law anytime soon. It's the same reason Google removed all the AI from the search part. Anywhere you need to explain why an answer was chosen, AI is a poor solution because it is so hard to debug.
LexisNexis wiped out an entire level of the law firm hierarchy. Quickbooks and Lacerte significantly reduced the demand for accountants. There's a ton of white-collar, high-skilled work that is really just grunt work that requires knowledge, and those will be automated in the future.
> Quickbooks and Lacerte significantly reduced the demand for accountants.
My understanding is spreadsheets didn't so much reduce accountant employment as change the job from determining the facts to predicting the future; more 'what-ifs.'
If these years of grind are replaced with a higher-level and more intentional early-stage practice of monitoring and managing the automation, then I think we could end up with better-skilled people.
EDIT: Although I suppose that these professionals would need at least some practice at manual work too, to be able to monitor the machines' output.
Does it matter, if the skill level (in diagnostics) of the average doctor plus software becomes much better, while the specialist doctors who work on optimizing the software become much more skilled?
And on the other hand, the average doctor's interpersonal skills will probably improve?
We're a long, long, long way from machines taking over white collar work to any reasonable degree. Even extremely basic commands are routinely misunderstood by voice assistants.
What is going to continue to happen is what is already happening now: less work wasted on bullshit, more potentially interesting findings surfaced for humans to examine. The same type of thing I was working on in 2011 (helping lawyers with discovery - edit: no, I confused things; back then I was helping companies scan internal communication for automatic skill mapping), only better.
We’re a long way away in technology time but not in society time. We might be as far away from this world as we are from the creation of the Web — but society isn’t ready to have 10x the number of unemployed people in 25 years. Political/economic/social systems will not survive — the hope is that they get replaced nonviolently rather than in a bloodbath.
I agree. Mass unemployment is not a concern we can start to address because we do not know the nature of the coming unemployment. We're talking about 20 or 40 years away.
Meanwhile, we have real fucking problems right now:
1. Sky high housing prices.
2. Extreme wealth disparity.
3. Environmental degradation.
4. Cybersecurity failures everywhere, just as everything goes autonomous or otherwise cyberphysical.
5. Information operations threaten both democracy and the openness of the internet.
You can still make $100k a year editing English. The demand for talent is sky high; it's investments in education and society that will help us, not fretting over how AI will take away jobs long before we can do anything about it.
> You can still make $100k a year editing English.
The only employers paying that much are old-school publishers that are a bit of a hold-out but will eventually give way. As a translator often doing full-length books, I have witnessed some big publishing names severely slashing the amount they pay for editing (by doing things like outsourcing the job to India), and they don’t worry about any drop in quality because it is felt that in a web-heavy world, the public no longer cares too much.
It is depressing as fuck for me as a translator, because I put a lot of effort into my translations and I wish they would get the same level of love at the next stage of the publishing process, but this is the trend of the future.
Also, I feel lucky to still have work as a translator because many clients today are running the text through Google Translate and then just paying a native speaker to clean up the text for less money than they would have to pay a translator. Obviously that is not common in the mainstream publishing world, but it is increasingly happening with the marketing materials and technical manuals that are a translator’s daily bread and butter. “AI” is already hitting businesses based on human language hard.
Perhaps mass unemployment and extreme wealth disparity are the same problem. It's known that the productivity gains we've had from computers have mostly accumulated at the top. Workers are more productive, but their pay does not reflect that.
Wealth is moving around less because we don't need to spend (i.e. distribute) as much as we have in the past to generate more wealth. So I think wealth disparity is a strong predictor of unemployment. I also think unemployment is the wrong word because it paints a false dichotomy. There's not a huge lifestyle difference between making $20,000 a year and $0 a year (since the government steps in), compared to the difference between $20,000 and $80,000 a year, or between $80,000 and $200,000.
One reason housing prices are so high right now is because people (mistakenly, IMO) view houses as an investment. No one wants to lose money on a sale. Another reason (for some areas) is probably the productivity gains. Since we need to distribute less to make more wealth, we've further centralized our economy into smaller pockets of land (i.e. cities).
I agreed with everything in your comment until you mentioned the investment in education. To handle education well, we do need to do everything we can to understand the changing demand in the upcoming decades.
Most government attempts to provide targeted education based on expected future demand haven't worked. Educators and bureaucrats are terrible at predicting the future (and HN users aren't any better).
> Meanwhile we have real non-imaginary problems to focus on.
Do you really think it's not worth thinking about the future and trying to predict problems that might occur, and preparing for them beforehand? Isn't not doing this one of the reasons we have real problems now? (Only some real problems, of course, others have nothing to do with this).
No one's talking about infinite problems except you.
Unemployment is a pretty bloody obvious problem that might occur if automation increases.
We also have historical data on jobs that have disappeared, with various levels of success in retraining. This time there are legitimate and serious doubts that the employees affected by automation will be able to retrain at all, because several low-skill types of jobs that were retraining possibilities are also targeted by automation.
Automation is at the highest level in history. Unemployment is at record lows in modern peacetime history (with exceptions like Southern Europe which are not hotbeds of automation). If automation inevitably destroys jobs, how can these facts be reconciled?
Unemployment has been around as long as there has been employment. It's not at all obvious that structural unemployment will increase in the future. You could make an equally plausible argument that it will decline. At this point we're all just guessing.
> The number of problems that might occur is infinite. But our resources and time are finite. So we have to prioritize.
I completely agree!
The reason I think this is an issue worth thinking about is:
1. Logic and common sense make a good argument for automation eventually replacing all jobs.
2. Most arguments against this amount to "historically this never happened even when people thought it would". This is a weak argument for several reasons (the period we're talking about is small, there's no reason to think history has to repeat itself here, etc.).
3. Many smart people think this is a problem, including economists and people from other fields.
If this doesn't make this problem worth thinking about, what does?
10x is a hyperbolic figure (imo), but I disagree that it is pure speculation to think about job displacement from increasingly sophisticated ML systems.
We definitely do have real, non-imaginary problems to focus on, but we shouldn't just ignore future problems. That's hyperbolic discounting at its worst.
> society isn’t ready to have 10x the number of unemployed people in 25 years
Nobody can predict those numbers; any number about unemployment is essentially made up. And fallacious as well, according to the lump of labour fallacy.
Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social systems.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social systems.
Hasn't it? It completely killed off mercantilism and mostly killed off monarchism.
It wasn’t the industrial revolution that killed monarchism. The home of the industrial revolution was and is still the United Kingdom, after all.
Advances in politics made monarchies less relevant, not industrialization by itself, with France, the US and later Russia being the key examples. The US and Russia industrialized later than France and still-monarchies like the UK, Belgium, Denmark etc. Industrial giant Germany lost its king by losing a World War: despite industry, not because of it.
I think this time is different because the rate at which technology can improve is growing increasingly fast and society/people's ability to change is not increasing at a commensurate rate.
The change is also changing. The first waves of automation reduced demand for brute and repetitive physical labor - almost everyone impacted was eligible for the less physical jobs made available by automation.
This move towards automating thinking will not follow the same trend.
You can't extrapolate the rate of technology improvement based on current trends. It may slow down in the future. For example we're already hitting serious obstacles in basic physics when it comes to semiconductors.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social systems.
Communism was a direct response to the wealth inequality of the industrial revolution. That was one of the most disruptive political/economic/social upheavals of all time.
> Meanwhile, the industrial revolution (of which automation is a part), in its 200+ years, hasn't caused the death of any political/economic/social systems
It not only killed off most pre-capitalist systems, it also mostly killed off the original system named “capitalism” (mostly in favor of modern mixed economies.) It also both spurred and then killed off Leninist Communism.
Very much agreed with this comment. When people compare today's technological displacement of people to the industrial revolution and say "yeah, but people will as always find new work", I truly think this time is VERY VERY different because of the pace at which technological gains are increasing.
I do not think our society, which has recently had (and still has) people working in one job function (often at one company) for 30+ years, is ready for the fast-paced change of needing to change jobs every few years.
Look at the rate at which technologies and frameworks get displaced in tech. Try being an Angular developer for 20 years... I think people in software have come to expect rapid change because it's the nature of the game, but that rate of change is unreasonable to expect every industry everywhere to endure.
> I truly think this time is VERY VERY different because of the pace at which technological gains are increasing.
The pace is not increasing. There are radical changes now, but no more than in the last few hundred years.
Two "poster children" of technological advancement (AI and nuclear) are always around the corner, but never here. And even if they were here, there would be no catastrophe, just as there wasn't with green energy and transistors.
> Look at the rate at which technologies and frameworks get displaced in tech. Try being an Angular developer for 20 years...
This is irrelevant; Angular or not, the software engineering market is evergreen, even with poorly skilled software engineers (I do know some).
This example is actually a counterargument to the thesis: displacement of technologies in this field did not lead to displacement of jobs.
In the last 25 years (which takes us back to the late 1990s), technology has made disappointing progress even in well-defined areas like speech recognition. We may be decades away from a breakthrough; we may be centuries away.
Exactly! It all seems so special now, but IBM was already selling voice-processing software that supposedly helped you write a Word document. That is still not possible even today, and voice recognition is one of the biggest obstacles AI has to overcome before being really effective.
For example, can Siri or Alexa delete a specific phone number from a contact? Something like "ok Google, delete home number from X" is not possible right now.
Even extremely basic commands are routinely misunderstood by humans, too.
With the human you can have a conversation that will hopefully avoid similar confusion in the future which eases the frustration for many. How long before machines allow us to do the same? My bet is months, a year or two at the outside. After all, we already have conversational interactions. Building on those and tying an intent interpretation to a clarifying correction provides high information training data. There's already a swath of old, well understood ML techniques -- mean subtraction, singular value decomposition, boosting, etc. -- aimed at differentiating information from data. Instantaneous classification of responses as errors followed by information on a better response? That sounds like training gold.
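As a toy illustration of two of those techniques (mean subtraction and SVD) on synthetic data - nothing assistant-specific, just the "separate the dominant structure from the noise" step:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic "interaction log": 200 samples x 50 features = noise plus one strong
    # shared direction of variation.
    X = rng.normal(size=(200, 50)) + 3 * rng.normal(size=(200, 1)) * rng.normal(size=(1, 50))

    X_centered = X - X.mean(axis=0)                  # mean subtraction
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

    k = 5                                            # keep the k strongest components
    X_denoised = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(f"top-{k} components explain {explained:.1%} of the variance")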
"Even extremely basic commands are routinely misunderstood by voice assistants"
I think the three most popular voice assistants (Siri, Google & Alexa) do for the most part get the basic commands right and even some complex ones.
The struggle with using voice assistants, for me, is the human interaction. When I am in a social setting or walking outside on the street, I can't see myself using one. When I am alone, or maybe with a significant other, I can see it being used a lot. I rarely see people actually using Siri on a train or when they are out and about.
I tried "put the world cup on" with google this morning. It didn't work. I have Youtube TV and a Chromecast so it has access to everything it needs, it just didn't know what I meant.
There's room to deskill white-collar work, and to a degree people may find surprising. I think science and marketing are most at risk from automatic causal modelling.
If it didn't require above-average skill or more, it would mean your average Joe on the street should be able to come up with patterns that sell above average. (The latter might be plausible at the end of the day, but professional cloth designers will probably beg to differ and/or point out that it's a recipe for a lack of creativity.)
There's a wide spectrum between average and high skills; the parent referred specifically to high.
Besides, the article, which is very poor and click-baity, starts with a factoid (algorithms designing shirts) and then steers into something completely different - big data.
> "Suits make a corporate comeback," says the New York Times. Why does this sound familiar? Maybe because the suit was also back in February, September 2004, June 2004, March 2004, September 2003, November 2002, April 2002, and February 2002.
> Why do the media keep running stories saying suits are back? Because PR firms tell them to. One of the most surprising things I discovered during my brief business career was the existence of the PR industry, lurking like a huge, quiet submarine beneath the news. Of the stories you read in traditional media that aren't about politics, crimes, or disasters, more than half probably come from PR firms.
> I know because I spent years hunting such "press hits." Our startup spent its entire marketing budget on PR: at a time when we were assembling our own computers to save money, we were paying a PR firm $16,000 a month. And they were worth it. PR is the news equivalent of search engine optimization; instead of buying ads, which readers ignore, you get yourself inserted directly into the stories.
Precisely! The least they could've done is mention what it's called for people who might want to learn more, rather than allude to it being some proprietary AI technique that the company came up with.