I don't even lean toward the worst-case AI narratives, but it sure feels like Economist journos will keep pushing out "here's why AI won't take your job" articles, even as their own writers get quietly pushed out by ChatGPT creeping across their open-plan office one desk at a time.
In this piece, they lean heavily on precious "official American data" and celebrate the increased number of people working in translation, while conveniently ignoring more telling figures, such as what those translators now actually earn per unit of work.
My partner works in university administration, and their "official data" tells a much spicier story. Their university still ranks highest in our country for placing computer engineering grads within six months of graduation. But over just six terms, the number of graduates employed within six months dropped by half. That's not a soft decline by any means; it's more like the system breaking in real time.
I'm on the other side of the fence at BigCo, and from where I'm sitting we're not hiring any Americans because we're in the middle of the biggest outsourcing push I've ever seen.
The take seems to be "if your job can be done from Lake Tahoe, it can be done from Bangalore". What's different this time around is that the entire tech organization is being outsourced, leadership and all. Data Science and other major tech-adjacent roles are also affected.
For us, the hiring rate for tech and tech-adjacent roles in the US has been zero for several years. 100% of that is attributable to outsourcing, 0% to AI.
I'm seeing the same thing. We have a formal IT hiring freeze and all jobs are moving overseas. But it's not AI that's been eating the jobs, just traditional outsourcing.
If we were in boom times, less hiring would be a convincing signal. But the global economy is toast right now. There are very, very good reasons not to hire engineers, and it's plausible AI has nothing to do with it.
Anecdotally there’s no way AI has enabled me to replace a junior hire.
AI has major problems. It's a fantastic tool, but right now I'm seeing a boost similar to the one from the emergence of Stack Overflow. That might increase, but even then we may just see higher productivity.
I think recent history has made Solow's paradox ("you can see the computer age everywhere but in the productivity statistics") more interesting than Jevons paradox, which mostly seems to be talked about by people with something AI-related to sell, and less so by economists. Jevons seems to have applied much better early in the industrial revolution. I'm not sure economists even work on it anymore (or whether it was ever a very interesting topic for them; the writing on it is very sparse by comparison).
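For anyone who hasn't seen it stated, the textbook form of Jevons is just an elasticity condition. This gloss is mine, not the article's: take constant-elasticity demand q = A p^{-ε} for some service, normalize the initial resource cost per unit to 1, and let an efficiency gain g > 1 cut both the resource needed per unit and the effective price by a factor of g. Then

\[
q' = A\,(p/g)^{-\epsilon} = g^{\epsilon}\,q,
\qquad
R' = \frac{q'}{g} = g^{\epsilon - 1}\,R,
\]

so total resource use rises exactly when ε > 1. Solow's paradox is the empirical flip side: the tech is visibly everywhere except in the aggregate productivity numbers.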
I work in network security, specifically on the automation team. As we stitch more and more processes, products, and monitoring together, new demand is created and our scope grows (unlike our headcount right now).
Being able to automatically write unit tests with minor input on my part, create mocks, and sometimes (mostly on front-end work or basic interfaces) even generate code makes me more 'productive' (not a huge increase, since we work with a lot of proprietary stuff), and I'm okay with it. The sketch below gives a rough idea. I also use it as a rubber duck, on the advice of someone on HN, and it was a great idea.
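To give a flavor of the unit-test part: the names and test below are hypothetical (our real stack is proprietary), but this is the rough shape of what an assistant drafts from a one-line prompt like "test that poll_device flags a device whose API reports 'down'":

    # Hypothetical example, not our real code.
    import unittest
    from unittest import mock

    def poll_device(client, device_id):
        """Return True when the device needs attention."""
        return client.get_status(device_id) == "down"

    class PollDeviceTest(unittest.TestCase):
        def test_down_device_is_flagged(self):
            client = mock.Mock()
            client.get_status.return_value = "down"
            self.assertTrue(poll_device(client, "fw-01"))
            client.get_status.assert_called_once_with("fw-01")

        def test_up_device_is_not_flagged(self):
            client = mock.Mock()
            client.get_status.return_value = "up"
            self.assertFalse(poll_device(client, "fw-01"))

    if __name__ == "__main__":
        unittest.main()

The mock wiring is the boring part it reliably gets right; the judgment calls about what's actually worth testing still sit with me.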
I think a ton of hidden signals of a waning economy are being obscured by AI and globalization talk. Sure, some attempts to globalize and use more AI are genuine, but those are still cost cutting measures. And we cut costs when things aren't going well. There's just no way a brand new tech has penetrated the market enough to depress every sector of tech- or language-adjacent employment.
There's also something about tariffs and gutting government investment that I've been hearing about from the US. I'm no economist, but it's possible that might have something to do with the economy waning.
I do not disagree with your broader point, but it's worth noting that The Economist article deliberately framed its analysis around datasets that also wouldn't capture the economic slowdown!
The word "keep" here is synonymous with "continue," implying that this is already happening. It's fair game to ask if that's actually the case.
And this is an aside, but English sentences almost always have some degree of ambiguity; to talk about "well-defined meaning" in the context of natural languages is to make a category error.
It's ambiguous whether the clause after "as" is present or future tense, since "will keep pushing" presupposes the action is already happening.
I'm not sure either. I'm pointing out that The Economist is presenting misleading metrics deliberately: based on the "reliable American data" that they chose, you wouldn't see evidence of an ongoing recession either!
Do you really think half of these grads aren't getting jobs because AI is replacing coders, in the short time these coding assistants have been available?
I mean, I've tried Claude Code - it's impressive and could be a great helper and assistant, but I still can't see it replacing such a large number of engineers. I still have to look over the output to check it's not spitting out garbage.
I would guess basic coding will be replaced somewhat, but you still need people who guide the AI and detect problems.
You can't build a business on scaling out to hiring 1000s of AI experts. You only need so many, which is why they get higher salaries. There will never be an Infosys or Tata for such workers like there was for many of us mere coders. Infosys and Tata will likely benefit, but their average worker will not.
Well, these "AI experts" are just senior devs, and without ever being a junior, you'll never become a senior. So there will still be junior devs. They might not cut their teeth on CRUD apps anymore, but we will definitely have them.
I can see current models replacing *fresh graduates*, on the basis of what I've seen from various fresh graduates over the last 20 years.
I don't disagree that models make a lot of eye-rolling mistakes; it's just that I've seen such mistakes from juniors too, and this kind of AI is a junior in all fields simultaneously (unlike real graduates, who are mediocre at one thing and useless at the rest) and costs peanuts; literally, it's priced at about the cost of an actual bag of peanuts.
Humans do gain an advantage very quickly once they get real-world experience, so I'd also say it's not yet as good as someone with even just 2 years of experience. One caveat: I'm usually a few months behind on model quality, on the grounds that there's not much point paying for the latest and greatest when the best becomes obsolete (and thus free) on that timescale.
These state-of-the-art models are barely able to code an MVP app without tons of hand-holding. Do you really think new grads are getting replaced by AI? I only see statements like these coming from the likes of Elon Musk.
I think that's the problem. The people who hold the keys to the hiring money are often off the tools and have no real grounding in the capabilities of the current generation of LLMs. They make decisions about how much (or how little) to hire based on the junk they see from the Elon Musk types.