[flagged] 77% of employees report AI has increased workloads and hampered productivity (forbes.com/sites/bryanrobinson)
71 points by layer8 59 days ago | 65 comments



This is a paid ad by Upwork masquerading as an article.

It’s based on a study that Upwork sponsored, and it cites a bunch of conclusions that make freelancers seem attractive to big orgs:

C-suite executives, bringing in freelance talent into their workforce say freelancers are meeting productivity demands and often exceeding them, outpacing full-time employees. The level of well-being and engagement has improved. And they have doubled the following outcomes for their business: organizational agility (45%), quality of work being produced (40%), innovation (39%), scalability (39%), revenue and bottom line (36%) and efficiency (34%). The findings also show that 80% of leaders who leverage freelance talent say it is essential to their business, and 38% of leaders who don’t already leverage this talent pool intend to start in the coming year.

This is a stealth ad meant to engender FOMO in business decision makers: “Your existing employees won’t get results quickly because AI is overloading them - instead, hire a clever freelancer who has figured out how to use AI to their advantage, and reap the rewards!”


Came here to say the same thing, after trying to find a link to the original source, i.e., the purported "study" mentioned in the headline. I was unable to find it.[a]

This doesn't deserve to be on HN.

---

[a] I mean a proper study with an explanation of methodology, proper statistics, and sources of data. I could only find a press release (https://investors.upwork.com/news-releases/news-release-deta...) and a short blog post with limited information (https://www.upwork.com/research/ai-enhanced-work-models).


That’s the whole point of Forbes today


Turns out I only needed one out of five 'whys' to find the root cause:

The majority of global C-suite leaders (81%) acknowledge they have increased demands on their workers in the past year.

If your C-suite have bought into the dream that AI magically makes everyone more productive, but haven't invested the time or cash to roll it out in well-understood, provably useful ways, then employees are going to find themselves fighting to get the 'expected' (read 'magical dreams') productivity gains, and they'll waste a lot of time trying to apply AI to problems where it doesn't really fit as a solution.

None of that says anything about AI and its usefulness in the right context. It's all just people who don't understand a problem or the right solution jumping in and saying "I know best because I'm the highest paid!"


One would expect this is at least somewhat related to the efficacy of LLMs or genAI in general at solving actual problems, but this is still a horrifying stat, whatever is to blame. 81% is beyond a bubble-level scale of investment across the economy in vaporware.


> C-suite have bought into the dream that AI magically makes everyone more productive

I don't think they are just that naive. Modern stock markets expect CEOs to collude with them in pumping up the stock price by any means necessary (including outright lying).

"LLMs are going to lead to AGI, and its just around the corner". Everyone knows this is bullshit but we have CEOs of mega corps trafficking in these lies openly because markets expect them to and keep the gravy train rolling.


This is what happens when expectations of productivity gains thanks to AI are not realistically set:

"Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that, 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains." (cf. Forbes article below)

The reality: AI models (generative or not) are useful in specific cases, not all cases. Failing to acknowledge that and failing to strategise accordingly only leads to short term success and long term pain. For example, use cases that imply relying on LLMs as reasoning engines are doomed to fail given the current state of the art. If you want to know which use cases make sense, check out my articles on medium (DMs also open):

https://medium.com/thoughts-on-machine-learning/where-genera...

https://medium.com/thoughts-on-machine-learning/chatgpt-and-...


We keep getting assaulted by co-pilot, which sucks in the first place, but also doesn't really work correctly given our non-standard Microsoft environment. One of my analysts tries to use GPT to solve problems, but he doesn't understand the problems he's solving, so he can't properly evaluate the solutions GPT is spitting out. Honestly, I hate AI so much, I hate the hype, I hate how companies are pushing it. For most cases it's an enormous waste of energy, and companies are much more afraid of missing out on revenue than they are of wasting or misusing technology.


It's useful if you can't be arsed to type trivial stuff in, but you have to know how to do the trivial stuff in the first place because you have to be ready to correct it.

So it wouldn't help a beginner to learn. Kinda like modern StackOverflow.

Mind, I've only used the public LLMs, not copilot's code completion. That one I've only tried once and it has the potential to be extremely annoying.


> It's useful if you can't be arsed to type trivial stuff in, but you have to know how to do the trivial stuff in the first place because you have to be ready to correct it.

This. The only exception I’ve found to this rule is shell one-liners. I’m not sure why they’re so good at them; maybe the terse nature helps? However, I’ve also never had it do anything that I couldn’t figure out in awk on my own.


Thankfully my company isn't trying to inject AI into everything, and we're the better for it. I do use ChatGPT a little in my job, but only a little. This generation of LLMs is simply far too unreliable to be depended upon for anything serious. AI is not going to make you more productive if you need to double-check everything it outputs.

IntelliJ recently introduced a feature which uses AI to complete a line of code. Almost every time, it produced incorrect code: it would generate the line and immediately there would be red squiggly underlines highlighting all the errors. It wasted my time, and I'm more productive with that feature turned off.

No doubt AI will continue to improve, but the current state of the art simply isn't good enough to make most of us any more productive. Often the opposite.

I may be using the term AI too broadly here, but hopefully you understand I'm referring to LLM chatbots like ChatGPT, and related technologies like Copilot.


> This generation of LLMs is simply far too unreliable to be depended upon for anything serious.

I have managed to learn ways to make LLMs very productive. Most of this project was written by Claude where I acted as a very high level architect.

https://github.com/williamcotton/guish

It benefited from using a Claude Project and keeping the source up to date. It is also a good practice to bail on a thread quickly if it is being unhelpful. And the biggest tip is to be an overly pedantic technical communicator.


This article reads like a covert ad for Upwork, a platform for hiring freelancers.

> C-suite executives, bringing in freelance talent into their workforce say freelancers are meeting productivity demands and often exceeding them, outpacing full-time employees.

> a fundamental shift in how we organize talent and work

> leveraging alternative talent pools

= "You should hire freelancers instead of full-time employees."

> outdated work models

I guess they try to project the idea that full-time employment is outdated.


> Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains.

Well, there’s the problem. This is just like the 80s and early 90s when execs decided to drop computers into the workflow of their employees and expect instant improvements.


computerized! as seen on tv! new, enhanced with uranium! radium brand stockings really make your legs shine!


Very poor article. It doesn't link to the study, and it doesn't give details about what type of workers the study examined.


It looks like it tries and fails to link to the study. I did a quick google and found this, which is what I think it's trying to link to:

https://www.upwork.com/research/ai-enhanced-work-models


This is not a study. This is an executive summary of a study.

This sort of stuff rankles me. Without the numbers, questions, and methodology there’s no way to ascertain what errors the folks who created the study committed, if any.


This part resonated with me:

> To add insult to injury, nearly half (47%) of employees using AI say they don’t know how to achieve the expected productivity gains their employers expect, and 40% feel their company is asking too much of them when it comes to AI.

I routinely talk to clients and partners where the business decision makers are just utterly delusional about what AI will produce for them. They genuinely seem to think the age of employing people is ending which I guess isn't a shock since that's what Sam Altman and the media have been telling them.

Meanwhile internally we're just puttering along using Copilot and it's definitely a force that can be used for great good or great ill. I can say it has... Further reduced my appetite for hiring people to do programming tasks that should be automated out of existence anyway? That seems like a fair assessment. It's somewhere between a tool and a toy for helping complete the real work.

Edit: oh yeah, and sooooo many tech products out there right now burning dev time on AI features that aren't really useful.


> They genuinely seem to think the age of employing people is ending which I guess isn't a shock since that's what Sam Altman and the media have been telling them.

I wonder if they have good answers as to who will buy their products after all the jobs are gone. Reminds me of https://quoteinvestigator.com/2011/11/16/robots-buy-cars/ . An anecdote 70 years old at this point but seemingly evergreen in its applicability.


> I routinely talk to clients and partners where the business decision makers are just utterly delusional

You can just stop there; the problem is not necessarily with AI but with the type of people who have power.


on the plus side, the sort of investors who entrust their companies to management that bets them on ai without understanding how to use it will not have power much longer :)


Hate to say but this absolutely squares with my experience. The only people excited are people who simply do not know enough to know they are wrong.

Eventually the economics will catch up to these middle managers but unfortunately I don't think they'll figure out they themselves were responsible for the coming crash.


I don't know about the roles surveyed, but at my job technologists are only allowed to use slow, half-baked IDE integrations. The result is that everything is slower, because they eat so much RAM.


I feel like we focus too much on getting from XX% of good LLM suggestions to 100%, even though it's likely we'll end up stuck somewhat far from this, far enough that it remains a drag if we leave them in the way.

I wish we'd focus more on UX and better integration to leverage these XX% into positive productivity gains. Make it easy to discard suggestions, make sure the non-LLM (default) path remains as productive as it has been so far.

As I'm writing this I realize that I haven't tried GH Copilot in a while (and I should). Does it achieve this, already?


+1 on feeling there are a lot of UX possibilities left on the table. Most seem to have accepted chat as the only means of using LLMs. In particular, I don't think most people realize that LLMs can be used in very powerful ways that just aren't possible with black-box API services as they currently exist. Google kind of has an edge in this area with recent context caching support for Gemini, but that's just one thing. Some things that feel like they could enable new modes of interaction aren't possible at all, like grammar-constrained generation and rapid LLM-tool interactions (think a REPL or shell rather than function calls; currently you have to pay for the input tokens all over again if you want to use the results of that function call as context, and it adds up quickly).
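To make "grammar-constrained generation" concrete, here's a rough sketch of the idea. model and grammar are hypothetical stand-ins, not any real library's API; the point is that you need per-step logits access, which black-box chat APIs don't expose:

    import math

    def constrained_generate(model, grammar, prompt_ids, max_tokens=256):
        # Sketch only: at each step, pick the best next token among
        # only those the grammar can legally accept, then advance the
        # grammar's state. 'model' and 'grammar' are made-up objects.
        out = list(prompt_ids)
        for _ in range(max_tokens):
            if grammar.is_done():                  # a full parse was produced
                break
            logits = model.next_token_logits(out)  # needs raw logits access
            best, best_score = None, -math.inf
            for tok in grammar.allowed_next_tokens():
                if logits[tok] > best_score:
                    best, best_score = tok, logits[tok]
            if best is None:                       # grammar allows nothing: bail
                break
            out.append(best)
            grammar.advance(best)                  # consume token, update state
        return out

The output is guaranteed to match the grammar (e.g. always-valid JSON), with no retries or "please only output JSON" begging.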

On Copilot, I've been using it since it was public, and have always found it useful, but it hasn't really changed much. There's a chat window now (groundbreaking, I know) and it shows a "processing steps" thing that says it's doing some distinct agentic tasks like collecting context and test run results and what have you, but it doesn't feel like it knows my codebase any better than the cursory description I'd give an LLM without context. I use the jetbrains plugin though, and I understand the vscode extension has some different features, so ymmv.


Visual Studio definitely takes up a lot of my time. Even PowerShell takes about 2-5 seconds every time I open a tab in Windows Terminal.


AI has helped me a lot to write code. At a conference recently someone said it makes a senior 10x more productive (I'd say 2x), but a junior 50% less productive.

You need to understand what ChatGPT is saying.

I wanted a backward-compatible change to some code. ChatGPT proposed something, and I was sure it was not backward compatible. I argued for 10 minutes, but ChatGPT insisted it was (the issue was around lower/uppercase handling).


Maybe I'm in the minority in how I use the AI tools. But I really don't let them write big chunks of code for me. They're great for tricky one- or two-liners or for esoteric shell commands. But I won't let them go hog wild on my code base.


Almost all of my side-effect-free functions are done with ChatGPT (e.g. sort the list of items by X then Y then return Z, or find A in this deep structure of map[map[map[...]]]). If it doesn't need to use lots of APIs, it's usually right.
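E.g. something like this (an invented illustration, not real code from my projects):

    # Sort items by category, then by rating descending, then return the
    # price of the top item from a deep dict-of-dicts-of-dicts structure.
    def best_price(items, catalog):
        ranked = sorted(items, key=lambda it: (it["category"], -it["rating"]))
        top = ranked[0]
        return catalog[top["category"]][top["brand"]][top["sku"]]["price"]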

For my website in Hugo, it writes all the code for partials and shortcodes.


"10x more productive" means they were selling AI tools :)

The only scenario in which I can imagine the LLMs increasing my productivity 10x is if I had to write code to create a new window with lots of controls on it from scratch. And that only for the initial setup of the controls.

But then I'd have to, you know, write the functionality for those controls...


It often helps when you need to do something that you haven't done before. Letting ChatGPT show you the code (e.g. in another programming language, because it's a script) and learning from the correct code makes development faster.

Outside of learning, for something you have done a hundred times already, ChatGPT doesn't help a lot.

So I guess how much ChatGPT helps depends on the ratio of unknown work to known work.


> and you learning from the correct code

Yeah, except is it the correct code? I had answers with the correct code but also answers with code that looked correct but had no connection to what I wanted.


I used to see some devs brute-force autocomplete on the IDE to find a method, and now I see people spend hours trying to make ChatGPT spit out a valid block of code for a particular task.


Lol yes. I once spent a few hours trying to get GPT-4 to produce a fully-featured B+tree in Python. I knew how they worked, but had never sat down and written a class for one, so I wanted to see if it could manage. It could not; it kept getting hung up on re-balancing when a split occurred.
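For what it's worth, the leaf-split step it kept fumbling isn't much code once you see it. A minimal sketch (leaf split only, nowhere near a full B+tree):

    import bisect

    class Leaf:
        # Minimal B+tree leaf: sorted keys, parallel values, sibling link.
        def __init__(self, order):
            self.order = order              # max keys a leaf may hold
            self.keys, self.values = [], []
            self.next = None                # right sibling, for range scans

        def insert(self, key, value):
            # Returns (separator_key, new_right_leaf) on split, else None.
            i = bisect.bisect_left(self.keys, key)
            self.keys.insert(i, key)
            self.values.insert(i, value)
            if len(self.keys) <= self.order:
                return None
            mid = len(self.keys) // 2
            right = Leaf(self.order)
            right.keys, self.keys = self.keys[mid:], self.keys[:mid]
            right.values, self.values = self.values[mid:], self.values[:mid]
            right.next, self.next = self.next, right
            # The detail it kept botching: in a B+tree the separator is
            # *copied* up to the parent; the key itself stays in the leaf.
            return right.keys[0], right

The hard part is propagating that return value up through internal-node splits, which is where it kept contradicting itself.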


I write software for the publishing industry, particularly for business-to-business publications. The rise of AI is a huge transformative force here. Those who aren't injecting AI into their products are at a large disadvantage.

The people talking about the disadvantages of AI on this page are mostly right. But for the typical article in a B2B magazine, AI can already produce output better than many, or even most, entry-level writers. Editors are doing that themselves in lieu of hiring. We're just removing the copy-and-paste-from-a-chatbot step. This is table stakes now; people are doing this not expecting to get ahead but just trying to keep up.

To push back is to be the milk man who refuses to trade in his old horse for a truck. That choice may be good for the horse, but not for long. As a programmer I identify more with the horse than the milk man. I too am trying not to get knackered.


In my opinion, this is largely due to the fact that the focus is mostly on quantity and not quality and has been for a while.

Writing has been about putting out X puff pieces per day and very little of the industry is actually interested in providing quality pieces of research/journalism.

Because “everything for the short term gains”, which is rampant in all creative industries right now.

Of course AI can write puff pieces or top 10 lists or possibly make a typical Marvel movie soon.

But who wants to read it?


And half the time you can tell, because the article reads like it was put together by something that doesn't understand the complete picture.

I.e., they've produced crap.


well no one was reading those articles in the first place



This is also not the study, but a summary of the study.

At this point I’ll email their POC and see if they’ll release the actual study itself.


This is almost the exact opposite of the responses we just got from a survey of around 800 people, which showed that 78% of them regularly use AI/LLMs to augment their work, and of those, over 90% reported that they've found it to improve their work efficiency.


I think it's just an expectations question. Current-gen models are a relatively minor productivity enhancer on the margin. Orgs that treat them as such are reaping the benefits. Those that think it means they can get 20% extra out of their existing workforce... are not.


I can only speak for myself here, but with AI/LLM augmentation I'm capable of achieving a lot more, including in areas where previously I'd either need a lot more learning time or have to enlist the help of peers. It's near impossible to measure, but for software development my output, at the same or higher quality, is at least 2-3x, across several languages I wouldn't otherwise have considered.


It’s strange to combine “increased workloads” and “hampered productivity” in the same stat.

Should be obvious that employees will nearly always either believe or want others to believe that their jobs have gotten harder. It’s not in their best interest to say that things have improved.

I’m in the 23% here, I guess.

My workload has increased and my productivity has increased. I’m not a programmer — I work in marketing. But there have been dozens of small scripts and automations that I could’ve written without AI but that would’ve been frustrating for me to figure out that I can now whip up in minutes with AI.

I really hope that people hate AI and fight against using it because it gives me an edge.


> I really hope that people hate AI and fight against using it because it gives me an edge.

This is my problem too. AI is amazing for my work; I finish a day's worth of work in two hours.

At the same time if everyone else started using it I'd have to work 8 hours a day again.


I'll tell the other side of the story.

I am not allowed to use ChatGPT and Copilot in my work.

As a result, I feel hampered, terribly. Especially because I usually work on hobby projects in the weekends where I use both consistently. Not a day goes by that I don't ask ChatGPT to reorganize some code, tell me what a function of some library does, or tell me about a specific error I am facing. Multiple times a day I will write a comment about what is going to happen in the next line and have Copilot generate the rest for me.
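E.g. something like this (a made-up example of the comment-first flow):

    # parse "key=value;key2=value2" into a dict   <- I write this line...
    def parse_kv(s: str) -> dict:                 # ...Copilot writes the rest
        return dict(pair.split("=", 1) for pair in s.split(";") if pair)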

I feel like a carpenter being forced to use a screwdriver instead of a drill.


I'm currently unemployed, and my last job ended over two years ago, so my experience is more related to the pre-AI boom era.

Back then, I often felt that the software products we were developing could have been created by much smaller teams of experienced programmers, or even by a single programmer. I'm referring specifically to direct programming, excluding management, QA, and devops. My professional experience is primarily with startups and small companies, but I believe this idea could extend to some larger products as well.

This raises the question of whether I, as a programmer, was productive enough. I believe that my colleagues and I were quite productive, and we performed our daily tasks honestly and fairly. However, I feel that our responsibilities were artificially limited. I think my productivity could have been much higher if my responsibilities within the company had been expanded. At least, this is what my personal, non-commercial experience with my pet projects in my spare time suggests.

I understand that a pet project is not the same as a business solution, but I believe the core issue is not that AI affects programmers' productivity, but that AI has helped management realize that increasing the number of programmers does not necessarily improve product quality.

I also found Josh Christiane's video on this topic very insightful: https://www.youtube.com/watch?v=hAwtrJlBVJY


The headline is based on this claim from the report:

> Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way

Which, to my understanding, was not a question asked directly, but is instead the total number of people who answered "yes" to at least one of several sub-questions: whether they're "spending more time reviewing or moderating AI-generated content", "invest more time learning to use these tools", "are now being asked to do more work", etc.

So you could have spent some time learning the tools, realized massive productivity gains from them, and still be included in that 77%. It's not a measure of whether AI increased your workload on net.
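Back-of-envelope illustration of how an "at least one way" aggregate inflates; the seven sub-questions and the 19% per-question rate below are made-up numbers, not from the report:

    # Made-up numbers for illustration only; not from the Upwork study.
    k, p = 7, 0.19               # 7 sub-questions, each ~19% "yes"
    any_yes = 1 - (1 - p) ** k   # P(yes to at least one), if independent
    print(f"{any_yes:.0%}")      # -> 77%

Seven modest per-question rates are enough to produce a scary-looking headline number.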


The best use of it is just having it there. Being able to drop a question into ChatGPT and get ideas or data back quickly is a great relief whenever you face a groan-type task.

But what it's really shone a light on is that, for most tasks, it's the formulating of the problem itself that takes up the bulk of the work, not executing the solution. And without being able to know when its solution is inappropriate, I don't see how it's not going to create "two steps back" situations for sub-senior developers.


This is nothing more than marketing propaganda by Upwork. Flagged.


This is an advertisement with highly questionable statistics and data. Its informational utility is so poor I’d almost think it was written by an Upwork freelancer using AI.


Link to the study: https://www.upwork.com/research/ai-enhanced-work-models

It's definitely biased but still interesting.


First things first: the "article" (if it can be called one) is complete garbage and is nothing more than an advert for Upwork. That being said, the (clickbait) title resonates with me, and with what I can observe among my colleagues, at least.

It's clear to me from talking to the C-level in my company that they've been completely hoodwinked by Sam "Give me your biotelemetrics" Altman and others like him into thinking current-day LLM technology is some sort of superhuman AI capable of replacing entire teams. The only thing they hear in their circles is "Infinite productivity and growth!!!", and they're salivating at the thought like an abused dog hearing a dinner bell for a dinner that's never actually coming.

We build customer support software, and the CEO is adamant about "replacing 95% of human agents with an AI chatbot". So we've been building out this AI feature (read: we're calling OpenAI's APIs with some custom prompts) and it's been laughable how fucking useless and unusable it is. The responses are full of lies & contradictions and never actually answer any queries with any kind of accuracy once you examine the output for longer than 2 seconds. But the C-level is loving it, despite it being a massive resource drain in every conceivable way, I suspect because now they can put "We have AI!" in their pitch to investors.

I dream of a massive solar storm that wipes all of this crap off the planet for good, because I'm not sure I can handle the incoming future full of spam, scams, and lies automated at such an insane rate.


I think it may be a self-fulfilling prophecy of sorts: the more you struggle to make it reasonable, the more the C-suite will think that their current IT teams are overpriced and useless, so AI consultants must be the way!

Of course reality will eventually catch up with this, but it may take a surprising amount of time, with the usual service-quality degradation and loss of customers.


> The part that is not debatable is that not only is AI not going away, it’s on the upswing. You can get that tattooed. So it’s important to develop a reasonable comfort around its use.

I mean, this is a bold claim to make in an article about the thing being bad. Like "this reduces productivity, but it will inevitably take over the world anyway, because reasons" is a weird stance.


This is a sham study funded by Upwork, whose business is likely being hit by AI, since many low-skilled office tasks can now be automated.


so AI increases workloads and hampers productivity

But only for employees

freelancers are very efficient with using AI

so the solution to the problem is to hire freelancers

and the best place to find freelancers is Upwork

and the study is conducted by Upwork.

got it.


That's because the workforce doesn't know how to use AI. The expectations are legitimate for me as an employer.


The headline is the clickbait, but the overwhelming argument of the article is an advertorial for outsourcing.

(Also, a video keeps sliding in over the article: "Miley Cyrus Explains Why Growing With Her Audience Means So Much To Her".)

Forbes seems to be garbage now, and maybe we shouldn't reward them by upvoting articles like this, even if the clickbait appeals (so people are going to want to comment without reading the article).


(why don't you use an adblocker and/or reader mode?)


I do use uBlock Origin. The video seemed to be 'content' of Forbes, which reflects upon Forbes more than whatever ads they run.

I don't use Firefox Reader Mode because (against all reasonable expectation) it bypasses the blocker, permitting tracking like crazy.


No one has to use anything. Like. Don’t click the button.


So the AI-generated Recruiter Spam I have to deal with comes because "I" pressed a button?

I'm not that surprised by the findings. Someone has to deal with the generated crap people produce, and usually they don't get to decide.


I have personally experienced employer-mandated AI inside my IDE, and they track if you disable it.



