My 2024 stance is "buy every AI add-on and decide whether to keep it next year"
So our team has access to Enterprise ChatGPT, Gemini, Notion AI, Slack AI, and basically every AI add-on in every SaaS platform that offers it as an upsell (Github Copilot, ReadMe AI, etc).
2024 is the year of "I don't know what the hell these AI tools are going to be useful for, so let's buy them all"
2025 will be the year of "Ok, we spent $xxx on all these AI tools last year, is anyone actually using them?"
I predict we'll be canceling a lot of those subscriptions, which, all in, cost us over $100/mo per employee.
That's a nice idea, but it sounds like you must be spending far more than $100/mo in total. Back-of-the-envelope math suggests you might be spending the equivalent of an additional salary, and in many companies that's a hard thing to justify without any proven value.
Yes, and my point is that this is an expensive process that is not accessible to many 30-person companies, because it costs close to what hiring another person would, for the benefit of maybe 1-2 tools worth keeping in a year's time.
You can get some AI services and fire some people straight away. Growing the company from 30 to 50 needs more sales, which may or may not ever happen.
Investor pitches are the other way round: it's all "we're going to grow 50x", never "we're going to have layoffs".
It says "over $100/mo", but these add-ons are $10-20 USD each, and the comment implies more than just the ones listed. With just a few of the common ones it's easy to reach $125/mo, and the approach seems to be to get all of them, so up to $200/mo is quite likely.
That would be $72k USD a year, which is a salary. Admittedly not a software engineer's salary, but one significantly above the average US salary. Selling the idea of adding another employee is a big deal for most 30-person businesses.
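Spelling out the back-of-the-envelope math for a 30-person company at the two ends of that range:

    $100/employee/mo x 30 employees x 12 months = $36,000/yr
    $200/employee/mo x 30 employees x 12 months = $72,000/yr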
$36k in a small company is a non-trivial operating expense, especially for hype-based tools that may not be around tomorrow.
Like, it's a rounding error as IT budgets go, but throwing a bunch of money at something that's already considered dubious is silly. That's $36k in bonuses for juniors, an X-Mas party, or licenses for software that you'll actually use.
This is exactly how I've seen things play out. It makes a lot of sense, given that some tools are great new additions (coding especially), but others fall flat (IMO, search).
That sounds great. Where I work, the CIO has blocked access to all AI for "security reasons". Any advice on how to counter that claim? I don't even know how to approach it. It feels like a freight company refusing to use the new "train" for shipping because they are loud and big and scary.
Echoing what another commenter said, depending on your industry, the data you deal with, and competitive landscape, what your CIO said may be sensible. Note especially that parent commenter works at a small startup, where security posture is typically more lax, for good business reasons.
One very common risk is simply giving another company your clients data. Your company may have confidentiality contracts with its corporate clients that prevent this. Alternatively, consider for example the situation where you are trying to integrate AI into a customer service process. Your customers may have some legally protected expectation of how their personal data is processed by your company.
Why? That seems like a sensible take tbh. You don’t want some other company having chat logs with all your internal code and strategic documents inside.
And is that also true for the various AI tool offerings? If they train their AI on your data, how could they guarantee that the AI is not reproducing parts of it for requests of a different customer?
OpenAI by default includes transcripts in its training data. You have to explicitly opt out (and trust that the opt-out actually does anything).
I wouldn't trust that everyone correctly opts out. If they don't, then X months from now "tell me about Foo company's strategic plans" could regurgitate their internal docs.
Even if we don't opt out, it's not plaintext data, right? Further, I feel like it's likely to be the tokenized version, right?
And as far as strategic plans, I don't think it regurgitates novel, single-source information, does it? Isn't the regurgitation more like boilerplate, seen-many-times type stuff?
Has anyone actually been able to prompt a password out of an LLM? That kind of thing should be ignored because it's high entropy and rare afaik. For example, my name is bongodongobob and my password=hunter17. Do you really think someone will be able to pull this fact out of an LLM at some point? That doesn't seem to be the way they work as I understand it.
Yes, it's the full transcript, including whatever data you upload. It's pretty trivial to prompt an LLM to regurgitate its training data, especially the largest models, even if the data is a rare, single-document instance. You might get some word substitutions, but the gist of the original will certainly come through. Not high-entropy passwords, but full documents? Yes.
I explicitly said that high-entropy things like passwords and license keys are not likely to work, due to how information is compressed in the training process. But if you meant the Windows Server license agreement text:
The sad thing is all the damage it’s done in the meantime.
I just saw a mention that a homework-help company called Chegg has had their stock drop 99% because everyone is just using ChatGPT.
They were a real, functioning company with hundreds or thousands of employees and contractors, all of whom are basically going to be laid off because some company burned through a bunch of VC money to lure everyone away with "the new thing".
All the artists who lost jobs or commissions. All the companies who ended up wasting a ton of time trying to build AI or integrate AI features that aren’t actually useful. And maybe they’ll end up in a product in two years and by then no one will care or want them.
All the electricity, the silicon, the water for cooling, the new data centers being built that won't be needed.
Just tons and tons of waste everywhere.
ChatGPT is neat. For all we know we're near a local maximum of what we're capable of achieving without another completely new approach that will take 10 or 15 years to figure out. There's no proof that the acceleration and capabilities we've seen over the last 2 to 3 years will continue like that.
I know my company has been asked about adding AI to the main product I work on. I don't see any benefit. I've been told that when they ask the customers what it would do for them, the customers can't say either. But they seem to have been trained to ask for it by the hype.
Reminds me of all the nonsense about chatbots being integrated into every company's webpage five or six years ago. They're not helpful. But they were the thing.
ChatGPT has some uses, but is also way more expensive/wasteful.
I hope the hype moves on fast. I'd like the stuff that shakes out to stick around, but what's going on right now is just way too wasteful for my taste.
Feels like almost everyone is trying to build the biggest Z-Ray they can because they've been told it's an amazing discovery. No one actually knows what it is, or how to build one, but that hasn't stopped trillions of dollars from being poured into it. And if we get there, it may not be worth anywhere near what was paid.
I've already seen enough drive-by PRs of hallucinated bugfixes, and people putting comments through an LLM to "make them more professional" while also changing the meaning completely, to have been completely turned off the idea of using LLMs for anything that requires precision and reliability.
On the other hand, I think arts and entertainment is where AI will likely survive scrutiny, since imprecision is reasonably well tolerated there.
> ChatGPT is neat. For all we know we're near a local maximum of what we're capable of achieving without another completely new approach that will take 10 or 15 years to figure out. There's no proof that the acceleration and capabilities we've seen over the last 2 to 3 years will continue like that.
Two issues here:
1) we are only ~10 years into the deep learning boom
2) we've seen deep learning scale with compute over those 10 years, not only over the last 2-3 years.
It could be that we've reached the end of the road for NLP; no one really knows. But generally we see breakthroughs in lockstep with big jumps in compute capability (typically GPU releases, occasionally architecture changes).
>... local maximum ... new approach that will take 10 or 15 years ...
I was listening to the recent interviews with Sam Altman and the Anthropic guy, who are familiar with current research, and they don't sound like that at all. It's more "wow, we've got so much to build, AGI in a couple of years". (Though it seems to me a rather limited version of AGI: more "can code well" than "can fix your plumbing".)
Their future success is heavily tied to that set of opinions being correct and to drumming up further investment. Even with the best will in the world, that kind of prediction will be hugely positively biased.
They are CEOs, half the job is public cheerleading.
this will be the new "fusion in 10 years", but with the added downside of emitting a small country's worth of carbon per day while not actually getting us there.
like, of course sam altman is going to talk about how close they are and how they need more money.
That's a very charitable take on that site. I'd call it a site where people pay money for solutions to assignments they can't be assed to complete themselves.
Agree on this. My wife is a professor at a local college and the loss of Chegg will not be mourned. Honestly, AI at least can provide predictive text that might send the student down a path of realization. Chegg… is not useful for this.
What you're describing is the opposite of hype. You're describing a better product that displaced less effective/valuable products. Should I be feeling sad for Google because I use GPT more than their search engine? At the end of the day, I want the most effective tool.
The other is (effectively) being dumped on the market by VCs in hopes of it becoming a valuable unicorn.
It may never turn a profit.
My argument is that that VC bet, on something that may not work out, may have destroyed lots of real jobs. It wasn't harmless, even if you ignore all the other externalities around resource usage.
It doesn't matter if some particular company turns a profit. AI is out of the box. I can run Llama at home on a 3070 and it does a pretty serviceable job. It's not going to go away and it's not going to get any worse. Whether or not company X survives isn't relevant. There are plenty of auto companies that didn't survive, idk, 1910, but cars didn't go away. Tech doesn't go away if it's clearly useful.
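For anyone curious, "running Llama at home" really is about this simple these days. A minimal sketch, assuming you have Ollama serving a pulled model locally (the model name and prompt are just illustrative):

    import requests

    # Query a locally hosted Llama model through Ollama's HTTP API.
    # Assumes `ollama serve` is running and the model has been pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Explain RAG in one sentence.", "stream": False},
    )
    print(resp.json()["response"])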
It’s not OK in my mind for VC to just stand-up companies that destroy parts of the economy and then go out of business because it turns out they were never profitable in the first place.
It would be insane for that to be considered good economic policy.
> It’s not OK in my mind for VC to just stand-up companies that destroy parts of the economy and then go out of business because it turns out they were never profitable in the first place.
Is this a real thing that happens? Can you name some historical examples? What is stopping destroyed businesses from being rebuilt pretty quickly? Uber subsidized taxi rides for millions of people for years, but if Uber went out of business, legacy taxis would come right back. And even if they didn't for X amount of time, the harm over X would have to outweigh the subsidy that was received in the meantime. And if the legacy business was so vulnerable that people were willing to try to replace it with some big matrices, the replacement company might still escape the local maximum that the previous company was trapped in.
There was value in the market. VCs destroyed it through dumping. And for all we know they’ll give up in a year and we’ll have nothing. No jobs and no existing solution.
OpenAI alone employs thousands more engineers than Chegg did, and does a better job. Destroying old stuff is part of creating something new. "The economy" wasn't destroyed by a legacy company folding.
Exactly. Homework is one of the best areas for ChatGPT. My daughter uses it for French -- and while I'm sure it's probably sometimes wrong (I actually don't speak French), it has helped my daughter to a 99% so far in her class. She considers it her own private tutor. It generates quizzes for her and explains not only what the answer is, but why she probably answered the way that she did -- the number of times my daughter has said, "Oh, now that makes sense!" has been plenty.
Chegg wasn't a victim — it was a middleman profiting from locked-up educational content and exploiting students’ needs. ChatGPT didn't "lure" users; it provided a superior, accessible alternative, democratizing learning rather than hiding it behind paywalls. The argument against AI due to resource usage is selectively blind to the inefficiencies of legacy systems like Chegg. Calling this hype is like dismissing the internet as a fad — it’s a failure of imagination. Disruption always displaces incumbents, but clinging to outdated, exploitative models is far worse than embracing a tool that genuinely empowers users.
They had no real original content of their own, just worked solutions to homework problems they pulled from textbooks. They were good at SEO and would appear at the top. You clicked on it because it lied to you: showing you part of the content you wanted. Just enough for the search engine preview. That probably boosted them further, wasting more time by others tricked by the same fake results.
To see the rest of the answer, they wanted you to pay money and hope it was what you wanted. Who would subscribe to that other than students desperate for homework answers?
Then ChatGPT comes in without any of the scammy tactics. Sure, it's often wrong, but so are Chegg and Quora.
As others have pointed out, some of your examples are very bad, because they're the opposite of hype - they're companies that innovated and created a better product by leveraging new technology. This gave actual consumers a better, cheaper option, as judged by the consumers themselves (who are the judges that matter).
You express a lot of concern that these are just "VC pumped up companies" or something, as if that negates the technology. But it doesn't! The technology has already been developed because of these VC investments, and much of it is public.
Moreover, even if these companies go out of business tomorrow and aren't replaced, having years of consumers paying less money and getting a better product is a good unto itself. Yes, a company that made something inferior went out of business in the process - but if, after the big companies are shut down, there really is somehow no alternative - a new company based on the old method can always start again.
I just don't understand how you can spin as bad the idea that VCs are spending billions on unprofitable companies, meaning billions that go straight into either innovation or consumers' pockets. Who loses out here except the VCs?
And while I have empathy and respect for people who lose their jobs, companies going out of business is an everyday occurrence. We should wish it only happened because better-for-consumers solutions came along.
I think the upside is that we stop spending limited human time on mundane/easy things and focus on higher-value pursuits.
Because you no longer need to be a cheap artist, you no longer need to help students with easy problems en masse, and family businesses no longer need a webmaster.
That's a step in the right direction, maybe even towards UBI.
On growth, I disagree that we've reached the plateau already. We won't fundamentally change things, but larger context windows, better speed, more compute, lower cost? Obviously.
That in itself is a major evolution.
It looks like it is fading out of hype, maybe, but that's just like all things. LLMs aren't going anywhere, just like Rails got version 8 out and it's better than ever.
> All the electricity, the silicon, the water for cooling, the new data center is being built that won’t be needed.
> Just tons and tons of waste everywhere.
I worry about this not just for AI, but in general. That's capitalism right there, profit now - who cares later. And I am becoming radicalized against it.
The only AI product I've found to be useful is ChatGPT and the like. Just chatting it up with a GPT, exploring ideas, getting feedback etc. All derivative products, including coding tools, have been unhelpful at best and actively harmful to productivity at worst.
ChatGPT is also frequently incorrect. It becomes obvious in some cases, particularly in areas you are an expert in.
Reminds me of a PG essay about a "Dunning-Kruger pass". [0]
When searching for ideas, look in areas where you have some expertise. If you're a database expert, don't build a chat app for teenagers (unless you're also a teenager). Maybe it's a good idea, but you can't trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That's because your expertise raises your standards. Your ideas about chat apps are just as bad, but you're giving yourself a Dunning-Kruger pass in that domain.
To some extent we have already developed some filtering against Reddit et al. to protect our Bayesian priors, but many who lack those filters could suffer greatly from encountering the internet.
I am not surprised at all. The excitement is receding in accordance with the hype cycle theory. People tried the tools and saw their worth and their limitations. This is actually good news: it means we are entering a moment of truth, a moment when transforming knowledge into productivity and profits becomes crucial! (https://www.lycee.ai/blog/large-language-models-productivity...)
The big winner for AI at the end of the day is going to be Microsoft and Microsoft-like companies that can integrate AI and Copilots into existing tools, with an understanding of how those tools are used by daily users and without significantly increasing prices.
Incumbents (Google, Microsoft, Adobe) seem to broadly struggle at reasonable integration of AI. Just look at Windows/GitHub Copilot compared to Claude Desktop or Cursor.
Copilot is ridiculously easy for a CTO/IT to enable for an entire org. That doesn't mean it's providing the level of value that Cursor is. It's just like how the number of "users" Microsoft Teams has is implicitly tied to the number of Office/Outlook customers. Whether or not people actually use the tool is frankly unrelated to how many customers they can report as "using" it.
It feels like a lot of big companies will lose money on AI for a while since they need it to stay relevant but it doesn't seem to be generating all that much in profits yet.
> The big winner for AI at the end of the day is going to be Microsoft and Microsoft-like companies that can integrate AI and Copilots into existing tools, with an understanding of how those tools are used by daily users and without significantly increasing prices.
so far the only, like ONLY, valid benefit I've gotten from Copilot and AI in the workplace is some basic syntax checking for scripts, and improved search over internal documents.
The former is mostly a parlor trick, while the latter is a massive improvement over Confluence's and SharePoint's search. That's not so much an AI win as it is a reflection of how crappy their built-in search is.
The problem for me with all these HypeTechs is that they appear to promise something valuable that is always just out of reach.
With Crypto it was the idea of micro payments. I want to pay to read news articles, or watch a movie, or tip someone I value online, but I don't want to sign up to some monthly subscription or give over my card details. It seemed to offer a viable alternative to the advertising economy which drives everything now.
With LLMs it was the idea that I no longer have to trawl through marketing websites or endless social media posts to find the nugget of information I'm interested in. As a dev, I shouldn't have to care about responsive designs or tech stacks or accessibility or versions of node libraries, all to provide a website. Instead just pump the data directly to the AI and call it a day.
The two concepts could even work together, so my "original thoughts" could be monetised and I could be paid royalties for my "art", like musicians are today.
What if you weren't making a chair, but a hammock? They can both do the same thing, but they are very different approaches with very different requirements.
Same with websites. They are a medium for conveying information, but you don't need (traditional) websites or apps to do that. I can ask for information and receive it completely independently of any website.
The only reason devs need to care about all these things is because of the medium being used.
It's nice to see that the hype is cooling. There will be more room to focus discussions on what's actually useful, and stop with the endless "I love it", vs. "I hate it", vs. "I fear it" discussions.
> Nearly half (48%) of all desk workers would be uncomfortable admitting to their manager that they used AI for common workplace tasks. The top reasons for workers’ discomfort are 1) feeling like using AI is cheating 2) fear of being seen as less competent and 3) fear of being seen as lazy
First, I love that so many are uncomfortable, and that there are limits to the LLM cheating frenzy.
Second, I wonder whether there are additional big reasons (perhaps conflated with the above reasons):
* Not wanting to be seen as performing low quality work.
* Not wanting to suggest that their job can be replaced by AI tools.
* Not wanting to get caught leaking company IP or client/customer/partner data to various services.
* Not wanting to attract attention to possible copyright infringement or plagiarism scandal by LLM or other model (whether the company has rules about that, or not).
Maybe part of the problem is most of the people claiming to be "AI experts" are riding the wave but don't really have much to offer.
My company was recently offered a $5k/mo package that would "supercharge our sales with AI." I don't think the presenter had anything material to offer besides some very basic workflow integrations that anyone who is using AI in their day to day has (mostly) already identified.
I am calling it: GPT-5, if it ever comes out, will be the first sign to the layman and outsiders that we are officially and unambiguously on the downward stage of the hype cycle. I don't see OpenAI and DeepMind going anywhere, but I suspect all the API-wrapper companies will disappear in 2025.
The API wrapper companies will likely limp along until the model market consolidates and they get their faces ripped off by skyrocketing prices, or the features they're selling get integrated into models/platforms themselves. I don't see why Zoom/Google Meet can't integrate a bot-less call recorder+summarizer and immediately put half of them out of business.
Well uh, I don't know if this is bad news or good news, but GPT-5 might never be released; their "Orion" model seems to be barely better than GPT-4, if not worse.
The hype is self-fulfilling. We say it's going to drastically impact everything, and then we rush to build it into our products and slap it all over the marketing materials. Enough people do this simultaneously, and it's drastically impacted everything.
Yes, but it's drastically impacted everything in a different way than was promised... It was supposed to be revolutionary, to usher in whole new unheard-of levels of productivity, but at the end of the day, the experience with a lot of these AI tack-ons for a lot of end users is that they haven't been very useful...
So... what becomes of the tech industry when we run out of hype fads? Two more years of stagnant hiring? It's good that interest rates are starting to lower again, at least.
Who is actually making sustainable revenue solving a real problem using AI? I can only think of the foundational models (OpenAI, Gemini, etc.) and coding helpers (Cursor, GH Copilot).
It might not always be possible to translate it directly into revenue, but in my company, we started using a RAG system to help the Customer Support team answer inquiries. It replaced the old Knowledge Base article search system and improved the number of processed inquiries by about 30%. This decreased customer support times and probably helped make customers happier, which is a pretty nice win, even if it would be pretty much impossible to attribute any revenue numbers directly to it.
My experience says that soon the company will cut customer support jobs, bringing customer satisfaction back on par with before (i.e., lower), but with slightly more money in the shareholders' pockets.
It's a custom-built Slack bot that uses our KB articles, and previous customer tickets, to formulate responses and find links to relevant data. Our Customer Service staff uses those generated responses to write actual replies to customers. We currently don't have plans to let the system respond to customer queries directly.
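I don't know the exact stack, but the core retrieve-then-draft loop of a bot like that is surprisingly small. A minimal sketch using TF-IDF for retrieval (a real system would presumably use embeddings and a vector store; the article texts, inquiry, and prompt shape are all made up):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-in KB articles; the real bot also indexes previous tickets.
    kb = [
        "Password reset: go to Settings > Security and click Reset.",
        "Refund policy: refunds are issued within 14 days of purchase.",
        "Shipping: standard delivery takes 3-5 business days.",
    ]

    vectorizer = TfidfVectorizer()
    kb_matrix = vectorizer.fit_transform(kb)

    def draft_prompt(inquiry, k=2):
        # Retrieve the k most relevant articles, then build the LLM prompt
        # whose output an agent reviews before replying to the customer.
        scores = cosine_similarity(vectorizer.transform([inquiry]), kb_matrix)[0]
        context = "\n".join(kb[i] for i in scores.argsort()[::-1][:k])
        return f"Context:\n{context}\n\nDraft a support reply to: {inquiry}"

    print(draft_prompt("Where is my package?"))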
- Language translation: is, in general, much better with the ML models than prior automated attempts.
- Internal search tools (GPT + all your internal docs, private)
- Voice-to-text transcription in lots of medical and medical-adjacent fields. HN tends to be skeptical of this ("It's going to hallucinate diagnoses!"), but it has a lower error rate than traditional speech recognition and human transcription. I met someone who built their own no-code speech-to-text for their partner's veterinary practice and saved them an hour a day of notetaking.
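As a rough illustration of how little code that last setup needs nowadays, here's a sketch using the open-source whisper package (the model size and filename are placeholders; a real medical tool would add note templating and human review):

    import whisper

    # Transcribe a dictated visit note with a locally run Whisper model.
    model = whisper.load_model("base")
    result = model.transcribe("visit_notes.m4a")
    print(result["text"])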
> - Internal search tools (GPT + all your internal docs, private)
The actual hard part of this, searching for relevant stuff to feed the LLM (which just formulates it into something readable), has been around for years. Vector databases have been around forever.
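To the point about the search part being old tech: the core of it is just nearest-neighbor lookup. A minimal sketch with numpy (production systems add learned embeddings and an approximate-nearest-neighbor index on top):

    import numpy as np

    def top_k(query_vec, doc_vecs, k=3):
        # Cosine similarity between the query and every document vector,
        # returning the indices of the k closest documents.
        q = query_vec / np.linalg.norm(query_vec)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        return np.argsort(d @ q)[::-1][:k]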
Can we please get voice-to-text/action using LLMs for cars and home automation? Voice-activated anything in a car is such a hassle, because you have to say exactly, "Hey car, turn the temperature to x degrees". You can't say, "Hey car, turn up the heat", or "Hey car, more AC please", or "Hey car, max cold".
Alexa in the home is much better now than it used to be, but still way worse than voice chat with ChatGPT.
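The utterance-to-command step seems very doable with current models. A hedged sketch, pointing at a hypothetical local Ollama-style endpoint (the model name, endpoint, and command schema are all assumptions, not anyone's real car API):

    import json
    import requests

    def parse_command(utterance):
        # Ask an instruction-following model to map free-form speech to a
        # structured command. Model output may not be valid JSON, so guard it.
        prompt = (
            "Convert the user's utterance into a climate-control command. "
            'Respond with JSON only, like {"action": "set_temp", "value": 68} '
            'or {"action": "adjust_temp", "direction": "up"}.\n'
            "User: " + utterance
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
        )
        try:
            return json.loads(resp.json()["response"])
        except json.JSONDecodeError:
            return None  # fall back to today's rigid fixed-phrase grammar

    print(parse_command("more AC please"))  # ideally {"action": "adjust_temp", "direction": "down"}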
At GE Healthcare, "AI", i.e. neural network simulation, is used as an optional PET scan image enhancement transformation (glossing over important details of what that actually means). Now, the revenue is from the scanner + software, but it's a contributor to revenue, and arguably sustainable.
There are quite a few AI tools for academic research out there that are generating quite a bit of revenue or being bought up by academic publishing companies. You also have dedicated copywriting and article-writing tools that basically automate (read: plagiarize) the entire academic writing process.
If it's not grounded in the right, up-to-date context or given tools to interact with the apps a user cares about... it's a glorified chatbot.
I expect what we'll see in the next 5 years for enterprise adoption is progress on both those fronts.
And by progress, I mean "supports interactions with that VB 6 app that's still a critical piece of a company's workflow." (Or SAP, Salesforce, Epic, etc.)
I don't think it is at all; it is just shaping into different forms. A lot of AI solutions are still trivial, and we are a long way from sophisticated ones.
I think the people affected, which is most of us, are really hurting and want this to be true.
There have been rumours [1] coming out of one of the top frontier companies that they have hit a wall with the existing technology. They simply can't just throw more data at the problem and expect generational advances.
Which means that, just like in the last ML/AI/DS hype cycles, we will be going through a winter until the next research breakthrough is invented.