This is just a bad article, bordering on blogspam. How could that possibly make them go bankrupt when they've secured $10 billion in funding just this year?
Also, no one forces them to provide ChatGPT for free. If that were actually going to bankrupt them, they would just stop doing that.
They are eating the cost because it provides them with the largest dataset of human feedback in existence. Is that worth $700k a day? I don't know, but they seem to think so, otherwise they'd stop doing it.
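For scale, that figure annualizes roughly as follows (a back-of-the-envelope sketch; the $700k/day number comes from the article and is assumed to hold year-round):

```python
# Back-of-the-envelope: annualize the reported $700k/day serving cost.
daily_cost = 700_000               # USD per day, as reported
annual_cost = daily_cost * 365     # ignores growth and cost changes
print(f"~${annual_cost / 1e6:.1f}M per year")  # ~$255.5M per year
```

That's a lot of money for a dataset, but not obviously fatal next to $10B in funding.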
Is feedback the only reason to subsidize this product?
Making AI tools free for 12-18 months would be enough time for individuals to become reliant on them in their workflows, and enough time to discover the full range of their potential. At that point the vendor can withdraw the bridge and monetise access.
I would see this as a big VC bet. Will it pay off? Maybe!
Sam Altman has said that ChatGPT is break-even on compute. Maybe he meant ChatGPT Pro, not sure, but the idea that they're bleeding money because of the number of users doesn't seem to be the case.
Obvs that's still a recipe to lose a lot of money on talent, training and other things. But they have a lot of room to increase prices.
Also the idea that a small drop in usage over the summer indicates a problem is nonsensical. Most new products have a big launch spike and then usage collapses! See how Threads usage spiked at launch and then dropped like a stone. If ChatGPT has only lost a bit of traffic over the summer (which is slow time anyway) then they're doing great.
Many here are not accounting for other costs such as training, fine-tuning and inflated salaries, and most importantly the competition from other cloud-based AI models, especially the $0 or near-free models and their services.
Eventually, competitors will catch up, GPT-5 will take much longer to train, and right now they cannot raise prices as their current AI models begin to degrade.
Bankruptcy in 2024 is extremely unlikely, but the AI bros reacting here that it is fine to burn $2B+ a year against a $10B investment, with rapidly increasing costs for one AI model and no profitability in sight, is beyond amusing. (And of course the AI bros flagged the post.)
The only result I can predict is that OpenAI will have to IPO within the next 4-6 years, or Microsoft just acquires them outright.
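Taking the burn and funding figures above at face value (a rough sketch; both numbers are the commenter's claims, not audited financials), the implied runway is:

```python
# Rough runway from the figures above: $10B raised vs. $2B/year burn.
# Both numbers are the commenter's claims, not audited financials.
funding = 10_000_000_000      # USD
annual_burn = 2_000_000_000   # USD per year
runway_years = funding / annual_burn
print(runway_years)  # 5.0
```

Which is roughly consistent with a forced exit (IPO or acquisition) on a 4-6 year horizon, assuming costs don't grow.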
I don't get why it's not fine. I've worked at everything from small companies (under $100M revenue, but profitable) to large tech companies that burn money. At the end of the day, it's a business strategy. The money they're "burning" is an investment. Whether it crashes and burns or rockets to the moon is part of the risk.
> Some tech experts would even go as far as to say that Altman is having a Frankenstein moment: one where he is somewhat regretful of the monster that he has created, although it seems that would be a far-fetched reading of the situation
Please fire those experts for not recognizing an obvious attempt at regulatory capture.
That article does seem to be jumping to conclusions. I don't think you can assume that the expense of running the free models will stay constant. And they can restrict the free version more if necessary; so far they obviously haven't wanted to.
It'll be interesting how this all turns out, and the business model might be difficult depending on how things develop. For example if the fair use arguments are seriously challenged, that might make things very complicated for OpenAI and add a lot of uncertainty for potential customers.
Would be interesting to know the internal projections on revenue and P&L if the free model were reduced in ability and Plus increased in price.
As a paying user, I'm often scratching my head as to why I pay, considering the very capable free options even from just OpenAI/Bing. It's annoying seeing my peers meh-ing at the whole thing and getting by without my outlay.
For those of us who see this as a significantly useful and very cheap service the current strategy doesn't appear to have any goal beyond making OpenAI the Google of AI.
That is probably a sound approach, but this is one of the most phenomenal products in living memory; I'm happy to see them capitalise on that in a reasonable and realistic charging model, and am curious as to what they see as the future path - bankruptcy is obviously not the intent.
idk, many companies lose a lot of money and are dependent on outside investment for a long time. they clearly have a category-defining product that has captured the public's imagination and changed people's workflows. i would be very surprised if the server bills made them go bankrupt
ChatGPT may not be profitable (though I highly doubt that), but their inference API certainly is; they literally price it so that it's profitable, and many companies are using it.
The current large language models have successfully exploited a zero-day in human cognition. Their output closely resembles high-quality writing, so our psyche considers it high quality too. (See Kahneman's Thinking, Fast and Slow for more on this.) This is extremely dangerous, because there is little to no actual information in it -- sometimes this can be revealed by negating the prompt and getting the same answer. I can't find the article investigating this right now, but there was one. This shouldn't be surprising, since all the system is capable of is producing something the answer would sound like. Yes, sometimes that happens to be the answer, but unless you already know how to evaluate it, there's no way to tell.
The biggest danger lies in the future. According to https://www.npr.org/sections/health-shots/2022/05/13/1098071... some 300,000 COVID deaths could have been avoided if all adults had gotten vaccinated, and without a doubt anti-vax propaganda played a role in the low vaccination rate. We know only 12 people were behind most of that propaganda https://www.npr.org/2021/05/13/996570855/disinformation-doze... Now imagine the next pandemic, when this misinformation will be produced on an industrial scale using these LLMs. Literally millions will die because of it.
Because then we will have even more economic growth based upon providing people things they don't need, and we will have even more advanced AI that will make human interaction even more unnatural.