Hacker News | gitremote's comments

These numbers are off.

> $20/month ChatGPT Pro user: Heavy daily usage but token-limited

ChatGPT Pro is $200/month, and in January 2025 Sam Altman admitted that OpenAI was losing money on Pro subscriptions:

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected."

- Sam Altman, January 6, 2025

https://xcancel.com/sama/status/1876104315296968813


That doesn't seem compatible with what he stated more recently:

> We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.

Source: https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...

His possible incentives and the fact OpenAI isn't a public company simply make it hard for us to gauge which of these statements is closer to the truth.


Does anybody really think, at this point, that what a CEO says has anything to do with reality rather than just hype, à la the Elon recipe?

Specifically, a connected CEO in post-law America.

This sort of thing used to be called fraud, but there's zero chance of criminal prosecution.


Criminal prosecution? This scheme has been perfected; what exactly would you prosecute? Can you say with certainty that he means it's profitable overall? What if he means it's profitable right now, today, but not yesterday or over the last week? Or what if he meant the mean user is profitable? There is so much room for interpretation; that's why there is no risk for them.

> That doesn't seem compatible with what he stated more recently:

Profitable on inference doesn't mean they aren't losing money on pro plans. What's not compatible?

The API requests are likely making more money.


Yes, API pricing is usage based, but ChatGPT Pro pricing is a flat rate for a time period.

The question is then whether SaaS companies paying for GPT API pricing are profitable if they charge their users a flat rate for a time period. If their users trigger inference too much, they would also lose money.
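
A back-of-the-envelope sketch of that break-even (all prices here are hypothetical, chosen only for illustration, not real API rates):

  # Break-even sketch for a flat-rate plan backed by per-token API pricing.
  # All numbers are hypothetical, for illustration only.
  FLAT_FEE = 200.00            # $/month charged to the end user
  INPUT_PRICE = 2.50 / 1e6     # $ per input token ($2.50 per 1M tokens)
  OUTPUT_PRICE = 10.00 / 1e6   # $ per output token ($10.00 per 1M tokens)

  def monthly_cost(input_tokens: int, output_tokens: int) -> float:
      """Per-token API cost incurred by one subscriber in a month."""
      return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

  light = monthly_cost(2_000_000, 500_000)      # $10.00  -> margin +$190.00
  heavy = monthly_cost(40_000_000, 15_000_000)  # $250.00 -> margin -$50.00
  print(FLAT_FEE - light, FLAT_FEE - heavy)

The flat fee is a bet on the average; every subscriber above the break-even token volume is served below cost.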


This can be true if you assume that there is a large number of $20 subscribers who don't use the product much, while the $200 subscribers squeeze out every last bit and then some. The balance could still be positive, but the power users alone might cost more than they pay.

They might even have decided “hey, these power users are willing to try and tell us what LLMs are useful for, and are even willing to pay us for the opportunity!”

> If we didn't pay for training

It is comical that something like this was even uttered in the conversation. It really shows how disconnected the tech sector is from the real world.

Imagine Intel's CEO saying "If we didn't have to pay for fabs, we'd be a very profitable company," even in passing. He'd be ridiculed.


I'm not entirely sure the analogy is fair - Amazon, for example, was 'ridiculed' for being hugely unprofitable for its first decade, but had underlying profitability if you removed capex.

As a counterpoint, if OpenAI were actually profitable at this early stage that could be a bad financial decision - it might mean that they aren't investing enough in what is an incredibly fierce and capital-intensive market.


He's also admitting that this business would be impossible if they had to respect copyright law, so the laws shall be adjusted so that it can remain a business.

I just straight up don't trust him

Saying that is the equivalent of him saying "our product is really valuable! use it!"


There's the usual issue of a CEO "talking their book" but there's also the fact that Sam has a rich, documented history of lying. That was the central issue of his firing. "Empire of AI" has a detailed account of this. He would outright tell board member A that "board member B said X", based on his knowledge of the social dynamics of the board he assumed that A and B would never talk. But they eventually figured it out, it unraveled, and they confronted him in a group. Specifically, when they confronted him about telling Ilya Sutskever that Tasha McCauley said Helen Toner should step off the board, McCauley said "I never said that" and Altman was at a loss for words for a minute before finally mumbling "Well, I thought you could have said that. I don't know."

That is my interpretation, that it's a marketing attempt. A form of "The value of our product is so good that it's losing us money. It's practically the Costco hotdog combo!".

Doesn't he have an incentive to make it look like that, though? The way he phrased it, that they are losing money because people use it so much, makes it seem like Pro subscribers are super power-users. As long as inference has a positive cost, that case will lose money, so Sam isn't admitting that the business model is flawed or anything.

https://news.ycombinator.com/item?id=45053741

> The most likely situation is a power law curve where the vast majority of users don't use it much at all and the top 10% of users account for 90% of the usage.

That'll be the Pro users. My wife uses her regular sub very lightly, most people will be like her...
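
For intuition, a tiny simulation of how a heavy-tailed usage distribution (Pareto here, an assumption rather than real usage data) concentrates load in the top decile:

  # Toy model of "the top 10% of users account for ~90% of the usage".
  # The Pareto shape parameter is made up; this is not OpenAI data.
  import random

  random.seed(0)
  usage = sorted((random.paretovariate(1.05) for _ in range(100_000)),
                 reverse=True)
  top_decile = sum(usage[: len(usage) // 10])
  print(f"top 10% of users -> {100 * top_decile / sum(usage):.0f}% of usage")
  # With a tail this heavy, the top decile typically carries the large
  # majority of total usage; those are the users a flat fee subsidizes.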


Anyone paying attention should have zero trust in what Sam Altman says.

What do you think his strategy is? He has to make money at some point.

I don’t buy the logic that he will “scam” his investors and run away at some point.


He makes money by convincing people to buy OpenAI stock.

If OpenAI goes down tomorrow, he will be just fine. His incentive is to sell the stock, not actually build and run a profitable business.

Look at Adam Neumann as an example of how to lose billions of investor dollars and still walk out of the ensuing crash with over a billion.

https://en.wikipedia.org/wiki/Adam_Neumann

His strategy is to sell OpenAI stock like it was Bitcoin in 2020, and if for some reason the market decides that maybe a company that loses large amounts of cash isn't actually a good investment... he'll be fine, he's had plenty of time to turn some of his stock into money :)


Why not build a profitable business like Zucc, Bill Gates, Jensen, Sergey, etc.? These people are way richer and much more powerful.

I believe, but have no proof, that the answer is "because it's easier to sell stock in an unprofitable business than build a profitable one", although given the other comment, there's a good chance I'm wrong about this :)

Altman doesn't have any stock. He's playing a game at a level people caught up on "capitalism bad" can't even conceptualize.

I'm more "capitalism good" (8 billion people on earth, 7 billion can read, 5 billion have internet, and almost no one dies in childbirth anymore in rich countries, which is several billion people), but that is really interesting that he has no stock and just gets salary.

I guess if other people buying stock in your company is what enables you to have a super-high salary (plus benefits like the company plane, etc.), you are still kind of selling stock. And honestly, having considered the "start a random software company aligned with the current trend (~2015 DevOps/cloud, 2020 cryptocurrency/blockchain, 2024 AI/ML), pay myself a million-dollar salary, and close shop after 5 years because 'no market lol'" route to riches myself, I still wouldn't consider Altman completely free of perverse incentives here :)

Still, very glad you pointed that out, thanks for sharing that information ^^


Again incorrect. He doesn’t have a super high salary.

Holy shit you are right. He owns no equity and just gets a salary. I have no idea about the game he’s playing.

> He has to make money at some point.

Yes, but two paths to doing that are to a) build a profitable company, and b) accumulate personal wealth and walk away from a non-profitable company.

I'm not saying OpenAI is unprofitable, but nor do I see Altman as the sort who'd rule out option b.


Trusting the man about costs would be even more misplaced than trusting an oil company's CEO about the environment.

That's interesting but it doesn't mean they're losing money on the $20/month users. The Pro plan selects for heavy-usage enthusiasts.

Losing money on o1-pro makes sense, and it's also why they axed that entire class of models.

Every o1-pro and o1-preview inference was a normal inference multiplied by however many replica paths they ran.


Apologies, should be Plus. I'll update the article later.

I don't think the author meant they don't include /v1 in the endpoint in the beginning. The point is that you should do everything to avoid having a /v2, because you would have to maintain two versions for every bug fix, which means making the same code change in two places or having extra conditional logic multiplied against any existing or new conditional logic. The code bases that support multiple versions look like spaghetti code, and it usually means that /v1 was not designed with future compatibility in mind.

If you really care about maintaining v1 long-term you'd re-implement it as a small shim above v2.
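
A minimal sketch of that shim idea (the handler names and response shapes here are invented for illustration):

  # Keep /v1 alive as a thin adapter over /v2: bug fixes land once, in v2.
  # Handler names and response shapes are hypothetical.
  def handle_v2(user_id: str) -> dict:
      """The maintained implementation."""
      return {"user": {"id": user_id, "display_name": "Ada"}, "schema": 2}

  def handle_v1(user_id: str) -> dict:
      """Compatibility shim: call v2, then reshape to the old contract."""
      v2 = handle_v2(user_id)
      return {"id": v2["user"]["id"], "name": v2["user"]["display_name"]}

  assert handle_v1("42") == {"id": "42", "name": "Ada"}

The shim is pure translation, so v1 inherits every v2 fix for free instead of forking the logic.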

For the vast majority of situations in an English-speaking society, the term for that type of person is a "white supremacist". For example, a person who does Nazi salutes, or wants to update refugee policy so that the majority of immigrants are White South Africans, is a white supremacist.


No, that just describes white supremacists, and ignores Black supremacists, East Asian supremacists, Hispanista supremacists etc.


"You’ve heard of animals chewing off a leg to escape a trap? There’s an animal kind of trick. A human would remain in the trap, endure the pain, feigning death that he might kill the trapper and remove a threat to his kind."

- Dune, the gom jabbar's test for humanity


> Not justifying it, but many applications consider the uniqueness of the URL enough protection to prevent discovery.

Yes, that's why it's the #1 most common web security vulnerability in production code:

https://owasp.org/Top10/A01_2021-Broken_Access_Control/

"Permitting viewing or editing someone else's account, by providing its unique identifier (insecure direct object references)"

What vibe coding promoters don't understand is that the average web developer hasn't learned web security 101. Proof: an HN commenter points out that "A01:2021 – Broken Access Control" is completely normal in production code.
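
For concreteness, the vulnerable pattern and the boring fix side by side (the data model and names are invented):

  # Insecure direct object reference (IDOR) vs. an ownership check.
  # DOCS, IDs, and users are invented for illustration.
  DOCS = {"doc-8f3a2c": {"owner": "alice", "body": "alice's draft"}}

  def get_doc_vulnerable(doc_id: str) -> str:
      # The only "protection" is that the ID is hard to guess. Anyone who
      # obtains it (logs, referrers, shared links) can read the document.
      return DOCS[doc_id]["body"]

  def get_doc_fixed(doc_id: str, session_user: str) -> str:
      doc = DOCS[doc_id]
      # Authorize against the authenticated user, not the secrecy of the ID.
      if doc["owner"] != session_user:
          raise PermissionError("not your document")
      return doc["body"]

  print(get_doc_vulnerable("doc-8f3a2c"))      # leaks to any caller
  print(get_doc_fixed("doc-8f3a2c", "alice"))  # OK
  # get_doc_fixed("doc-8f3a2c", "mallory")     # raises PermissionError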


... in the same way that women's shelters are common but men's shelters are rare looks like a gender war.


This suggests the marketing professors themselves have an overly optimistic view of LLMs, which is why they were surprised that those who understood AI mechanics embraced it less.


Inference contributes to their losses. In January 2025, Altman admitted they are losing money on Pro subscriptions, because people are using it more than they expected (sending more inference requests per month than would be offset by the monthly revenue).

https://xcancel.com/sama/status/1876104315296968813


So people find more value in it than expected, and they'll just raise the price. Meanwhile, they still make more money per inference than they lose.


This assumes that the value obtained by customers is high enough to cover any possible actual cost.

Many current AI uses are low value things or one time things (for example CV generation, which is killing online hiring).


> Many current AI uses are low value things or one time things (for example CV generation, which is killing online hiring).

We are talking about Pro subs who have high usage.


True.

At the end of the day, until at least one of the big providers gives us balance sheet numbers, we don't know where they stand. My current bet is that they're losing money whichever way you dice it.

The hope being as usual that costs go down and the market share gained makes up for it. At which point I wouldn't be shocked by pro licenses running into the several hundred bucks per month.


Currently, they lose more money on inference than they make from Pro subscriptions, because they are essentially renting out the service for a flat monthly fee instead of charging for usage (per token).


Do you have a source for that?


When an end user asks ChatGPT a question, the chatbot application sends the system prompt, user prompt, and context as input tokens to an inference API, and the LLM generates output tokens for the inference API response.

GPT API inference cost (for developers) is per token (sum of input tokens, cached input tokens, and output tokens per 1M used).

https://openai.com/api/pricing/

https://azure.microsoft.com/en-us/pricing/details/cognitive-...

(Inference cost is charged per token even for free models like Meta LLaMa and DeepSeek-R1 on Amazon Bedrock. https://aws.amazon.com/bedrock/pricing/ )
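
To make the per-token mechanics concrete, here is the cost of a single chat turn (the three rates below are placeholders, not quoted OpenAI prices):

  # Cost of one chat turn under per-1M-token pricing.
  # All three rates are placeholders, not real OpenAI prices.
  PER_M = 1_000_000
  RATES = {"input": 2.50 / PER_M,
           "cached_input": 1.25 / PER_M,
           "output": 10.00 / PER_M}

  def request_cost(input_toks: int, cached_toks: int, output_toks: int) -> float:
      return (input_toks * RATES["input"]
              + cached_toks * RATES["cached_input"]
              + output_toks * RATES["output"])

  # System prompt + fresh context as input, cached history, short answer:
  print(f"${request_cost(12_000, 50_000, 800):.4f}")  # -> $0.1005

Long chat sessions resend the whole conversation as input tokens on every turn, which is why heavy chatbot use adds up quickly.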

ChatGPT Pro subscription pricing (the chatbot for end users) is $200/month

https://openai.com/chatgpt/pricing/

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected."

- Sam Altman, January 6, 2025

https://xcancel.com/sama/status/1876104315296968813

Again, this means that the average ChatGPT Pro end user's chattiness costs OpenAI more in inference per month (too many input and output tokens sent and received, respectively) than is offset by the $200/month in revenue OpenAI receives from the average Pro user.

The analogy is Netflix losing money on subscriptions because users stream too much, then banning account sharing; many users cancel, but this actually helps profitability, because those extra users generated more cost than revenue.


> Protip: look for an Asian market in your area for food. ... Discovering the Asian market has been one of the best financial things to happen to me.

Whenever I see this protip, I feel bad for struggling Asians being "validated" that they and their extended families have already fully optimized all their opportunities.


Asians are always min-maxing their whole lives; the moment you hit adulthood, it hits you hard that you've been capped for life.


"The Cloud Act is a law that gives the US government authority to obtain digital data held by US-based tech corporations irrespective of whether that data is stored on servers at home or on foreign soil. It is said to compel these companies, via warrant or subpoena, to accept the request."

https://www.theregister.com/2025/07/25/microsoft_admits_it_c...

