His possible incentives and the fact OpenAI isn't a public company simply make it hard for us to gauge which of these statements is closer to the truth.
Criminal prosecution?
This scheme has been perfected; what exactly would you prosecute? Can you say with certainty that he means it's profitable overall? What if he means it's profitable right now: today it is profitable, but it wasn't yesterday or over the last week? Or what if he meant that if you take the mean user, it's profitable? There's so much room for interpretation, and that's why there is no risk for them.
Yes, API pricing is usage based, but ChatGPT Pro pricing is a flat rate for a time period.
The question is then whether SaaS companies paying for GPT API pricing are profitable if they charge their users a flat rate for a time period. If their users trigger inference too much, they would also lose money.
This can be true if you assume that there exists a high number of $20 subscribers who don't use the product that much, while $200 subscribers squeeze out every last bit and then some. The balance could still be positive, but if you look at the power users alone, they might cost more than they pay.
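To make that concrete, here's a toy model of the flat-rate economics; every number in it is invented for illustration, not a real OpenAI figure:

    # Toy model: flat-rate subscriptions vs. usage-driven inference costs.
    # All subscriber counts and per-user costs are invented placeholders.

    light_users = 9_000_000   # $20/mo subscribers who barely use it
    light_cost = 4.00         # assumed avg inference cost per light user/month

    power_users = 100_000     # $200/mo subscribers who squeeze every last bit
    power_cost = 350.00       # assumed avg inference cost per power user/month

    light_margin = light_users * (20 - light_cost)   # positive
    power_margin = power_users * (200 - power_cost)  # negative

    print(f"light users: ${light_margin:+,.0f}/mo")  # +$144,000,000
    print(f"power users: ${power_margin:+,.0f}/mo")  # -$15,000,000
    print(f"overall:     ${light_margin + power_margin:+,.0f}/mo")

The overall margin comes out positive even though the power-user cohort alone is underwater, which is exactly the scenario described above.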
They might even have decided “hey, these power users are willing to try things and tell us what LLMs are useful for, and are even willing to pay us for the opportunity!”
I'm not entirely sure the analogy is fair - Amazon for example was 'ridiculed' for being hugely unprofitable for the first decade, but had underlying profitability if you removed capex.
As a counterpoint, if OpenAI were actually profitable at this early stage that could be a bad financial decision - it might mean that they aren't investing enough in what is an incredibly fierce and capital-intensive market.
There's also the admission that this business would be impossible if they had to respect copyright law, so the laws shall be adjusted so that it can be a business.
There's the usual issue of a CEO "talking their book" but there's also the fact that Sam has a rich, documented history of lying. That was the central issue of his firing. "Empire of AI" has a detailed account of this. He would outright tell board member A that "board member B said X", based on his knowledge of the social dynamics of the board he assumed that A and B would never talk. But they eventually figured it out, it unraveled, and they confronted him in a group. Specifically, when they confronted him about telling Ilya Sutskever that Tasha McCauley said Helen Toner should step off the board, McCauley said "I never said that" and Altman was at a loss for words for a minute before finally mumbling "Well, I thought you could have said that. I don't know."
That is my interpretation, that it's a marketing attempt. A form of "The value of our product is so good that it's losing us money. It's practically the Costco hotdog combo!".
Doesn't he have an incentive to make it look like that, though? The way he phrased it, that they are losing money because people use it so much, makes it seem like Pro subscribers are some super power-users. As long as inference has a positive marginal cost, heavy enough usage at a flat rate will lose money, so Sam isn't admitting that the business model is flawed or anything.
> The most likely situation is a power law curve where the vast majority of users don't use it much at all and the top 10% of users account for 90% of the usage.
That'll be the Pro users. My wife uses her regular sub very lightly, most people will be like her...
His strategy is to sell OpenAI stock like it was Bitcoin in 2020, and if for some reason the market decides that maybe a company that loses large amounts of cash isn't actually a good investment... he'll be fine, he's had plenty of time to turn some of his stock into money :)
I believe, but have no proof, that the answer is "because it's easier to sell stock in an unprofitable business than build a profitable one", although given the other comment, there's a good chance I'm wrong about this :)
I'm more "capitalism good" (8 billion people on earth, 7 billion can read, 5 billion have internet, and almost no one dies in childbirth anymore in rich countries, which is several billion people), but that is really interesting that he has no stock and just gets salary.
I guess if other people buying stock in your company is what enables you to have a super high salary (+ benefits like company plane, etc.), you are still kinda selling stock though. And honestly, having considered the "start a random software company aligned with the present trend (so ~2015 DevOps/Cloud, 2020 cryptocurrency/blockchain, 2024 AI/ML), pay myself a million-dollar-a-year salary, and close shop after 5 years because 'no market lol'" route to riches myself, I still wouldn't consider Altman to be completely free of perverse incentives here :)
Still, very glad you pointed that out, thanks for sharing that information ^^
I don't think the author meant they don't include /v1 in the endpoint from the beginning. The point is that you should do everything to avoid having a /v2, because you would have to maintain two versions for every bug fix, which means making the same code change in two places or having extra conditional logic multiplied against any existing or new conditional logic. Code bases that support multiple versions look like spaghetti code, and it usually means that /v1 was not designed with future compatibility in mind.
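Here's a minimal sketch of how that plays out; the handler and data are hypothetical, just to show the branching multiplying:

    # Hypothetical sketch: why a /v2 multiplies maintenance cost.
    USERS = {1: {"id": 1, "first": "Ada", "last": "Lovelace", "deactivated": False}}

    def get_user(version: str, user_id: int) -> dict:
        user = USERS[user_id]
        if version == "v1":
            # /v1 shipped a single "name" field; it must be preserved forever.
            payload = {"id": user["id"], "name": f"{user['first']} {user['last']}"}
        else:
            # /v2 split the name, so the same data needs a second shape.
            payload = {"id": user["id"], "first": user["first"], "last": user["last"]}

        # Every later bug fix (here: hiding deactivated accounts) needs its
        # own version branch, multiplying against the branches already there.
        if user["deactivated"]:
            if version == "v1":
                payload["name"] = "[deactivated]"
            else:
                payload["first"] = payload["last"] = "[deactivated]"
        return payload

    print(get_user("v1", 1))  # {'id': 1, 'name': 'Ada Lovelace'}
    print(get_user("v2", 1))  # {'id': 1, 'first': 'Ada', 'last': 'Lovelace'}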
For the vast majority of situations in an English-speaking society, the term for that type of person is a "white supremacist". For example, a person who does Nazi salutes, or wants to update refugee policy so that the majority of immigrants are White South Africans, is a white supremacist.
"You’ve heard of animals chewing off a leg to escape a trap? There’s an animal kind of trick. A human would remain in the trap, endure
the pain, feigning death that he might kill the trapper and remove a threat to his kind."
"Permitting viewing or editing someone else's account, by providing its unique identifier (insecure direct object references)"
What vibe coding promoters don't understand is that the average web developer hasn't learned web security 101. Proof: an HN commenter points out that "A01:2021 – Broken Access Control" is completely normal in production code.
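For anyone unfamiliar, a minimal sketch of that IDOR pattern (the handler and data here are hypothetical, not from OWASP's text): the insecure version trusts whatever identifier the client sends; the fix is to authorize the object against the authenticated user.

    # Hypothetical sketch of an insecure direct object reference (IDOR).
    ACCOUNTS = {42: {"owner": "alice", "balance": 100}}

    def get_account_insecure(account_id: int) -> dict:
        # Broken access control: anyone who can guess or enumerate an id
        # can view (or edit) someone else's account.
        return ACCOUNTS[account_id]

    def get_account(account_id: int, session_user: str) -> dict:
        account = ACCOUNTS[account_id]
        # The fix: authorize the object, not just the request.
        if account["owner"] != session_user:
            raise PermissionError("not your account")
        return account

    print(get_account(42, "alice"))  # ok
    # get_account(42, "mallory")     # raises PermissionError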
This suggests the marketing professors have an incorrectly optimistic view of LLMs, and that's why they were surprised that those who understood AI mechanics embraced it less.
Inference contributes to their losses. In January 2025, Altman admitted they are losing money on Pro subscriptions, because people are using it more than they expected (sending more inference requests per month than would be offset by the monthly revenue).
At the end of the day, until at least one of the big providers gives us balance sheet numbers, we don't know where they stand. My current bet is that they're losing money however you slice it.
The hope being, as usual, that costs go down and the market share gained makes up for it. At which point I wouldn't be shocked by Pro licenses running into several hundred bucks per month.
Currently, they lose more money on inference than they make from Pro subscriptions, because they are essentially renting out their service by the month instead of charging for usage (per token).
When an end user asks ChatGPT a question, the chatbot application sends the system prompt, user prompt, and context as input tokens to an inference API, and the LLM generates output tokens for the inference API response.
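As a sketch of that round trip using the OpenAI Python SDK (the model name and prompts here are placeholders):

    # Sketch of the chatbot -> inference API round trip described above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},  # system prompt
            {"role": "user", "content": "Summarize this thread."},  # user prompt + context
        ],
    )

    print(response.choices[0].message.content)
    # Token usage is what drives cost: prompt tokens in, completion tokens out.
    print(response.usage.prompt_tokens, response.usage.completion_tokens)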
GPT API inference (for developers) is priced per token, with separate rates per 1M input tokens, cached input tokens, and output tokens.
Again, this means that the average ChatGPT Pro end user's chattiness costs OpenAI more in inference per month (too many input and output tokens sent and received, respectively) than is offset by the $200/month in revenue OpenAI receives from the average Pro user.
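A back-of-the-envelope version of that balance; the per-token prices and usage figures below are made-up placeholders, not OpenAI's actual rates:

    # Hypothetical monthly margin on one flat-rate Pro subscriber.
    # All prices and token counts are assumptions for illustration.
    PRICE_PER_1M_INPUT = 2.50    # $ per 1M input tokens (placeholder)
    PRICE_PER_1M_OUTPUT = 10.00  # $ per 1M output tokens (placeholder)
    FLAT_RATE = 200.00           # $/month for ChatGPT Pro

    def monthly_margin(input_tokens: int, output_tokens: int) -> float:
        cost = (input_tokens / 1e6) * PRICE_PER_1M_INPUT \
             + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT
        return FLAT_RATE - cost

    print(monthly_margin(5_000_000, 1_000_000))    # light user:  +177.5
    print(monthly_margin(60_000_000, 15_000_000))  # power user: -100.0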
The analogy would be Netflix losing money on subscriptions because users stream too much: they ban account sharing, many users cancel their subscriptions, and yet this helps them become profitable, because those extra users were generating more costs than revenue.
> Protip: look for an Asian market in your area for food. ... Discovering the Asian market has been one of the best financial things to happen to me.
Whenever I see this protip, I feel bad for struggling Asians getting validated that they and their extended families have already fully optimized all their opportunities.
"The Cloud Act is a law that gives the US government authority to obtain digital data held by US-based tech corporations irrespective of whether that data is stored on servers at home or on foreign soil. It is said to compel these companies, via warrant or subpoena, to accept the request."
> $20/month ChatGPT Pro user: Heavy daily usage but token-limited
ChatGPT Pro is $200/month, and Sam Altman already admitted in January 2025 that OpenAI is losing money on Pro subscriptions:
"insane thing: we are currently losing money on openai pro subscriptions!
people use it much more than we expected."
- Sam Altman, January 6, 2025
https://xcancel.com/sama/status/1876104315296968813