
> “The intelligence of an AI model roughly equals the log of the resources used to train and run it,” [Sam Altman]

Taking that at face value, it means we would have to invest exponential resources just to get linear improvements. That’s not exactly an optimistic outlook.
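Taken literally, the claim is intelligence = log(resources), so each constant gain in capability costs a constant *multiple* of resources. A toy illustration (base-10 log chosen arbitrarily; the base only rescales the units):

```python
import math

def intelligence(resources: float) -> float:
    # Altman's claim taken at face value: I = log10(R)
    return math.log10(resources)

# Each +1 "intelligence unit" costs 10x the resources.
for r in [1e3, 1e4, 1e5, 1e6]:
    print(f"resources={r:>12,.0f}  intelligence={intelligence(r):.1f}")
```

Going from intelligence 3 to 6 in this toy model requires a thousandfold increase in spend, which is the "exponential resources for linear improvement" complaint in one line.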



Even today's frontier models, without any further improvement, have incredible commercial potential. And even when improvements show diminishing returns in terms of results, the market shifts towards improved models in a supralinear fashion. So a linear improvement resulting from exponential investment might still net you exponential commercial return.

Also, the LLM space is a Red Queen environment. Stop investing and you're done.

All that said, IMHO short- to medium-term breakthroughs will come from hybrid AI systems, with the LLM being the universal putty between all users and systems.


> have incredible commercial potential

That assertion is unsupported and unproven.

Also, if a commercial use for LLMs is ever found, it will be in the local, personal computing market, not the "cloud".


It's actually been completely disproven: Open"AI" is burning over ten billion a quarter in net losses. It has no commercial value.


To play devil's advocate, OpenAI actually has hundreds of millions in income, even though they are spending far more on training new models.


> has hundreds of millions in income

By selling a dollar for ninety cents? This metric is meaningless.


The issue is that, as it stands, LLMs are going to become a commodity. Could that happen with OAI still being a valuable firm? Maybe. Do I personally think OpenAI has the product prowess to pull it off? Nah, I think they are overly concentrated with technologists (the same problem Google has). Apple is still king when it comes to figuring out the ideal form of the product the buyer desires.

Apple can take a future OSS model and produce the winning product. That truth would be very bitter for many to swallow. Cook maintaining good relations with China could be the thing that makes Apple topple everyone in the long run.


Aren’t LLMs being used by business successfully in many “unsexy” domains like translation, sentiment analysis, and image recognition?

Though I do agree that many of the breathless claims that you can stop hiring or even lay off developers because of LLMs seem unsubstantiated.


I wouldn't characterize that as “incredible commercial potential”. And especially in the case of translation, you save some money, but you also get poorer quality.


Speaking from experience: I was recently in a tiny fountain pen shop in Sendai[0] where the owner doesn't speak English and I don't speak Japanese, but we were able to talk for an hour or more about fountain pens and Tomoe River paper alternatives using Google Translate's dialogue feature.

Maybe not massive commercial potential, but it was pretty amazing and reminded me a bit of the Babel fish, which used to seem like impossible sci-fi.

[0]: https://share.google/64xTBRThXcFR72r3G


OpenAI literally lost $12b last quarter, where is this "incredible commercial potential" you are talking about sir?? Is it monetizing the seven second slop memes? Where is the commercial potential you scam artist, stonk pumper?


In its first decade (1998-2008), Google's total revenue was approximately $27.9 billion.


> In its first decade (1998-2008), Google's total revenue was approximately $27.9 billion.

And now OpenAI has Google as their competitor. Besides, Google established a search monopoly through side deals, Android, and browser push, but they still lost the Asian market. Now OpenAI has to overcome not only Google, Amazon, and Microsoft but also Baidu, DeepSeek, and the other Asian and European competitors, because nobody wants to lose the "AI race" - it's too risky. Without a monopoly, there's no high profit.


So is Open AI going to replace Google? What is even your point?


Haha:)

Even if you want to give OpenAI the benefit of the doubt by comparing it to other software giants, they're doing terribly. Google, Facebook, Apple, Amazon, etc. were profitable almost immediately after their founding. In the cases where they accumulated losses, it was a deliberate effort to capture as much of the market as possible. They could simply hit the brakes and become profitable at will.

In OpenAI's case, every week yet another little-known lab in China releases a 99% competitive LLM at a fraction of their costs.

It's not looking good at all now or in the long-term.


I think he means resources as in compute cycles and the like. Those tend to increase exponentially for the same number of dollars, in a Moore's-law-type way, so intelligence should increase in an approximately linear way, something like a few IQ points per year.

You can see a similar effect in computer chess Elo ratings over time, with the odd blip up; see https://www.reddit.com/r/dataisbeautiful/comments/1iovlb0/oc... (1985-2023) and https://wiki.aiimpacts.org/speed_of_ai_transition/range_of_h... (1960-2000)
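The arithmetic behind that: if compute per dollar doubles on a fixed period, then log(resources) grows linearly with time. A sketch (doubling period and starting budget are made-up illustrative constants):

```python
import math

DOUBLING_YEARS = 2.0   # assumed Moore's-law-style doubling period
BASE_RESOURCES = 1.0   # arbitrary starting compute budget

def resources_at(year: float) -> float:
    # compute per dollar doubles every DOUBLING_YEARS
    return BASE_RESOURCES * 2 ** (year / DOUBLING_YEARS)

def intelligence_at(year: float) -> float:
    # log of exponentially growing resources => linear in time
    return math.log2(resources_at(year))

print([round(intelligence_at(y), 2) for y in range(0, 10, 2)])
# -> [0.0, 1.0, 2.0, 3.0, 4.0]: a constant gain per year, like Elo curves
```

The exponential in the budget and the log in the scaling claim cancel, leaving the steady "few points per year" trend the parent describes.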


And in his interviews he talks about the near-vertical progress they're making. Of course not everyone agrees, but if both claims are true, keeping pace would require doubly exponential resources.


"A deep incisive point, it seems like you want to turn the entire mass of the solar system into computronium to run ChatGPT27".


Grey goo scenario, but the goo are NVIDIA cards used to train LLMs.


We probably already live in a simulation where an LLM is trying to compute how many “r”s are in “strawberry”.


If so, expect seahorses to never have existed by next week


> Taking that at face value, it means we would have to invest exponential resources just to get linear improvements.

Not necessarily. Approaches such as mixture of experts help lower training costs by covering domains with specialized models.


haha good one, so why haven't they done this yet? What are they waiting for? Let's see these super advanced "experts" with "specialized models"!!


> haha good one, so why haven't they done this yet? What are they waiting for? Let's see these super advanced "experts" with "specialized models"!!

I understand it's very easy to post ignorant messages in internet forums, but the answer to your question is yes, "they have done it" and it does result in cheaper training costs. See models such as DeepSeek-MoE or Mixtral.

https://github.com/deepseek-ai/DeepSeek-MoE

https://mistral.ai/news/mixtral-of-experts


I encourage you to rethink your identity. You are way out of your depth on this, and posting nonsensical things as fact.


Make OpenAI a non-corporate entity that belongs to the UN. It can't be profit-driven. Fuck the economic pundits. This tech is beyond humans; it needs to be done right. "All tech has been enshittified to satisfy investor greed; sufficiently advanced tech can be called magic, and these fools will enshittify magic also."


Create a CERN.



