
>GPT-5 will be even less of an improvement on GPT-4.5 than GPT-4.5 was on GPT-4. The pattern will continue for GPT-5.5 and GPT-6, the ~1000x and 10000x models they may train by 2029 (if they still have the money by then). Subtle quality-of-life improvements and meaningless benchmark jumps, but nothing paradigm-shifting.

It's easy to spot people who secretly hate LLMs and feel threatened by them these days. GPT-5 will be a unified model, very different from 4o or 4.5. Throwing around numbers related to scaling laws shows a lack of proper research. Look at what DeepSeek accomplished with far fewer resources; their paper is impressive.

I agree that we need more breakthroughs to achieve AGI. However, these models increase productivity, allowing people to focus more on research. The number of highly intelligent people currently working on AI is astounding, considering the number of papers and new developments. In conclusion, we will reach AGI. It's a race with high stakes, and history shows that these types of races don't stop until there is a winner.



It's also easy to spot irrational zealots. Your statement is no more plausible than OP's. No one knows whether we'll achieve AGI, especially since the definition is very blurry.


What the author is referring to there as GPT-5, GPT-5.5, and GPT-6 are, respectively, "the models whose pre-training size is 10x, 100x, and 1,000x greater than GPT-4.5's." He's aware that what OpenAI will actually brand as GPT-5 is the router model that simply chooses which other models to use, but he regards that as a sign that OpenAI agrees that "the model that is 10x the pre-training size of GPT-4.5" won't be that impressive.

It's slightly confusing terminology, but in fairness there is no agreed-upon name for the next three orders-of-magnitude size-ups of pretraining. In any case, the author is not confused about what OpenAI intends to brand GPT-5.


> In conclusion, we will reach AGI

I'm a little confused by this confidence. Is there more evidence aside from the number of smart people working on it? We have a lot of smart people working on a lot of big problems; that doesn't guarantee a solution or a timeline.


Some hard problems have remained unsolved in basically every field of human interest for decades/centuries/millennia -- despite the number of intelligent people and/or resources that have been thrown at them.

I really don't understand the level of optimism that seems to exist for LLMs. And speculating that people "secretly hate LLMs" and "feel threatened by them" isn't an answer (frankly, when I see arguments that start with attacks like that, alarm bells start going off in my head).


I logged in to specifically downvote this comment, because it attacks the OP's position with unjustified and unsubstantiated confidence in the reverse.

> It's easy to spot people who secretly hate LLMs and feel threatened by them these days.

I don't think OP is threatened by or hates LLMs; if anything, OP's position is that LLMs are so far from intelligence that it's laughable to consider them threatening.

> In conclusion, we will reach AGI

The same way we "cured" cancer and Alzheimer's, two arguably far more important problems than a glorified text predictor/energy guzzler. But I like the confidence; it's almost as strong as OP's confidence that nothing substantial will happen.

> It's a race with high stakes, and history shows that these types of races don't stop until there is a winner.

The race to phase out fossil fuels and stop global warming is also a high-stakes existential race for humanity, and so far I don't see anyone "winning".

> However, these models increase productivity, allowing people to focus more on research

The same way the invention of the computer, the car, the vacuum cleaner and all the productivity increasing inventions in the last centuries allowed us to idle around, not have a job, and focus on creative things.

> It's easy to spot people who secretly hate LLMs and feel threatened by them these days

It's easy to spot e/acc bros feeling threatened that all the money they sunk into crypto, AI, the metaverse, web3 are gonna go to waste and try to fan the hype around it so they can cash in big. How does that sound?


I appreciate the pushback and acknowledge that my earlier comment might have conveyed too much certainty—skepticism here is justified and healthy.

However, I'd like to clarify why optimism regarding AGI isn't merely wishful thinking. Historical parallels such as heavier-than-air flight, Go, and protein folding illustrate how sustained incremental progress combined with competition can result in surprising breakthroughs, even where previous efforts had stalled or skepticism seemed warranted. AI isn't just a theoretical endeavor; we've seen consistent and measurable improvements year after year, as evidenced by Stanford's AI Index reports and emergent capabilities observed at larger scales.

It's true that smart people alone don't guarantee success. But the continuous feedback loop in AI research—where incremental progress feeds directly into further research—makes it fundamentally different from fields characterized by static or singular breakthroughs. While AGI remains ambitious and timelines uncertain, the unprecedented investment, diversity of research approaches, and absence of known theoretical barriers suggest the odds of achieving significant progress (even short of full AGI) remain strong.

To clarify, my confidence isn't about exact timelines or certainty of immediate success. Instead, it's based on historical lessons, current research dynamics, and the demonstrated trajectory of AI advancements. Skepticism is valuable and necessary, but history teaches us to stay open to possibilities that seem improbable until they become reality.

P.S. I apologize if my comment triggered you enough to compel you to log in and downvote. I am always open to debate, and I admit again that I started too strongly.


I am with you that when smart people combine their efforts and build on previous research and learnings, nothing is impossible.


I started the conversation off on the wrong foot. Commenting with “ad hominem” shuts down open discussion.

I hope we can have a nice talk in future conversations.



