
> it likely means we might be headed for an AI improvement pause for a couple of years after GPT-5.

I suspect that a pause in base LLM performance won’t be an AI improvement pause; there’s a whole lot of space to improve the parts of AI systems around the core “brain in a jar” model.
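E.g. (a rough sketch of that kind of scaffolding, not a claim about any particular product: call_model here is a stub standing in for whatever fixed, hosted LLM you'd actually use), retrieval and tool use can be layered around an unchanged core model:

    # Scaffolding around a fixed "brain in a jar" model.
    def call_model(prompt: str) -> str:
        # Stub standing in for a hosted-LLM call.
        if "Tool result:" in prompt:
            return "The sum is 5."
        return "TOOL:add 2 3"  # pretend the model asked for a tool

    TOOLS = {"add": lambda a, b: str(int(a) + int(b))}

    def run(question: str, docs: list[str]) -> str:
        # 1. Retrieval: naively prepend matching context; the model is unchanged.
        context = "\n".join(d for d in docs if any(w in d for w in question.split()))
        reply = call_model(f"{context}\n\nQ: {question}")
        # 2. Tool use: execute requests the model emits, feed results back in.
        if reply.startswith("TOOL:"):
            name, *args = reply[5:].split()
            reply = call_model(f"Q: {question}\nTool result: {TOOLS[name](*args)}")
        return reply

    print(run("what is 2 plus 3", ["plus means addition"]))  # -> The sum is 5.

Everything in that loop (what gets retrieved, which tools exist, how results are fed back) can improve without the core model changing at all.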




I agree, there will be other things to improve in AI systems, but IMHO (tea-leaf reading, really) those would only lead to incremental improvements in overall systems. There is also a lot of 'interfacing' work that needs to happen, and I suspect that would end up filling the pause; loosely speaking, that is LLM productization.

As far as AGI is concerned, I don't believe LLMs are really the right architecture for it; AGI likely needs some symbolic logic and a notion of physicality (i.e., physical laws and energy/power).
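To illustrate the hybrid direction (just a sketch of the general neuro-symbolic pattern, not a claim about how AGI would work: propose_solution is a stub standing in for an LLM call), the statistical model proposes and a symbolic engine verifies:

    # LLM proposes, symbolic logic verifies.
    from sympy import symbols, simplify, sympify

    x = symbols("x")

    def propose_solution(equation: str) -> str:
        # Stub standing in for an LLM call that guesses a root.
        return "3"

    def verified_solve(equation: str) -> str | None:
        lhs, rhs = equation.split("=")
        candidate = sympify(propose_solution(equation))
        # Exact symbolic check: substitute the guess, simplify the residual.
        residual = simplify(sympify(lhs).subs(x, candidate) - sympify(rhs))
        return str(candidate) if residual == 0 else None

    print(verified_solve("2*x - 6 = 0"))  # -> 3 (symbolically confirmed)

The point of the pattern is that the symbolic layer rejects confident-but-wrong guesses the statistical model makes.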


> but IMHO (tea-leaf reading, really) those would only lead to incremental improvements in overall systems.

It will reach a point where that is the case, sure; but it is not there now. Even if we are within one model generation of exhausting (for now) the major core-model improvements, I don't think we'll yet have reached the point where rest-of-system work yields only gradual, incremental gains.



