
I'd bet that what he and the competition are realizing is that the bigger models are too expensive to run.

Pretty sure Microsoft swapped out Bing's model for something a lot smaller in the last couple of weeks; Google hasn't even tried to make a large model publicly available. And OpenAI still has usage caps on GPT-4.

I'd bet that they can still see performance improvements with GPT-5, but that when they look at the usage ratio of GPT-3.5 Turbo, GPT-3.5 Legacy, and GPT-4, they see diminishing returns for increasingly smart models - most people don't need a brilliantly intelligent assistant, they just need a not-dumb assistant.

Obviously some practitioners of niche disciplines (like ours here) would like a hyperintelligent AI to do all our work for us. But even a lot of us are on the free GPT-3.5 tier of ChatGPT; I'm one of the few paying $20/mo for GPT-4, and idk if even I'd pay e.g. $200/mo for GPT-5.




> I'd bet that what he and the competition are realizing is that the bigger models are too expensive to run.

I think it's likely that they're out of training data to collect. So adding more parameters is no longer effective.

> most people don't need a brilliantly intelligent assistant, they just need a not-dumb assistant.

I tend to agree, and I think their pathway toward this will come from continuing advances in fine-tuning. Instruction tuning, RLHF, etc. seem to be paying off much more than scaling, and I bet that's where their investment is going to turn.


Once they can train on video, they will have a lot of new training data.





