First of all, if Altman continually makes misleading statements about AI, he will quickly lose credibility, and the short-term gain from whatever 'financial incentive' birthed the lie would be eroded in short order by a loss of trust in the head of one of the most visible AI companies in the world.
Secondly, all the competitors of OpenAI can plainly assess the truth or validity of Altman's statements. There are many companies working in tandem on things at the OpenAI scale of models, and they can independently assess the usefulness of continually growing models. They aren't going to take this statement at face value and change their strategy based on a single statement by OpenAI's CEO.
Thirdly, I think people aren't really reading what Altman actually said very closely. He doesn't say that larger models aren't useful at all, but that the next sea change in AI won't be models which are orders of magnitude bigger, but rather a different approach to existing problem sets. Which is an entirely reasonable prediction to make, even if it doesn't turn out to be true.
All in all, "his word is basically worthless" seems much too harsh an assessment here.
I've seen Altman say in an interview that training GPT-4 took "hundreds of little things".
I don't find this implausible, but it wilts slightly under Occam's razor when you consider that this is exactly the type of statement that would be employed to obfuscate a major breakthrough.
It just makes me crook my eyebrow and look to more credible sources.
It is possible that GP meant that Altman’s word is basically worthless to them, in which case that’s not something that can be argued about. It’s a factually true statement that that is their opinion of that man.
I personally can see why someone could arrive at that position. As you’ve pointed out, taking Sam Altman at face value can involve suppositions about how much he values his credibility, how much stock OpenAI competitors put in his public statements, and the mindsets people in general have when reading what he writes.