> OpenAI's board should book themselves for an IQ test, it is just beyond my understanding why they hate their own successful business so much.
OpenAI's board doesn't run a business; they run a nonprofit with a mission oriented around safety and public benefit, not private profit.
It has a for-profit subsidiary, explicitly subordinated to the nonprofit's mission, as a funding mechanism. But bringing in money is only a net benefit if the way the funding is generated doesn't harm the mission more than the funding advances it.
They won’t be able to accomplish their mission without external funding, though (long term, anyway). Seemingly, what they did significantly compromised their ability to obtain it in the future.
So I guess they’d rather not accomplish their mission at all than accomplish it only partially (while having to compromise on or change parts of it)? Which is not unreasonable if you assume nobody else will ever be able to catch up with them…
>They won’t be able to accomplish their mission without external funding
The $10 billion in compute will get them reasonably far (and I don't see how Microsoft legally gets out of it, unless the contract said "you can't fire Sam").
That said, I also wouldn't be surprised if Elon[1] jumps back into the fray somehow, given how close he is to Ilya and that the shift toward commercialization is what put him off OpenAI.
> They won’t be able to accomplish their mission without external funding, though (long term, anyway). Seemingly, what they did significantly compromised their ability to obtain it in the future.
Their goals are mitigating what they see as an existential risk to human civilization, and also delivering social benefit from a properly aligned version of the same technology posing the risk.
From that perspective, actively compromising the first part of the mission in the short term potentially renders both longer-term financial viability and the second part of the mission irrelevant.
Well, chances are that because of this entire debacle they simply won’t have any say in it. So yeah, if that was indeed their goal, their hands will be “clean”.
But that’s it. The world will move on, and at best they’ve delayed those “existential threats” by 6-12 months. If they thought that preferable to any form of compromise, well… that’s fine.
However, I doubt this was even the primary reason for the whole conflict in the first place.
Or they are just incompetent / had ulterior motives. All of this is just speculation.
However, if their goal was indeed what you said, they have failed miserably at it. They simply won’t have any say in what happens next because of this situation.
Indeed. Sutskever should have been able to foresee the move. It's not hard to see that Altman would simply find another place to continue what he was doing at OpenAI. Maybe the hope was to slow him down by a year or so? But with Altman joining MS, it makes effectively no difference now.
I was thinking exactly this. If the original idea was to slow things down, then this kind of turmoil is exactly what was required. OpenAI will slow down under the new CEO and because talent is leaving. Altman will slow down because he will need to build things from the ground up.
Isn't that exactly what the research director wanted? As you said, he hates the business side and just wants to do research. So he got what he asked for.