No. Prison exists for incapacitation. It is to protect society from miscreants. If punishment and/or rehabilitation occurs, that is nice, but it is not its purpose.
To clarify, and to make it easier for anyone who wants to do further reading, I'll note that being in prison is punishment, and the purposes of punishment are variously deemed to be retribution, deterrence, rehabilitation, and incapacitation.
(And prison as incapacitation alone doesn't really make sense, given that prison terms differ based on the crime rather than on how dangerous the person remains.)
It's a bit easier to understand in the context of gravity, because gravity bends light downward. If you shine a flashlight on Earth, that light is in free fall, curving down. Everything else follows the same curved path, because the speed of light is fixed.
There shouldn't be much controversy over fixing pandemic-era overhiring. But that's just the public-facing rationale. Betting more heavily on AI tech might be the real reason.
You should fix overhiring by firing the people who did the overhiring. That never happens, though, because there's no accountability in leadership at any major tech company.
I wonder if this move is sound. Granted, consumer price inflation hasn't died down, so consumers are getting choosier about how they spend, which affects Google's advertising revenue. So Google is betting more heavily on AI as a potential revenue vehicle. But will the current AI tech really prove so disruptive at its current high pricing?
I wasn't there; I was in college at the time, but I remember the attitude from military magazines, radio, etc. Iraqi soldiers in Kuwait were seen as universally guilty of "raping" Kuwait, and any who died deserved their punishment.
It would be interesting if Altman and Brockman, assuming that they want accelerated development, ended up in some high level roles at Microsoft. That seems like a win-win-win all the way around since OpenAI could follow their new path, Microsoft and Google could build new things fast, and the open model proponents can keep up their good work.
Except for a clumsy, rushed press release, this doesn't really have to end badly for anyone.
Even though I have been an OpenAI fan ever since I used their earliest public APIs, I am also very happy that there is such a rich ecosystem: other commercial players like Anthropic, open model support from Meta and Hugging Face, and the increasingly capable small models like Mistral that can easily be run at home.
Sam and those who want to continue stealing IP and destroying entire industries in the process will leave, while ethical machine learning scientists will remain.