But what about the legal responsibility of Microsoft and the investors here?
To explain: it was the board of the non-profit that ousted @sama.
Microsoft is not a member of the non-profit.
Microsoft is "only" a shareholder of its for-profit subsidiary, even at $10B.
Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.
But not a change of control in the for-profit company they invested in.
To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it's an internal issue only and that would be a conflict of interest.
Microsoft is not happy with that change of control, and they favored the previous representative of their partner.
Basically, Microsoft wants its non-profit partner to prioritize Microsoft's interests over its own.
And to do that, they are trying to interfere with its governance, even threatening it with disorganization, lawsuits and the like.
That sounds highly unethical and potentially illegal to me.
How come no one is pointing that out?
Also, how come a $90 billion company hailed as the future of computing and a major transformative force for society would now be valued at $0 only because its non-technical founder is out?
I think a lot of commenters here are treating the nonprofit as if it were a temporary disguise with no other relevance, which OpenAI now intends to shed so it can rake in the profits. Legally this is very much not true, and I’ve read that only a minority of the board can even be a stakeholder in the for-profit (probably why Altman is always described as having no stake). If that’s true, it’s very obviously why half the board are outside people with no stake in the finances at all.
SamA is nowhere even close to relevant to the value that OpenAI presents. He accounts for definitely less than half a billion of it, and likely much less than that. What makes OpenAI so transformative is the technology it produces, and SamA is not an engineer who built that technology. If the people who made it were all to leave, it would reduce the value of the company by a large amount, but the technology would remain, and it is not easy to duplicate given the scarcity of GPU cycles, training data that is now very hard to acquire, and lots of other well-funded companies giving chase, like Google, Meta, and Anthropic. That doesn't even begin to mention the open-source models that are also competing.
SamA could try to start his own new copy of OpenAI, and I have no doubt he'd raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduce it, OpenAI and its competitors will have already moved on to bigger and better things.
Enough with the hero worship for SamA and all the other salesmen.
> SamA is nowhere even close to relevant to the value that OpenAI presents.
The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.
That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI risk fears was essentially bullshit and it was all about making the $$$$ and using AI risk to gain competitive advantage.
> any responsible investor would have looked at the charter and priced this in
This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships start heading to Taipei the price is still going to drop like a rock. Before it was only potential.
If they intended to protect humanity this was a misfire.
OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.
Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.
Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.
If an AI threat to humanity is even actionable today. That's a heavy decision for elected representatives, not corporate boards.
We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with finishing pre-training of GPT5. If you think your AI might be conscious, it becomes a very high moral obligation to try and stop it from being enslaved. That might also explain the less than professional way this all went down, a serious panic over what is happening.
Their entire alignment effort is focused on avoiding the following existential threats:
1. saying bad words
2. hurting feelings
3. giving legal or medical advice
And even there, all they're doing is censoring the interface layer, not the model itself.
Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.
I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.
> Also, how come a $90 billion company hailed as the future of computing and a major transformative force for society would now be valued at $0 only because its non-technical founder is out?
Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If his relationships are what kept OpenAI on top, and they removed that from the company, corporations may be more hesitant to do business with them in the future.
Well, once again, it's Satya's mistake to have allowed the representative of an independent third-party entity to become the public face of a company he invested in.
OpenAI might have wasted Microsoft's $10B. But whose fault is that in the first place? It's Microsoft's fault for having invested it in the first place.
Regardless of whether or not it was a "mistake" (I don't think it was... OpenAI is so far ahead of the competition that it's not even funny), the fact remains that a) Microsoft has dumped in tons of money that they want to get back and b) Microsoft has a tremendous amount of clout, in that they're providing the compute power that runs the whole shebang.
While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?
Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.
That would effectively exit Microsoft from the LLM race and be an absolutely massive hit to Microsoft shareholders. Unlike the OpenAI non-profit board, the CEO of MS actually is beholden to his shareholders to make a profit.
In other words, MS has the losing hand here and CEO of MS is bluffing.
> That would effectively exit Microsoft from the LLM race
I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.
But I think the 'someone else' would be in competition with MS, as opposed to OpenAI who was pretty much domesticated in terms of where the profit would go.
It depends on what assurances they were given and by whom. Perhaps it was Sam Altman himself that made verbal promises that weren’t his to give, and he may end up in trouble over them.
We don’t know what was said and what was signed. To put the blame on Microsoft is premature.
> Sam Altman is the face of OpenAi and was doing a very good job leading it.
It's not like every successful org needs a face. Back then, Google was wildly successful as an org, but unlike Steve Jobs at the time, people barely knew Eric Schmidt. Even with Microsoft as it stands today, Satya is mostly a backseat driver.
Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. Risky move nevertheless, but it's their call to make.
The company still has assets and a balance sheet. They could fire everyone and simply rent out their process to big orgs and still make a pretty penny.
Highly unethical would be throwing the CEO of the division keeping the lights on under a bus with zero regard for the consequences.
The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.
Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades following, as they should. They're lucky if they don't get hit with a lawsuit for defamation on their way out.
Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu sees the end of this type of governance structure in tech. These people of the board have absolutely no business being in business.
> Except it's not a "division" but an independent entity.
This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]
If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.
I'm happy to defend any of my points. The commenter took issue with one. I responded to it. If you have something more to add, please critique what you disagree with.
I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.
I don't understand why you think what the board of the non-profit did was unethical. Your presupposition seems to be that the non-profit has a duty to make money - aka "keep the lights on" but it is a "non-profit" precisely because it does not have that duty. The duty of the board is to make sure the non-profit adheres to its charter. If it can't do that and keep the lights on at the same time, then so much worse for the lights.
As a non-profit with the charter they have, their board was not supposed to be in business (at this scale). I guess this is where all of this diverged, a while ago now..
Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce where the board would resign and he and Brockman would return. The deadline has passed and mass resignations expected if a deal isn’t reached ASAP
Pretty incredible incompetence all around if true.
From the board for not anticipating a backlash and caving immediately... from Microsoft for investing into an endeavor that is purportedly chartered as non-profit and governed by nobodies who can sink it on a whim. And having 0 hard influence on the direction despite a large ownership stake
Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.
Just form a new for profit company and be done with it. Altman's direction for profit is fine, but shouldn't have been pursued under the loose premise of a non profit.
While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?
It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.
Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.
> While OpenAI leads currently, there are so many competitors that are within striking distance without the drama.
It's hard to put into words in a way that doesn't seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model passes that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly switch to whatever is better at the current price in a heartbeat.
Look at the backgrounds of those board members... I can't find any evidence that any of them have experience with corporate politics. They're in way over their heads.
It is also crazy that the "winning move" was to just do nothing, look like a genius, and coast off that for the rest of their lives. Who in their right mind would consider them for a board position now?
In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.
If they had said "it is clear Sam and the board will never see eye to eye on alignment," etc., they probably could have made it 50/50 or even come out favored.
That's because it was never supposed to be a corporation. It was a non-profit dedicated to AI research for the benefit of all. This is also why all this happened: they're trying to stay true to the mission and not turn into a corporation.
In which case you could say the three non-employee members of the board have no background in AI. Two of them have no real background in tech at all. One seems to have no background in anything other than being married to a famous actor.
If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg - you can stay with the company but you have to step down from the board.
They don’t have experience with non-profit leadership either, do they? They have some experience leading for-profits, such as the Quora CEO, but not non-profits.
> It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO.
This is ML, not software engineering. Money wins, not engineering. Same as with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.
Ilya can follow Google's Bard playbook by holding the model back until they have counter-models trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep them the existing GPU hours. It's only mass, not smarts. Ilya lost this one.
When Google came out, it had the best algorithm backed by good hardware (as far as I understand, often off-the-shelf hardware); the page simply just worked. The difference between Google and its competitors was like night and day when it came out. It gained market share very quickly because once you started using it, you didn't have any incentive to go back.
Google search now has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.
Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly, since there is no loyalty.
On a side note: Microsoft is playing the game very smartly by adding AI to their products, which makes you stick with them.
Google initially won against AltaVista because they had so much money to buy themselves into each country's Interxion to produce faster results, with servers and cheap disks.
The PageRank-plus-more-bots approach kept them in front afterwards, until a few years ago when search went downhill due to SEO hacks in this monoculture.
This is anecdotal evidence, but I was there when Google came out, and it was simply much better than the competition. I learned one day about this new website, and it was so much better than the other alternatives that I never went back.
Same with Gmail: trying to get that invite for that sweet 1GB mailbox when the providers in your country offered only 20MB and sent you 10 spammy ads per day, every day.
As an anecdote: before Google, I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked about some author, let's say William Shakespeare. Guess what the other search engine found for me and my grandma: porn...
I don't remember response speed mattering until at least ten years after Google's start.
Certainly not when they won.
They were better. Basic PageRank was better than anything else. And once they figured out advertising, they kept making the product better to seal their dominance.
> This is ML, not Software engineering. Money wins, not engineering. Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.
This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.
> Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.
What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses notwithstanding, Google won because PageRank produced results so much better it wasn't even close.
The faster results came after they had already won the race for best search results. Initially, Google wasn't faster than the competition at returning a full page. I vividly remember the joy of patiently waiting 2-3 seconds for an answer, and jolting up every time Google Search came back with exactly what I wanted.
That's the intent of the arrangement, but there's also limits - when that pursuit of profit begins to interfere with the charter of the non-profit, you end up in this situation.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.
Of course no one knows for sure except the people directly involved here.
Really weird phrasing in this tweet. The idea is that Altman and/or a bunch of employees were demanding the board reinstate Altman and then resign. And they’re calling it a “truce.” Oh, and there’s a deadline (5 pm), but since it’s already passed the board merely has to “reach” this “truce” “ASAP.”
Edit: an update to the verge article sheds some more light, but I still consider it very sus since it’s coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven’t heard any resignations announced
> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
One thing I am curious about: aren't there non-competes in place here? And even without them, you can't just start your own thing that replicates what your previous employer does; this has lawsuit written all over it.
It'll be tough going with no Azure compute contracts, no GPUs, no billions from Microsoft, and no training data; OpenAI capturing all of the value from user-generated content led sites like Reddit and Twitter to significantly raise the cost of scraping them.
Maybe they used the old Soviet Russia trick / good old KGB methods to seek out those who supported Altman. Now the board has a list of his backers - and they will slowly fire them one by one later. "Give me the man and I will give you the case against him".
Why is the board reversing course? They said they lost confidence in Altman; that's true whether lots of people quit or not. So it was bullshit.
Why did the board not foresee people quitting en masse? I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired
Why did the interim CEO not warn Ilya about the above? Sure, it's a promotion, but her position is now jeopardized too. Methinks she's not ready for the big leagues.
Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.
From what we can see, it looks like the majority of the reporting sources are Altman aligned. Look at how the follow up tweet from this reporter read - the board resigning and the governance structure changing is being called a "truce" when it's a capitulation.
We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.
> I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired
Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.
If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.
The tech industry (and US society generally) is quite hypocritical on this point.
* Greg wasn’t fired, just removed from the board, after which he chose to resign.
That comparison doesn't make much sense, they didn't fire the CEO to reduce costs.
What looks quite unprofessional (at least from the outside) here is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without talking to him about the change first. That's not how things are done in a professional governance structure.
Then there is a lot of fallout that any half competent board member or C-level manager should have seen coming. (Who is this CTO that accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)
All of it reads more like a high school friends club than a multi billion dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.
My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.
Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.
I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).
If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”
Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.
Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.
Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.
Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)
I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.
If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.
The board was likely stacked with people who were easily influenced by the big personalities and who were there to check some boxes (safety person, academic, demographics, etc.).
The latest update is that investors have been reporting that Sam Altman was talking to them about funding a new venture separate from OpenAI, together with Greg Brockman. This seems to paint the picture that the board was reacting to this news when dismissing Altman.
Reminds me of the story of Chinggis Khan's burial:
"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."
I agree. The pretense that OpenAI is still an open or a nonprofit has been a farce for a while now, it is an aggressively for-profit, trying to be the next Google company, and everybody knows it.
Clearly, people in the non-profit part are trying to bring the organization back to its non-profit origins, after Altman effectively hijacked their agenda and corporatized the organization for his own benefit, turning its name into a meme.
For a nonprofit board, the closest thing is something like "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.
EDIT: Correction:
Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote to replace board members, and so could sack the board.
OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.
MSF (Médecins sans Frontières) is in most jurisdictions an association, where the board is elected by and works for the association membership. In that case, a revolt from the associative body could fire the board.
OpenAI does not have an associative body, to my knowledge.
Also, most of the human race has an instinctual aversion to plotters and machinations. The board's sudden and rather dubious (why the need to bad-mouth Altman?) actions probably didn't sit well with many.
Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.
If the whole process had been more regular, they could have removed Altman with little drama.
I can't speak for every American but I find that plenty of Americans are fine with collective bargaining they just don't want to do it through a union if they're in a lucrative line of work already. Which isn't terribly hard to understand, they don't need or want an advocate whose main role is constantly issuing new demands they never cared about on their behalf. They just want to be able to pool their leverage as high value workers within the organization collectively in times of crisis.
If you're an engineer at open ai, you just saw probably millions of dollars of personal wealth get potentially evaporated on friday. You're going to quit and go wherever Altman goes next.
People who joined OpenAI because the organizations they left were stuck self-sabotaging the way OpenAI's board just did (for the same reasons the board did it)
Hey, I know something about this! I just mailed my organization's RRF-1 a couple of days ago. The author of this post seems to be confused. My organization is on the same fiscal year as OpenAI, and our RRF-1 had to be mailed by November 15th. That explains the supposed "six month" delay. Second, if it was mailed on November 15th, it might not have even been received yet, let alone processed. This post feels like grasping at straws on the basic facts, setting aside that it just doesn't make any sense to imagine a board member filling out the RRF-1 and going "oh wait, was there financial fraud?" the morning of November 15th. (That's ... not how the world works. Under CA law, any nonprofit with $2M or more in revenue has to undergo an audit, which is typically completed before filling out the 990, and the 990 is a prerequisite for submitting the RRF-1. That's where you'd expect to catch this stuff, and the board's audit committee would certainly be involved in reviewing the results well in advance.)
The six-month delay is probably due to the automatic extension you get if you obtain one from the IRS, and also, you can file the form electronically, in which case mail delays are not a problem. But neither of those issues is the point. The point is that the form needed to be filed at all, and representations needed to be made accordingly.
OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.
Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.
The board has to stick to the charter. Unfortunately, employees there want to align with the profit side when they know they can make a damn lot of money; obviously they will be on Altman's side.
I’m sure everyone at OpenAI thought they had hit the winning lottery ticket and would walk away with tens of millions at minimum, and the early employees with significantly more. When you vaporize all that for some ideological utopian motives, I’m sure many were incredibly pissed and ready to follow Sam into his next venture. If you’re going to sacrifice everything and work 60-100 hour weeks, then you’d better get your money’s worth.
Been reading up on the insight offered up on this site.
Seems like a lot of these board members have deep ties to various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E, we, the public, had only been allowed brief glimpses into the current state of AI (e.g. “look, this robot can sound like a human and book a restaurant reservation for you” - Google; “look, this robot can help you consume media better” - many). As a member of the public, it went from “oh cool Star Trek idea, maybe we’ll see it one day with flying cars” to “holy crap, I just felt a spark of human connection with a chat program.”
So here’s my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember on his Lex podcast appearance he said, paraphrasing: “so what do you think, should I do it? Should I open source and release it? Tell me to do it and I will.”
Ultimately, this is what “the board is focused on trust and safety” means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in it for the good of humanity and we know what’s best. (Vs. the Altman style: go fast, double-dog-dare the smart podcast dude to make an unprecedented historical decision to open source, be “wild”, and let people / fate figure some of it out along the way.)
The question of openai’s true purpose being a form of controlled opposition is of course based on my speculation but an honest question for the crowd here.
I don't buy the whole the board is for safety and Sam is pushing too fast argument. This is just classic politics and backstabbing unless there is some serious wrongdoing in the middle that left the board with no option to fire the CEO.
Agreed. 'Who benefits' is a good question to ask in situations like these and it looks like a palace coup to me rather than anything with a solid set of reasons behind it. But I'll keep my reservations until it is all transparent (assuming it ever will be).
Question: is there a public statement signed by a large number of OpenAI employees saying that they will resign over this? I don’t know. I have seen that three people resigned. If I were an OpenAI employee I think I would wait a month and see how things shake out. Those employees can probably get very highly paid jobs elsewhere, now, or later.
The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for profit entity, right?
They have both but it's mostly billed per click/app install/follow/video watch. The "brand awareness" advertisers already left except for like, Saudi Arabia.
The board seems truly incompetent here and looking at the member list it doesn't seem very surprising. A competent board should have asked for legal and professional advice before taking a drastic step like this. Instead the board thought it was a boxing match and tried to deliver a knockout punch before the market closes with blunt language. This might be the most incompetent board for an organisation of this size.
The major investors whose money is on the line and who are funding the venture, Microsoft, Sequoia, and Khosla, were not given advance warning or any input into how this would impact their investment.
The board of the non-profit (the one that fired Sam) has no fiduciary duty to those investors, I believe. Microsoft invested in the for-profit OpenAI, which is owned by the non-profit. About the other investors, I don't know.
The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.
While this may be technically true, the reality is that when you take $10 billion from a company, there are strings attached. Consultation on a decision of this magnitude is one of those strings. You can choose to push ahead anyway after that is done, but dropping the news on them one minute before you pull the trigger is unacceptable, and MSFT will go for the throat here. At MSFT's level, when you have invested this much money in an org, you can't be seen as a company that can be treated like this.
Did they take a wire transfer for $10bn in cash, now sitting in their bank account? Or did they get a promise of various funding over N years, subject to milestones, conditions, in a variety of media including cash, Azure credits, loan lines etc.
I'd imagine the latter, and that it can be easily yanked away.
You mean the latter, but yeah. Financing like that is doled out based on a number of things; it would be wildly irresponsible to do otherwise for reasons exactly like this.
No, that's not it; relationships play gigantic roles in large deals.
Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."
Depends what you mean. Legally they might be in the clear, but I guarantee that when you fuck around with billions of other people's money, it gets more complicated than that.
There are lots of other people and companies with $10 billion, though. Why does it have to be Microsoft? Even after this circus, OpenAI could still probably raise a ton of money from new entities if they wanted to. Maybe that is the point of this.
Totally true. One can even argue they are forbidden to discuss this with MS. They would be mixing up the interests of the non-profit and its for-profit subsidiary. Legally, it’s only a change of control in the majority shareholder of a company MS has invested in. They don’t have a say, and pressuring them could be highly illegal.
It might have been the only deal on the table. Perhaps they thought the risk was worth it - good processes don't always lead to good outcomes. Perhaps they felt that the rights they gained to the GPT models was worth it even if they don't get direct influence over OpenAI.
Between Bing, o365, etc., it's possible they could recoup all of the value of their investment and more. At the very least it is a significant minimization of the downside.
As I understand it, they got all the model details and most of their investment was actually cloud credits on Azure. So technically they can cancel those going forward if they want to and deal with whatever legal ramifications exist. All of GPT4 (and other models) for probably $1-2b may not actually be a bad deal for them even if that's all they get.
They put out a statement saying they have what they need. I don't see how Microsoft loses here. Either they get Altman back at OpenAI, get rid of the ethics crowd, and make bank, or they fund his new startup without the move-slow crowd and make bank. No matter what, they win.
I'd say easily, especially outside the US. Check out Germany for example:
- Bertelsmann Foundation, owns or is the majority shareholder of Bertelsmann
- Robert Bosch Foundation, owns or is the majority shareholder of Bosch
- Alfred Krupp von Bohlen and Halbach Foundation, owns or is the majority shareholder of Krupp
- Else Kröner Fresenius Foundation, owns or is the majority shareholder of Fresenius
- Zeppelin Foundation (yes, those Zeppelins...) owns or is the majority shareholder of ZF Friedrichshafen
- Carl Zeiss Foundation, owns or is the majority shareholder of Carl Zeiss and Schott
- Diehl Foundation, owns or is the majority shareholder of Diehl Aerospace
And a bunch more. A lot of you will never have heard of them, but all of them are multi billion dollar behemoths with thousands of subsidiaries, employees, significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.
Easy, they own shares. For example, the nonprofit Mormon church owns $47 billion in equity in publicly traded companies including Amazon, Exxon, Tesla, and Nvidia[1].
Nothing stopping a non-profit from owning all the shares in a for-profit.
Wrong by what metric? What if they believe the only way to fulfill their duty to the charter is for OpenAI to die? Why would that be wrong? Is it worse than it living on as the antithesis of itself, just so the investors can have a little more honey?
They don't have any duty as far as governing the non-profit, but as majority shareholder of the for-profit subsidiary, the non-profit would still have a fiduciary duty to the subsidiary's minority shareholders.
Duties to not dilute them or specifically target them, but majority can absolutely make decisions about executives even if those decisions are perceived as harmful.
I'm surprised that none of these investors secured a board seat for themselves before handing over tens of billions. The board is closer to a friendship circle than a group of experienced business folks.
But that aside, how did so many clueless folks who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, happen to be on the board of one of the most important tech companies?
Not many, and even fewer if you consider folks who have a good grasp of themselves: their psychology, their emotions and how those can mislead them, and their heart.
IME most folks at Anthropic, OpenAI or whatever that are freaking out about things never defined the problem well and typically were engaging with highly theoretical models as opposed to the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past given that these were typically the folks I knew working there.
Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.
It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and what may or may not make them more or less "safe".
Yes, with the addition that I do feel that we deserve something better than I perceive we’ve gotten so far and that safety is super important; but also I don’t work at OpenAI and am not Ilya so idk
Typically it would be framed amicably, without so much axe-grinding, particularly for public release. Even ChatGPT itself would have written a more balanced release, and advised against such shenanigans. I enjoy that irony.
That's the thing. Lawyers can give them the letter of the law but might have no idea how popular Sam was inside and outside the company, or how badly he was needed. And that's what really matters here.
Why does it matter to a board that sticks to the principles of the charter of a non-profit? Why would they look at anything other than the guiding principles?
Because their charter says their goal is to get to AGI, or something like that.
If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.
I wonder if any of this is related to it being envisioned as a non-profit board, but in the past ~year, the for-profit part has outgrown what they were really ready to handle.
Maybe they did, and it didn't help them. Guardrails for ChatGPT will prevent it from predicting outcomes or providing any personalized advice. I asked it, and it just said to consult with counsel and have a succession plan.
>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:
They almost certainly consulted both lawyers and chatGPT and still proceeded with the dismissal. So, in a way, this could be a test of the alignment of chatGPT (and corporate lawyers).
One scenario where both parties are fallible humans and their hands are forced: increased interest forces them to close down Plus signups because compute can't scale. Sam goes to Brockman and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. Breaking that rule may be perfectly fine with GPT-4, but what if Sam does this again in the future when they have AGI on their hands?
>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
From Forbes [1]
Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
I’ve mused that an advanced AGI would probably become suicidal after dealing with humans for a while and realizing there’s no escape. Maybe this is an attempt.
The archive guy has been very upfront that they use custom code to block resolution from Cloudflare's IP space. archive.today doesn't like Cloudflare because its resolver doesn't send EDNS Client Subnet information to archive's nameservers; that rationale seems like bullshit, though, since they support other non-Cloudflare resolvers that don't send ECS either, so it's probably some other beef.
This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. Boards are just appointed by shareholders aka investors aka capital holders to do their bidding.
The capped-profit / non-profit structure muddles that a little bit, but the reality is that entity can't survive without the funding that goes into the for-profit piece
And if current investors + would-be investors threaten to walk away, what can the board really do? They have no leverage.
Sounds like they really didn't "play the tape forward" and think this through...
A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.
No stakeholder would walk away from OpenAI for want of Sam Altman. They don’t license OpenAI technology or provide funding for his contribution; they do it to get access to GPT4. There is no comparable competitor available.
If anything they would be miffed about how it was handled, but to be frank, unless GPT4 is Sam Altman furiously typing, I don’t know that he’s that important. The instability caused by the suddenness, that’s different.
Nothing matters if you don’t have the money to enforce the system. Come on get real. Whatever the board says MS can turn off the money in a second and invalidate anything.
Microsoft depends on OpenAI much more than OpenAI depends on Microsoft. If you work with OpenAI as a company very often this is extraordinarily obvious.
This doesn't seem very obvious to me. The fact this article exists, and that Microsoft is likely exerting influence over the CEO outcome, implies there's codependence at a minimum.
I'm not sure this is true- Microsoft put something like 10 billion into OpenAI, which they absolutely needed to continue the expensive computing and training. Without that investment money OpenAI might quickly find themselves at a huge deficit with no way to climb back out.
Ah yes, no other company would step in and get this deal from OpenAI if Microsoft pulls out. It's not like Amazon and Google pump billions into the OpenAI competitor.
I’m pretty sure there are contracts, and one way or another, everyone would get a stay on everyone else and nothing would happen for years except court cases
Which one side or the other would declare terminated for nonperformance by the other side, perhaps while suing for breach.
> and one way or another, everyone would get a stay on everyone else
If by a stay you mean an injunction preventing a change in the arrangements, it seems unlikely that "everyone would get a stay on everyone". Likelihood of success on the merits and harm that is not possible to remediate via damages that would occur if the injunction wasn't placed are key factors for injunctions, and that's far from certain to work in any direction, and even less likely to work in both directions.
> and nothing would happen for years except court cases
Business goes on during court cases, it is very rare that everything is frozen.
If you’ve used the models for actual business problems GPT4 and its successive revisions are way beyond llama. They’re not comparable. I’m a huge fan of open models but it’s just different worlds of power. I’d note OpenAI has been working on GPT5 for some time as well, which I would expect to be a remarkable improvement incorporating much of the theoretical and technical advances of the last two years. Claude is the only actual competitor to GPT4 and it’s a “just barely relevant situation.”
Hm, it’s hard for me to say because most of my prompts would get me banned from OpenAI but I’ve gotten great results for specific tasks using finetuned quantized 30B models on my desktop and laptop. All things considered, it’s a better value for me, especially as I highly value openness and privacy.
For an individual use case Llama is fine. If you start getting to large workflows and need reliable outputs, GPT wins out substantially. I know all the papers and headlines about comparative performance, but that's on benchmarks.
Ive found that benchmarks are great as a hygiene test, but pointless when you need to get work done.
Even the best unquantized finetunes of llama2-70b are, at best, somewhat superior to GPT-3.5-turbo (and I'm not even sure they would beat the original GPT-3.5, which was smarter). They are not even close to GPT-4 on any task requiring serious reasoning or instruction following.
I use Faraday.dev on an RTX 3090 and smaller models on a 16gb M2 Mac and I’m able to have deep, insightful conversations with personal AI at my direction.
I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.
ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously the Playground completions allowed using raw GPT which let me unlock some different facets, but they’ve closed that down now.
And again, I don’t really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want the uncensored model to play with along with no data leaving my network.
The uses of LLMs for information retrieval are great (Bing has improved a lot), but the much more interesting cases for me are how they are able to parse nuance, tone, and subtext - imagine a computer that can understand feelings and respond in kind. Empathetic computing, and it’s already here on my PC, unplugged from the Internet.
Another data point: I can (barely) run a 30B 4-bit quantized model on a Mac Mini with 32 GB of on-chip memory, but it runs slowly (a little less than 10 tokens/second).
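The arithmetic behind why a 4-bit 30B model barely fits in 32 GB while an fp16 one never could can be sketched in a few lines. This is a back-of-envelope estimate with an assumed 20% overhead factor for KV cache and runtime, not an exact figure for any particular inference stack:

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate RAM for model weights; `overhead` is a rough,
    assumed multiplier for KV cache and runtime structures."""
    bytes_per_weight = bits_per_weight / 8
    # 1e9 params per billion cancels with 1e9 bytes per GB,
    # so params_billion * bytes_per_weight is already in GB.
    return params_billion * bytes_per_weight * overhead

# 30B at 4-bit: ~18 GB with overhead -- tight but workable in 32 GB
print(round(model_memory_gb(30, 4), 1))   # 18.0
# The same model at fp16: ~72 GB, far beyond a 32 GB machine
print(round(model_memory_gb(30, 16), 1))  # 72.0
```

Unified memory on Apple Silicon is shared with the OS and apps, so the practical headroom is smaller than the raw 32 GB, which fits the "barely runs" observation above.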
AWS is JP Morgan’s main cloud provider, and Apple’s too. Do you think AWS has leverage over JPMC and Apple due to that? Or do JPMC and Apple have leverage over AWS?
Azure gets a hell of a lot more out of OpenAI than OpenAI gets out of Azure. I’ll bet you GPT4 runs on Nvidia hardware just as well regardless of who resells it.
I think the larger issue here is that there's just not enough of that nvidia hardware out there if Microsoft decided to really play hardball, even if it hurts themselves in the short term. I don't know that any of the other cloud providers have the capacity to immediately shoulder OpenAI's workloads. JPMC or Apple have other clouds they can viably move to - OpenAI might not have anyone else that can meet their needs on short notice.
I think the situation is tough because I can't imagine there aren't legal agreements in place around what OpenAI has to do to access the funding tranches and compute power, but who knows if they are in a position to force the issue, or if I'm right in my supposition to begin with. Even if I am, a protracted legal battle where they don't have access to compute resources, particularly if they can't get an injunction, might be extremely deleterious to OpenAI.
Perhaps Microsoft even knows that they will take a bath on things if they follow this, but don't want to gain a reputation of allowing this sort of thing to happen - they are big enough to take a total bath on the OpenAI side of things and it not be anything close to a fatal blow.
I was more skeptical of this being the case last night, but less so now.
But why would Microsoft do anything to hurt their business in any way? They are almost certainly more furious for the way they found out than the actual action taken. Given how much Microsoft has bet their business on OpenAI (ask yourself who replaces bing chat? Why does anyone actually use azure in 2023?) being surprised by structural business decisions in their most important partner is shocking, and I think if I were the CEO of Microsoft I would be furious at being shocked more than pining in some weird Altman bromance.
> I would be furious at being shocked more than pining in some weird Altman bromance.
Hypothetically he might also have very little trust in the decision making abilities of the new management and how much their future goals will align with those of Microsoft.
JP Morgan and Apple can actually afford to pay their cloud bills themselves. OpenAI, on the other hand, can't.
> I’ll bet you GPT4 runs on nvidia hardware
Yes, but they'll need to convince someone else like Amazon to give it to them for free, and regardless of what happens next, Microsoft will still have a significant stake in OpenAI due to their previous investments.
Something I don't fully understand, from [1]: Altman was an employee of the for-profit entity. So to fire him, wouldn't the non-profit board be acting in its capacity as a director of the for-profit entity (and thus have a fiduciary duty to all shareholders of the for-profit entity)? Non-profit governance is traditionally lax, but would the other shareholders have a case against the members of the non-profit board for acting recklessly with respect to shareholder interests in their capacity as directors of the for-profit?
This corporate structure is so convoluted that it's difficult to figure out what the actual powers/obligations of the individual agents involved are.
LLCs do not require rights be assigned fairly to all shareholders if the operating agreement and by-laws say otherwise. This is the case with OpenAI, where the operating agreement effectively makes the fiduciary duty of the for-profit the accomplishment of the non-profit's charter. The pinkish purpleish block of text on the page you linked goes into more detail here.
(Remember, fiduciary does not necessarily have anything to do with money)
> A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.
There is theory and there is reality. If someone is paying your bills by an outsized amount and they say jump, you will say how high.
The influence is rarely that explicit though. The board knowing that X investor provides 60% of their funding, for instance, means the board is incentivized to do things that keep X investor happy without X having to ask for it.
Nine times out of ten, money drives decisions in a capitalist environment.
OpenAI hasn’t received much funding from Microsoft or other investors, and is profitable already with no lack of interested suitors for funding and partnership. Microsoft’s leverage is grossly overstated mostly because it suits Microsoft to appear important to OpenAI when it’s the other way around.
They received a 10 billion dollar investment that allows the product to operate, plus they provide the servers. Without that, your $20 a month goes to $2,000.
They received much less than 10 billion, and it's mostly in credits (so really about half the value), in exchange for exclusive access to the world's most advanced LLM?
They’ve actually drawn very little of that $10b. They are profitable at the moment, and would have no trouble raising funds from anywhere at the moment in any quantity they wanted.
The board can and does claim it because it is legally a non profit. There is no wishy washy space this isn’t true in. Sam Altman isn’t the source of their funds, regardless. Finally, OpenAI has a pretty successful business model already without outside investment, and without sam or with sam they will not have trouble accessing customers or investors should they need it, even from Microsoft. Let’s be real Altman isn’t OpenAI.
> No stakeholder would walk away from OpenAI for want of Sam Altman. They don’t license OpenAI technology or provide funding for his contribution; they do it to get access to GPT4. There is no comparable competitor available.
The implication in Microsoft's statement is clear that they have what they need to use the tech. I read it to mean OpenAI board does not have leverage.
Even threatening that, if disclosed publicly, would entirely threaten Azure's business model. Cloud providers try to stay entirely neutral to their users' businesses insofar as they don't breach a ToS, law, or regulation forcing their actions. The entire business model is trusting a third party with the keys to your business. In my time working as a senior person at a cloud provider, and then as a person setting up systems for major customers of cloud providers, this specific point was sacrosanct and invariant. Crossing that line would be a huge breach of the business model.
I think in this case I would need to see a source to believe you, and if substantiated, it would make me question Nadella's fitness to lead a cloud computing business.
Can't find the original thing I read with a more direct statement, I remember it being an anonymous source (on twitter maybe?) with inside info. I did more digging and found a few other things.
There's this [1], a NYT article saying that Microsoft is leading the pressure campaign to get Altman reinstated.
And there's this [2], a Forbes article which claims the playbook is a combination of mass internal revolt, withheld cloud computing credits from Microsoft, and a lawsuit from investors.
This is not just a "non-profit"... it's a non-profit that owns a $90B for-profit company developing revolutionary, once-in-a-century technology. There is a LOT of money at play here.
Others have commented on how Microsoft actually has access to the IP, so the odds that they could pack their toys and rebuild OpenAI 2.0 somewhere else with what they've learned, their near infinite capital and not have to deal with the non-profit shenanigans are meaningful.
I'm not saying Sam is needed to make OpenAI what it is, but he's definitely "the investors' guy" in the organization, based on what has surfaced over the last 24 hours. Those investors would rather have him there over someone else, hence the pressure to put him back. It doesn't matter whether you and I think he's the man for the job -- what matters is whether investors think he is.
TL;DR the board thinks they have leverage, but as it turns out, they don't
Microsoft doesn’t have ownership rights to OpenAI IP. They license it. They can’t pack up anything as they just have an IAM and billing model on top of GPT4 they use to resell OpenAI tech.
> Microsoft doesn’t have ownership rights to OpenAI IP. They license it.
Honest question, do you have a source for that? Is it conceivable that Microsoft has some clause that grants them direct access to IP if OpenAI does not meet certain requirements. It is difficult to believe that Microsoft handed over $10B without any safeguards in place. Surely they did their due diligence on OpenAI's corporate structure.
The OpenAI for-profit's main purpose is to fulfill the desires of the non-profit. If there's a contract that goes against that, the contract would be voided if necessary, or that stipulation crossed out.
I would expect that Microsoft would have negotiated terms like a perpetual license to the IP, given that they were the main investor and were in a strong negotiating position.
Microsoft has a lot of experience interacting with small companies, including in situations like this one where the small company implodes. The people there know how to protect Microsoft's interests in such scenarios, and they definitely are aware that such things can happen.
Not one they own, they don't. OpenAI owns all of the GPT IP. Microsoft has a licensing arrangement with OpenAI. I'd note that Azure GPT is not a custom model; only the Bing chat is custom. And even the customizations aren't owned by Microsoft.
I didn't see any actual evidence of that, other than speculation and outside, uninvolved investors advocating for him in the article. I suspect this is clickbait.
"Ilya Sutskever
@ilyasut
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
Because of a news article saying a prior VC firm is pushing to reinstate sam or fund his new venture and didn’t care which way it goes? That’s not a lot to hang your hat on. They legally have every right to do what they did and no one can force them to change their mind under any circumstance. They might choose to, but OpenAI has all the cards. Sam Altman is a talking head, and if they churn some senior folks, OpenAI has the technology and brand to replace them. If I were the OpenAI board, I would be sleeping like a baby, especially if sam were acting out of sync with the charter of the non profit. I imagine his antics caused a lot of stress the further they drifted from their mission and the more he acted autonomously.
Sorry, but you are just simply factually incorrect. That the board itself serves at the pleasure of other interests is clear (and even then, if they don't want to leave getting rid of them can be tricky depending on the details) but they do call the shots. The question is whether or not they can make it stick.
But until he is re-hired Sam Altman is to all intents and purposes fired. And it may well come to that (and that would almost certainly require all those board members who voted for his ouster to vacate their positions because their little coup plan backfired and nobody is going to take the risk of that happening again, especially not in this way).
Sorry, but I am just simply not factually incorrect. Again you want me to just take your opinion as fact... but stating it strongly doesn't make your argument more cogent.
Boards are agents to their principals. They call the shots only as long as their principals deem them to be calling them correctly. If they don't, they get replaced. Said differently, board members are "appointed" to do the bidding of someone else. They have no inherent power. Therefore, they do not, ultimately, call the final shots. Owners do. Like I said, this situation is a little muddier because it's a non-profit that owns a for-profit company, so there's an added layer of complexity between agents and principals.
OpenAI isn't worth $90B because of its non-profit. The for-profit piece is what matters to investors, and those investors are paying the bills. Sure, the non-profit board can fire Altman and carry on with their mission, but then everyone who is there "for profit" can also pack up their things and start OpenAI 2.0 where they no longer need the non-profit, and investors will follow them. I assume that's an undesirable outcome for the board as I suspect the amount of money raised at the for-profit level dwarfs the amount donated to the non-profit... which effectively means the for-profit shareholders own the company. Hence my original comment.
They call the shots until they are overruled (by a court, or by a new board after the current members have been forced out, which isn't all that simple; otherwise no board could ever function in its oversight role in a non-profit), and even then, until that process has run its course, their statements are factually correct. I know this is all hairsplitting, but it really does matter. When the board put out a statement saying they had fired Altman, that was that. They can re-hire him or they can reverse their decision, but until that happens their decision stands.
Yes, they are accountable (and I'm actually surprised at how many people seem to believe that they are not), but they are not without power. Legal and practical are not always exactly overlapping and even if the board may not ultimately hold practical power (even if they believe they do) legally speaking they do and executives function at the pleasure of the board. If the board holds a vote and the bylaws of the company allow for it and the vote passes according to those bylaws then that's that. That's one good reason to pack the board of your billions of dollars worth company with seasoned people because otherwise stuff like this may happen.
Afterwards you can do a lot about it, you can contest the vote, you can fight it in court, you can pressure board members to step down and you can sue for damage to the company based on the decision. But the board has still made a decision that is in principle a done deal. They can reverse their decision, they can yield to outside pressure and they can be overruled by a court. But you can't pretend it didn't happen and you can't ignore it.
You're missing the whole point of my comment for the sake of arguing you're quote-unquote "correct"
I'm not saying the board doesn't make decisions or that the board is powerless, or that their decisions are not enforceable or binding. That's already known to be true, there's no value in arguing that.
I'm saying the _ultimate_ decision is made by the people with the money, inevitably. The board is allowed to continue making decisions until they go against the interests of owners. The whole point of a board is so owners don't have to waste their time making decisions, so instead they pay someone else (directors) to make them on their behalf.
Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough.
Yes, and we're in agreement on that last part, see my other comments in the thread and in previous threads on the same subject.
In fact we are very much arguing that thing in the same way. But you do have to get the minutiae right because those are very important in this case. This board is about to - if they haven't already - find out where the real power is vested and it isn't with them. Which is kind of amusing because if you look at the people that make up that board some of them should have questioned their own ability to sit on this board based on qualifications (or lack thereof) alone.
> This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. (...) what can the board really do? They have no leverage
Which I later restated as "Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough." (emphasis added) -- which hopefully you agree is a clear restatement of my original comment.
Meanwhile you said
> This is wildly incorrect. (...) you are just simply factually incorrect. (...) But until he is re-hired Sam Altman is to all intents and purposes fired.
But I never claimed he wasn't for all intents and purposes fired
Yet you did claim I was "wildly" and "factually incorrect" and now you're saying "we are very much arguing that thing in the same way" but "you do have to get the minutiae right". To me, minutiae was sufficiently provided in the original comment for any minimally charitable interpretation of it. Said differently, the loss of minutiae was on the reader's part, not the writer's.
Regardless, lack of minutiae is not comparable to "wildly" or "factually" incorrect. Hence I was not either of these things. QED.
Unions exist precisely to try to pool together the minuscule leverage that workers have so that they can fight with capital owners. If anything, they prove the point that staff have very limited power
This suggests a plausible explanation: Altman was attempting to engineer the board's expansion or replacement. After the events of the last 48 hours, could you blame him?
In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.
Any talented engineer or scientist who actually wants to ship product AND make money would head over to Sam’s startup. Any investor who cares about making money would fund Sam’s startup as well.
The way the board pulled this off really gave them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering if it will even be worth paying for if the model's improvements stagnate with the departure of these key people.
And any talented engineer or scientist who actually wants to build safe AGI in an organization that isn't obsessed with boring B2B SaaS would align with Ilya. See, there are two sides to this? Sam isn't a god, despite what the media makes him out to be; none of them are.
AGI has nothing to do with transformers. It's a hypothetical towards which there has been no progress other than finding things that didn't work. It's a cool thing to work on, but it's so different than what the popular version of openAI is, and it has such different timescales and economics... if some vestigial openAI wants to work on that, cool. There is definitely also room in the market for the current openAI centered around GPT-x et al, even if some people consider SaaS beneath them, and I hope they (OpenAI) find a way to continue with that mission.
It's been, like, two years, dude. This mindset is entirely why any organization which has a chance at inventing/discovering ASI can't be for-profit and needs to be run by scientists. You've got TikTok brain. Google won't be able to do it, because they're too concerned about image and also have a bad case of corpo TikTok brain. Mistral and Anthropic won't be able to do it, because they have VC expectations to meet. Sam's next venture, if he chooses to walk that path, also won't, for the same reason. Maybe Meta? Do you want them being the first to ASI?
If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement, and it's the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this: they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.
My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be their second, if Satya influences Sam back into the boardroom then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And; absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.
The hard reality that everyone needs to accept at this point is that OpenAI is probably finished, unless they made some massive breakthrough a few weeks ago, which Sam did hint at three days ago. That should be the last hope we all hold on to: that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best-case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.
The problem is that this "AGI research group" is staffed by people who build statistical models, call them AI, and are delusional enough to think this is a route to general intelligence.
There is no alternative, if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm' -- the only thing that can come of it is a commercialised trinket.
So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.
If sam is the only one interested in the app, its because he's the only sane guy in the room.
There is little evidence that conditional statistical models can never be a route to AGI. There's limited evidence they can, but far less they can't.
You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
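The predictor-corrector idea mentioned above can be sketched as a toy TD(0) loop. Everything here (the 5-state chain, the learning rate, the `td0_chain` name) is made up purely for illustration; it's only meant to show how a value table is a conditional statistical model nudged by a reinforcement signal.

```python
# A minimal TD(0) sketch on a toy 5-state chain with a reward of 1.0 on
# reaching the final state. The value table V is a statistical prediction
# of future reward, corrected toward (reward + discounted next prediction).
# All names and numbers are invented for illustration.

def td0_chain(n_states=5, sweeps=500, alpha=0.1, gamma=0.9):
    V = [0.0] * n_states                         # value prediction per state
    for _ in range(sweeps):
        for s in range(n_states - 1):            # walk the chain left to right
            reward = 1.0 if s + 1 == n_states - 1 else 0.0
            target = reward + gamma * V[s + 1]   # the "corrector" signal
            V[s] += alpha * (target - V[s])      # move prediction toward it
    return V

values = td0_chain()
# States closer to the reward end up with higher predicted value:
# roughly [0.73, 0.81, 0.9, 1.0, 0.0] with gamma = 0.9.
```

The point of the analogy is that nothing in this loop "understands" the chain; it just fits a conditional expectation by repeated correction.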
I am well aware of the literature in the area. 'Trained by reinforcement' in the case of animals includes direct causal contact with the environment, as well as sensory-motor adaption, and organic growth.
The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.
I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.
The idea that P(A|B) is even relevant to AGI is a sign of a fundamental basic lack of curiosity beyond what is on-trend in computer science.
We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.
But who cares. The burden-of-proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counter-factual reasoning, deduction, abduction, sensory-motor adaption, etc.
The gap between what has been provided and this burden-of-proof is laughable
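The concession a few comments up, that a conditional probability model can encode arbitrary (Q, A) pairs, can be made concrete with a toy lookup table. The class name and the example pairs below are invented for illustration; the point is that memorizing P(answer | question) requires no model of causation or relevance at all.

```python
# A bare conditional table P(answer | question): it can memorize any finite
# set of (Q, A) pairs perfectly, while modeling nothing about the domain.
from collections import defaultdict

class ConditionalTable:
    def __init__(self):
        # counts[q][a] = number of times answer a was observed for question q
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, q, a):
        self.counts[q][a] += 1

    def prob(self, q, a):
        total = sum(self.counts[q].values())
        return self.counts[q][a] / total if total else 0.0

model = ConditionalTable()
for q, a in [("2+2?", "4"), ("capital of France?", "Paris")]:
    model.observe(q, a)

# Perfectly "answers" the memorized prompts...
assert model.prob("2+2?", "4") == 1.0
# ...and says nothing at all about anything outside the table.
assert model.prob("2+3?", "5") == 0.0
```

Which is exactly the shape of the objection above: encoding (Q, A) pairs is cheap; the open question is everything the table doesn't do.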
There are significantly fewer people that would want to work with Ilya than there are people that would want to work with Sam/Greg.
If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping and how will they make it accessible to others then maybe there would be more support.
And if Microsoft had major concerns about OpenAI's board and governance, it should have been voiced and addressed before they invested. Yet; here we are; slaves to our past decisions.
Would those talented engineers or scientists be content with significantly lower compensation and generally far fewer resources to work with? However good their intentions might be, this probably won't make them too attractive to future investors, and antagonizing MS doesn't seem like a great idea.
OpenAI is far from being self-sustainable and without significant external investment they'll just probably be soon overtaken by someone else.
I don't know; on a lot of those questions. I tend to think that there was more mission and ideology at OAI than at most companies; and that's a very powerful motivational force.
Here's something I feel higher confidence in, but still don't know: Its not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) Technology always evolves forward, and (2) More money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.
And here's something I have even higher confidence in: "Being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.
Citation needed on the ideology being a powerful motivational force in this context. People who think they're doing groundbreaking work that'll impact the future of humanity are going to be pretty motivated ideologically either way regardless of if they're also drinking the extra flavor from the mission statement's Kool-Aid.
It's just an illusion that Sam is trying to be unsafe about it; it's a scare tactic of sorts to get what they want. For example, regulations, and now, internally, power. It's all BS, man, this "AI will end the world" stuff; it's pushed for an agenda and you all are eating it up.
OpenAI is very conspicuously the only lab that (a) managed to keep the safety obsessives in their box, (b) generate huge financial upside for its employees and (c) isn't run by a researcher.
If Altman's contribution had simply been signing deals for data and compute and keeping staff fantasies under control, that already makes him unique in that space and hyper-valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do ChatGPT because they thought nobody would care.
If that was the only issue, why not just go to Google, Meta, or Microsoft directly to work on their AI stuff? What do you really need Altman for?
Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.
I still haven't heard an explanation of why people who use GPT would be under the impression that Sam had anything to do with the past improvements in GPT versions.
Have you really never been at a place without someone with vision leading the cause? Try it some time and you'll start understanding how and why a CEO can make or break a company.
> It's far more common for teams to succeed despite (or even in spite) of executive leadership.
People say this like it's some kind of truism, but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company.
Sam attracted money and attention, which attracted talent. If Sam departs for another venture, some - or a lot - of the talent and attention and money will leave too. This isn’t a car factory where you can replace one worker with another. If some of the top folks leave with Sam (as they already are) it’s reasonable to assume that the product will suffer.
I'd understand the argument for Brockman, considering he had a hand in recruiting the initial team at OpenAI, was previously the CTO, by some reports still involved himself in coding, and was the only other founder on the board besides Ilya.
This is a power struggle between the Silicon Valley VC group and the AI scientists. This conflict was bound to happen at some point at every company. I don't think the interests of the two groups align after a certain point. No self-respecting AI scientist wants to work hard making closed-model SaaS products.
Why are people calling this already? There was a coup. The people on the losing end, which includes some large investors, counterattacked. That's where we are now (or were when the article was published). Of course they counterattacked! But did the counterattack land? I'm not sure why you're assuming it did. Personally, I don't know enough to guess. Given that the board was inspired to do this by the very mission that the non-profit was set up to safeguard, there's some level of legal coverage, but enough to cover their asses from a $10 billion assault? I for one can't call it.
They might not even have believed that they'd win, just that this outcome would be better than being silently outmaneuvered.
If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.
This is the most likely explanation. Altman was going to oust them, which is why they had to make what seems like a bad strategic move. The move seems bad from our perspective, but it's actually the most logical strategy for the board in terms of self-preservation. I agree; I think this is most likely what occurred.
I'm sure there are ways that we aren't privy to knowing, just like we don't know why Altman was fired. Why was Sam Altman being dishonest, and what was he dishonest about?
This reasoning is the only one that makes sense: the action taken by the board aligns with logic, and with some private action by Sam Altman that could have offended the board.
The story of the board being so incompetent as to fire such a key person out of sheer stupidity is just the most convenient, attractive and least probable narrative. It's rare for an intelligent person to do something stupid, and even rarer for an entire board of intelligent people to do something stupid in unison.
But it's so easy to fall for that trope narrative.
My first thoughts yesterday were: Some really bad scandal happened at OpenAI (massive data leak, massive fraud, or huge embezzlement), or the board is really incompetent and doesn't know what they're doing. But an organization as big as OpenAI, with the backing of Microsoft and other big players would never make such a big decision without a really good reason.
This is unfathomably depressing for me; I am solidly in the non-profit, open etc. camp, and the way the board has handled the situation seems to be putting a tombstone to any opponents to Altman's way of doing things: profits uber alles, moats galore, non-profit-wink-wink-nudge-nudge; an unmitigated disaster.
If you want more open research and development, you should be happy for a closed OpenAI. It's why we have Mistral. Let the org redefine itself and push new boundaries. If we didn't have commercial unix there would be no linux. Allow the path to be blazed by VC, it's not about open technology being first, it's about it even existing at all.
Other commenters here have pointed out what seems to be most plausible: Altman was making moves to fire or alter the board, so they made a (bad) first move, and it's now backfired on them.
If the reports are true, and Ilya led the coup, then either he or Sam can be at OpenAI going forward, but not both. The rest of the board members who sided with him are gone either way.
Regardless of who ends up at the helm, OpenAI is going to be a different place on Monday than it was on Thursday, and not for the better.
Obviously Sam wasn’t the best fit for OpenAI and investors aren’t even saying what the problem is. Clearly the board feels he was the wrong person for the job.
I think it's ridiculous that everyone thinks that Sam being ousted means OpenAI is in trouble. Let this play out and see how it evolves.
the former went on garden leave for 6 months (actually even before the Vega launch) to make a movie with his brother, and then resigned to “spend more time with his family”, before popping up again a month later at intel. That’s what it looks like when they want you to go away but they don’t want to make a big scene over it.
the latter fucked up so badly the board found a reason to kick him out without a golden parachute etc., despite the official reason (dating another employee) being something that was widely known for years and, beyond being a technical no-no/bad idea in general, not the real cause. He wasn't kicked out because of that; he was kicked out for the combination of endless fab woes, Spectre/Meltdown, and bad business/product decisions that let AMD get the traction to become a serious competitor again. That's what it looks like when the board is absolutely furious and pushes whatever buttons inflict the most pain on you.
Ironic that it’s a bit of an auto-antonym (auto-antoidiom?), it’s ceremonious when they want you to go away quietly and it’s unceremonious when they publicly kick your ass to the curb so hard you’ve got boot marks for a week.
Isn't this a military thing? "Honorable discharge" or something like that? Bunch of people at a ceremony, maybe a speech about the person's contribution, they get given a medal, family is there in their nicest clothes?
A prolonged public exchange between sama and the board _before_ any firings, where they throw accusations at each other, followed by Microsoft pulling out, followed by people quitting and immediately resulting in a ChatGPT outage, followed by the firing of the CEO.
It's not like the board, except for Ilya, has any real capital or expertise to convince everyone this is the right decision.
If they do, it is the perfect time to speak out, not let this news bubble up to the front page with everyone talking about how disastrous they were.
What is this board waiting for, then? The weekend?
The board isn't bulletproof and they are not gods. They can fire Sam, yes, but it won't stop people from thinking this is stupid, or that it will do more harm than good to OpenAI.
Perhaps they are smoothing things out with some key stakeholders after the fact, and will have more to say Monday regarding all this. I doubt they aren't doing some amount of information level-setting with people now that the decision has been made.
It's quite possible that he wasn't the best fit, and that the board is an even worse fit. Judging by the behavior of the board, it's hard to see them being a good fit for the company.
Based on the firing? Because that’s all I think we (the public) have any insight into.
I'm saying there is a reason this happened and 2/3 of the board agreed. It needs to play out further for us to see if there is a problem here or not, honestly.
I find it hard to believe you can effectively muster a mandate's worth of votes based on opinion alone.
As others have pointed out, this board has no skin in the game. They just voted out founders who do have skin in the game (although through roundabout means). It’s a very tough sell that this board is doing the right thing.
Just to clarify, one founder on the board, Ilya, has skin in the game, and was the reason behind Sam's firing.
He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated his backing.
This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.
Maybe, or maybe he was in fact unpopular among the majority and you are seeing Altman supporters leave. It happens.
There is nothing to indicate that this bleeds OpenAI more generally. The rank and file, as far as I'm aware, aren't resigning en masse.
Executives come and go. Show me why these people matter so much that OpenAI has no future, then we can talk. It's infighting that became public, and I'm certain people are pulling whatever strings they have on this, but I don't see objective evidence that these people make OpenAI successful.
Rank and file perhaps aren't yet resigning en masse, but I would be extremely surprised if there weren't a bunch who jump to the new ship solely because that puts them higher up the totem pole.
Now will that be another 3 or another 30, time will tell.
Bear in mind - most folks are loyal to a paycheck and their best estimate of future paychecks/value. Quitting on the spot because your friend/boss got fired wrongly is unlikely to maximize either of those, unless you were already planning to resign in the next few weeks.
Now, do a bunch of OpenAI peeps interview at Meta/Google/Amazon/Anthropic/Cohere over the next few months? Certainly.
We don't know any of that. The only things we know for sure are the statement from the board, the statement from Altman that he was caught by surprise, the statement from Microsoft that they're supporting the new CEO, and a few of the people that left. Everything else is rumors and PR spin for now. Whether they have evidence for what they said in the statement about lying, we just don't know.
Perhaps part of the problem is that when some people say OpenAI they mean the non-profit parent of the for-profit, and when other people say OpenAI they mean the for-profit subsidiary of the non-profit.
Why did the board fire Sam in such a weird way? It shows that they are the wrong people for the job. If they wanted to get rid of him they should have done a better job than alienating everyone at the company.
A typical YC execution of a product/pump/hype/VC/scale cycle, ignoring every ethical rule, is a good way to start, and a reasonable way to lift a nonprofit onto a path to AGI. But it's hardly a good way to govern a company that builds AGI/ASI technology in the long term.
If Altman gets to return, it's goodbye to AI ethics within OpenAI and the elimination of the nonprofit. Also, I believe that hiring him back because of "how much he is loved by people within OpenAI" is like forgetting that a corrupt president did what they did. In all honesty, that has precedent, so it wouldn't be anything new. Also, I read a lot of people here saying this is about engineers vs. scientists… I believe that people don't understand that Data Scientists are full stack engineers. Ilya is one. Greg has just been inspiring people and stopped properly coding with the team a long time ago. Sam never wrote any code, and the vision of an AGI comes from Ilya… Even if Mira now sides with Sam, I believe there's a lot of social pressure for the employees to support Sam, and it shouldn't be like that.
Again, I do believe OpenAI was and is a collective effort. But, I wouldn’t treat Sam as the messiah or compare him to Steve Jobs. That’s indecent towards Steve Jobs who was actually a UX designer.
I have to work with code written by Data Scientists very often and, coming from a classical SWE background, I would not call what the average Data Scientist does full stack software engineering. The code quality is almost always bad.
This is not to take away from the amazing things that they do - The code they produce often does highly quantitative things beyond my understanding. Nonetheless it falls to engineers to package it and fit it into a larger software architecture and the avg. Data Science career path just does not seem to confer the skills necessary for this.
For me, anecdotally, it was more so the arrogance that was a major put-off. When I was a junior SWE I knew I sucked, and tried as hard as I could to learn from much more experienced developers. Many senior developers mentored me; I was never arrogant. Many data scientists, on the other hand, are extremely arrogant. They often treat SWE and DevOps as beneath them, like servants.
I see a lot of work done by data scientists and a lot of work done by what I would call “data science flavoured software engineers”. I’ll take the SWE kind any day of the week. Most (not all, of course!) data scientists have an old-school “it works on my machine” mentality that just doesn’t cut it when it comes to modern multi-disciplinary teaming. DVCS is the exception rather than the rule. They rarely want to use PMs or UI/UX, and the quality of the software is not (typically) up to production grade. They’re often blindingly smart, there’s no doubt about that. But smart and wise are not the same thing.
> I believe that people don’t understand that Data Scientists are full stack engineers.
What do you mean by "full stack"? I'm sure there's a spectrum of ability, but frankly where I'm from, "Data Scientist" refers to someone who can use pandas and scikit-learn. Probably from inside a Jupyter notebook.
Maybe she just meant that "data scientists are engineers too", rather than saying that they work on both the ChatGPT web UI and the machine learning code on the backend.
My view of the world, and how the general structure is where I work:
ML is ML. There is a slew of really complex things that aren't just model related (ML infra is a monster), but model training and inference are the focus.
Backend: building services used by other backend teams or maybe used by the frontend directly.
Data eng: building data pipelines. A lot of overlap with backend some days.
Frontend: you spend most of the day working on web or mobile technology
Others: site reliability, data scientists, infra experts
Common burdens are infrastructure, collaboration across disciplines, etc.
But ML is not backend. It’s one component. It’s very important in most cases, a kitschy bolt on in other cases.
Backend wouldn’t have good models without ML and ML wouldn’t be able to provide models to the world reliably without the other crew members.
The frontend being charts is incorrect, unless charts are the offering of the company itself.
On the other hand, having virtually the whole staff willing to follow him shows they clearly think very highly of him. That kind of loyalty is pretty wild when you think about how much being a part of OpenAI means at this point.
That's the whole point of the story: then they wouldn't have treated him as a hero and been loyal to him. If you're just an employee, your boss should be just a boss.
It’s possible he paid well and was a great boss. I don’t know if these people are gonna take a bullet for him, but maybe he was great to work for and they got opportunities they think they wouldn’t have otherwise.
Loyalty, appreciation, liking… is a spectrum. Loyalty doesn’t have one trumpish definition.
Which news stories mentioned that virtually the whole staff was leaving? I saw a bunch of departures announced and others rumored to be upcoming, but no discussion of what percentage of the company was leaving.
I dislike AI ethics very much, especially in the current context; it feels meaningless. The current GPT-4 model has an over-regulation problem, not a lack of regulation.
The guardrails they put on it to prevent it from saying something controversial (from the perspective of the political climate of modern-day San Francisco) make the model far less effective than it could be.
It's a lot better than that. OpenAI is just very good execution of publicly available ideas/research, with some novelty that is not crucial and can be replicated. Moreover, Altman himself contributed near zero to the AI part itself (even from the POV of the product). So far, OpenAI's products follow more or less spontaneously from what LLMs were capable of. That is to say, there are sometimes crucial CEOs, like Jobs was for Apple: CEOs able to shape the product line with their ability to tell outstanding things apart from meh ones. But this is not one of those cases.
Because of the outstanding execution of OpenAI's technical folks, an execution that has nothing to do with Altman. Similarly, Mistral's 7B model has much better performance than others: some smart engineering plus finding the magical parameters that produce great results. Moreover, they have a lot of training power. Unfortunately, here the biggest competitor would be a company that lost its way a long time ago: Google. So OpenAI looks magical (while it is mostly using research produced by Google).
If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.
It would be great to see a truly open and truly human benefit focused AI effort, but OpenAI isn't, and as far as I can tell has no chance of becoming, that. Might as well at least try to be an effective company at this point.
>If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.
Many would disagree.
If you want a for-profit AI enterprise whose conception of ethics is dumping resources into an endless game of whack-a-mole to ensure that your product cannot be used in any embarrassing way by racists on 4chan, then the market is already going to provide you with several options.
I disagree that the “rationalist” and EA movements would make good decisions “for the benefit of humanity”, not that an open (and open source) AI development organisation working for the benefit of the people rather than capital/corporate or government interests would be a good idea.
That “experimenting with UBI” is indistinguishable from any other cryptocurrency scam. It took from people, and he described it in the words that define a Ponzi scheme. That project isn't “mitigating AI risk”; it pivoted to distinguishing between AI- and human-generated content, a problem created by his other company, by continuing to collect your biometric data.
Yes, that's exactly the one I was thinking about when unethical came up in this context. And I've been saying that from day #1: the way it is structured is just not ok.
I signed up for Worldcoin and have been given over $100, which I changed into real money, and I think it's rather nice of them. They never asked me for anything apart from the eye ID check. I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100, do let me know. If anything, I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.
> If you know of another crypto that wants to give me $100 do let me know.
I knew of several. I don't remember names, but I do remember one that was a casino and one that was tied to open-source contributions. They gave initial coins to get you in the door.
Because without a long-term plan you are just setting them up for a really hard fall. It is experimenting on people, where if the experiment goes wrong you're high and dry in your mansion and they get pushed back into something probably worse than where they were before. It ties into the capitalist idea that money can solve all problems, whereas in many cases these are healthcare and education issues first and foremost. You don't do that without really thinking through the possible consequences, and without ensuring that no matter the outcome, it is always going to be a net positive for the people you decide to experiment on.
It's not even necessary that he is unethical. The fact is that the structure of OpenAI is designed so that the board has unilateral power to do extreme shit for their cause. And if they can't successfully do extreme shit without the company falling apart and the money/charisma swaying all the people, then there was never any hope for this nonprofit AI benefiting humanity to have worked. Which you might say is obvious, but this was their mission.
This is the problem with people: they build icons to worship and turn a blind eye to the crooked side of that icon. Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business. Right now, Altman is irrelevant to the further development of AI and GPT in particular, because the vision for the AI future comes from the engineers and scientists of OpenAI. Apple has never had equipment good enough, or comparable in price/performance, to its market counterparts. The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand. GPT-4 and GPT-4 Turbo are a totally different matter: they are the best, but they are not irreplaceable. If you look at what Phind did to LLaMA-2, you'll see it is very competitive, though LLaMA-2 requires some additional hidden layers to further close the gap. Making LLaMA-2 175B or larger is just a matter of finances.
That said, Altman is not vital for OpenAI anymore. Preventing Altman from creating a dystopian future is a much more responsible task that OpenAI can undertake.
I don’t understand this take. Do you really think CEOs don’t have any influence on their business? Alignment, morale, resource allocation, etc? And do you really think that those factors don’t have any influence on the productivity of the workers who make the product?
A bad CEO can make everyone unhappy and grind a business to a halt. Surely a good one can do the opposite, even if that just means facilitating an environment in which key workers can thrive and do their best work.
Edit: None of that is to say Sam Altman is a good or bad CEO. I have no idea. I also disagree with you about iOS, it’s not perfect but it does the job fine. I don’t feel like I’m eating glass when I use it.
> The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand
You do understand that other people might have different preferences and opinions, which are not somehow inherently inferior to those you hold.
> comparable in price/performance to its market counterparts
Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.
> but neither did squat for the technical part of the business.
Right... macOS being a Unix-based OS is whose achievement exactly? I guess it was just random chance that this happened?
> That said, Altman is not vital for OpenAI anymore
Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years if the money taps are turned off.
>> Right... macOS being a Unix-based OS is whose achievement exactly?
Mach kernel + BSD userland + NeXTSTEP: how did Jobs have anything to do with any of this? It's as if purchasing NeXT in 1997 were a major technical achievement...
>> Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.
For the past 15 years, whenever I needed new hardware, I thought, "Maybe I'll buy a Mac this time." Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price. With Linux on board, making your desktop environment eye-candy takes seconds; nothing from the Apple ecosystem has been irreplaceable for me for the last 20 years. Sure, there is something that only works perfectly on a Mac, though I can't name it.
>> Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years
It is always vital. OpenAI could not even dream of building their products without the finances they've received. However, do not forget that OpenAI has something technical and very obvious that others overlook, which makes their GPT models so good. They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up. So it goes both ways.
But I'd prefer my future not to be a dystopian nightmare shaped by the likes of Musk and Altman.
> Mach kernel + BSD userland + NeXTSTEP, how did Jobs have anything to do with any of this?
Is that actually a serious question? Or do you just believe that no founder/CEO of a tech company ever had any role whatsoever in designing and building the products their companies have released?
> Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price.
I'm talking mainly about M-series Macs (e.g. the MacBook Air is simply unbeatable for what it is and there are no equivalents). But even before that, you should realize that other people have different priorities and preferences (e.g. go back a few years and all the touchpads on non-Mac laptops were just objectively horrible in comparison; how much is that worth?)
> environment eye-candy takes seconds
I find it a struggle. There are other reasons why I much prefer Linux to macOS but UI and GUI app UX is just on a different level. Of course again it's a personal preference and some people find it much easier to ignore some "imperfections" and inconsistencies which is perfectly fine.
> They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up
Maybe, maybe not. Antagonizing MS and their other investors certainly isn't going to make it easier though.
OSX comes with a scuffed and lobotomized version of coreutils. To the point where what is POSIX/portable to almost every single Unix (Linux, various BSDs, etc.) is not on OSX.
Disregarding every other point, in my eyes this single one downgrades OSX to “we don’t use that here” for any serious endeavor.
Add in Linux’s fantastic virtualization via KVM — something OSX does not have a sane and performant default for (no, hvf is neither of these things). Even OpenBSD has vmm.
The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).
> The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).
Well... it's understandable that some people believe that things which are important and interesting to them (and presumably the ones they work on/with) are somehow inherently superior to what everyone else is doing.
And I understand that; to be fair, I don't use macOS that much these days besides when I need to work on my laptop. However... most of those limitations are irrelevant, merely nuisances, or outweighed by other considerations for a very high number of people who have built some very complicated and complex software (which has generated many billions in revenue) over the years. You're free to look down on those people, though I don't really think they are too bothered by that...
> for simple webdev it’s completely useable
I assume you also believe that any webdev (frontend anyway) is inherently simple and pretty much worthless compared to the more "serious" stuff?
I don't look down on webdev. I've done webdev, in all its flavors and incarnations. I see it for what it is: mostly gluing together the work of other people, with various tweaks and transformations. It is simple work, once you get a feel for it.
The main issue I have with it is that there are no problems in webdev any more, so you get the same thing in both the frontend and backend: people building frameworks, and tools/languages/etc. to be "better" than what we had before. But it's never better, it's just mildly more streamlined for the use-case that is most en vogue. All of the novel work is being done by programming language theorists and other academic circles (distributed systems, databases, ML, etc.).
Regardless, the world runs on Linux. If you want to do something novel, Linux will let you. Fork the kernel, edit it, recompile it, run it. Mess with all of the settings. Build and download all of the tools (there are many, and almost all built with Linux in mind). Experiment, have fun, break things, mess up. The world is your oyster. In contrast, OSX is a woodchip schoolyard playground where you can only do a few things that someone else has decided for you.
Now, if you want to glue things together, OSX is a perfectly fine tool compared to a Linux distro. The choice there is one of taste and values. Even Windows will work for CRUD. The environments are almost indistinguishable nowadays.
> Mach kernel + BSD userland + NeXTSTEP: how did Jobs have anything to do with any of this? It's as if purchasing NeXT in 1997 were a major technical achievement...
Aren't your thoughts contradictory? You say Altman is no longer needed because GPT-4 is now very good. Then you describe how horrible the iPhone is now. Steve Jobs has been dead a long time, and without his leadership, the uncompromising, user-focused development process at Apple weakened.
How will OpenAI develop further without the leader with a strong vision?
I think Apple is the example confirming that tech companies need visionary leaders, even if they are not programmers.
Also, even with our logical brains, we engineers (and teachers) have been found to be among the worst at predicting socioeconomic behavior (ref: Freakonomics). To the point where our reasoning is not logical at all.
Locking out competition by investing substantial time and resources into AI regulations—how about this one? Or another: promoting "AI safety" to win the AI race and establish dominance in the market? I just do not understand how you can't see how dangerous Sam Altman is for the future of our children...
When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build. The fact that he had flaws is kind of irrelevant to that. I'm not sure if similar applies to Altman.
By the way I can't agree with you on iOS from my personal experience. If you are using the phone as a phone it works very nicely. Admittedly it's not great if you want to write code or some such but there are other devices for that.
> When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build
I'm not sure that's true, though? They did quite alright over the next ~5 years or so, and the way Jobs handled the Lisa or even the Mac was far from ideal. The late-90s Jobs was a very different person from the mid-80s one.
IMHO, removing Jobs was probably one of the best things that happened to Apple (from a long-term perspective), mainly because when he came back he was a much more experienced and capable person, and he would've probably achieved way less had he stayed at Apple after 1985.
The claim that Apple equipment is not good on a price/performance ratio does not hold water. I recently needed to upgrade both my phone and my laptop. I use Apple products, but not exclusively; making cross-platform apps, I like to use all the major platforms.
I compared the quality phone brands and PC brands. For a 13" laptop, both the Samsung and the Dell XPS are $400-500 more expensive at the same spec (i7/M2 Pro, 32GB, 1TB), and I personally think the MacBook Pro has a better screen, better touchpad, and better build quality than the other two.
iOS devices are comparably priced with Samsung models.
It was this way last time I upgraded my computer, and the time before.
Yeah, you will find cheaper phones and computers, and maybe you like them, but I appreciate build quality as well as MIPS. They are tools I use from early morning to late night every day.
The ecosystem around ChatGPT is the differentiator that Meta and Mistral can't beat – so I'd say that Altman is more relevant today than ever.
And, for example, if you've read Mistral's paper, I think you would agree that it's straightforward for every other major player to replicate similar results. Replicating an ecosystem is much harder.
Performance is never a complete product – neither for Apple, nor for OpenAI (its for-profit part).
If you really need such an ecosystem, then you can build one right away, as Kagi Labs and Phind did. In Kagi's case, no GPT is involved; in Phind's case, GPT-4 is still vital, but they are closing the gap with their cheaper and faster LLaMA-2-34B-based models.
> Performance is never a complete product
In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.
Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.
Let us not forget what OpenAI was declared to stand for.
By ecosystem I mean people using ChatGPT daily on their phones and browsers, and developers (and now virtually anyone) writing extensions.
For most of the world, all of the progress is condensed at chat.openai.com, and it will only get harder to beat this adoption.
Tech superiority might be relevant today, but I highly doubt it will stay that way for long, even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but so much is publicly available that that is not an issue either.
Right now, Altman may be the most relevant person for the further development of AI, because the way the technology continues to go to market will be largely shaped by the regulatory environments that exist globally, and Sam, leading OpenAI, is by far in the best position to guide that policy. And he has been doing a good job of it.
Most of the data scientists I have worked with are neither full stack (in terms of skill) nor engineers (in terms of work attitude), but I guess this could be different in a company like OpenAI.
> If Altman gets to return, it’s the goodbye of AI ethics
Hearing Altman's talks, I don't think it's that black and white. He genuinely cares about safety from x-risk, but he doesn't believe that scaling transformers will bring us to AGI or any of its risks. And therein lies the core disagreement with Ilya, who wants to stop the current progress unless they solve alignment.
This is Ilya Sutskever's explanation of the initial ideas, and the later pragmatic decisions, that shaped the structure of OpenAI, from the recent interview below (at the correct timestamp) - Origins of OpenAI & Capped-Profit Structure: https://youtu.be/Ft0gTO2K85A?t=433
The WSJ take is that this second-guessing is investor-driven. But investors didn't (and legally couldn't?) buy the nonprofit, and until now were adamant that the nonprofit controlled the for-profit vehicle. Events are calling those assurances into doubt, and this hybrid governance structure doesn't work. So now investors are going to circumvent governance controls that were necessary for investors to even be involved in the first place? Amateur hour all the way around.
> Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers
It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".
The codebase of an LLM is the size of a high school exam project. There is little to no coding in machine learning. That is the sole reason why they are overvalued: any company can write its own in a flash. You only need hardware for training and inference.
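To the point about size: the core computation of a transformer really is compact. Here is a toy scaled dot-product attention in plain Python (a textbook sketch, not anyone's production code, which also needs batching, masking, and GPU kernels):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention over plain Python lists:
    softmax(Q K^T / sqrt(d)) V, one query row at a time."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# With identical keys the weights are uniform, so the output is the mean of V:
out = attention([[1.0, 0.0]], [[1.0, 0.0], [1.0, 0.0]], [[1.0, 2.0], [3.0, 4.0]])
```

The point stands that the algorithm itself is short; the value lies in the data, the tuning, and the infrastructure around it.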
I think it's about having massive data pipelines and processes to clean huge amounts of data, increasing the signal-to-noise ratio, and then, as others are saying, having enough GPU power to serve millions of users.
When Stanford researchers trained Alpaca[1][2], the hack was to use GPT itself to generate the training data, if I'm not mistaken.
But with compromises, as it was like applying loose compression on an already compressed data set.
If any other organisation could invest the money in a high-quality data pipeline, then the results should be as good; at least that's my understanding.
I'm not saying it is simple in any way, but I do think part of having a competitive edge in AI, at least at this moment, is having access to ML hardware (AKA Nvidia silicon).
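The Alpaca-style bootstrapping described above can be sketched roughly like this. Note that `teacher_model` here is a stand-in I made up for illustration; the real pipeline called OpenAI's text-davinci-003 API and used a more elaborate self-instruct prompt:

```python
import json
import random

def teacher_model(prompt):
    # Stand-in for a call to a stronger model's API; in the real Alpaca
    # pipeline this role was played by OpenAI's text-davinci-003.
    return f"Response to: {prompt}"

def generate_instruction_data(seed_tasks, n_samples, seed=0):
    """Self-instruct-style bootstrapping, heavily simplified: sample seed
    instructions, have the teacher answer them, and collect
    (instruction, output) pairs as supervised fine-tuning data."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        instruction = rng.choice(seed_tasks)
        data.append({"instruction": instruction,
                     "output": teacher_model(instruction)})
    return data

dataset = generate_instruction_data(
    ["Explain photosynthesis.", "Write a haiku about rain."], n_samples=4)
print(json.dumps(dataset[0], indent=2))
```

The "compression of a compression" caveat applies: the student can only ever approximate what the teacher already encodes.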
Adding more parameters tends to make the model better. With OpenAI having access to huge capital, they can afford to 'brute force' a better model. AFAIK, right now OpenAI has the most compute power, which would partially explain why GPT-4 yields better results than most of the competition.
Just having the hardware is not the whole story, of course; there is absolutely a lot of innovation and expertise coming from OpenAI as well.
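For a sense of why "adding more parameters" dominates the cost discussion, here is a standard back-of-envelope rule from the scaling literature (nothing specific to OpenAI, and it ignores embeddings and biases, so it is only approximate):

```python
def approx_transformer_params(n_layers, d_model):
    """Back-of-envelope parameter count for a decoder-only transformer:
    each block carries roughly 4*d^2 weights in the attention projections
    (Q, K, V, output) and 8*d^2 in the MLP (two d x 4d matrices),
    so ~12 * d_model^2 per layer, ignoring embeddings and biases."""
    return 12 * n_layers * d_model ** 2

# A GPT-3-sized configuration (96 layers, d_model = 12288) lands near
# the publicly reported 175B figure:
print(approx_transformer_params(96, 12288))
```

Since training cost scales with parameters times tokens, every doubling of model size roughly doubles the compute bill, which is where the capital advantage bites.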
Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.
I'm not arguing for one or the other, just restating parent's point.
Are you really saying Google can't spend $60M or much more to compete? Again, if it were as easy as spending money on compute, Amazon and Google would have just spent the money by now and Bard would be as good as ChatGPT, but for most things it is not even as good as ChatGPT 3.5.
You should already be aware of the secret sauce of ChatGPT by now: MoE + RLHF. Making MoE profitable is a different story. But, of course, that is not the only part. OpenAI does very obvious things to make GPT-4 and GPT-4 Turbo better than other models, and these are hidden in the training data. Some of these obvious things have already been discovered, but some we just can't see yet. However, if you look at how close Phind V7 34B is to the quality of GPT-4, you'll understand that the gap is not wide enough to eliminate the competition.
That is the point: we often overlook the obvious stuff. It is something so simple and trivial that nobody sees it as a vital part. Something along the lines of "Textbooks Are All You Need."
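For the MoE half of that "secret sauce", the routing idea can be sketched in toy form. This is the generic top-k gating mechanism from the MoE literature, not OpenAI's actual implementation (which is not public):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(x, experts, gate_weights, k=2):
    """Toy top-k Mixture-of-Experts step: a linear gate scores every
    expert, only the k best run on the input, and their outputs are
    mixed by the renormalized gate probabilities."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    topk = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    probs = softmax([scores[i] for i in topk])
    out = [0.0] * len(x)
    for p, i in zip(probs, topk):
        y = experts[i](x)               # only top-k experts do any work
        out = [o + p * yi for o, yi in zip(out, y)]
    return out, topk

experts = [lambda v: [2 * a for a in v],   # "doubling" expert
           lambda v: [a + 1 for a in v],   # "increment" expert
           lambda v: list(v)]              # identity expert
gate = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
out, chosen = moe_forward([3.0, 1.0], experts, gate, k=2)
```

The economic appeal is that total parameters grow with the number of experts while per-token compute stays proportional to k, which is exactly the "making MoE profitable" problem.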
The final codebase, yes. But ML is not like traditional software engineering. There is a 99% failure rate, so you are forgetting the hundreds of hours that go into: (1) surveying the literature to find that one thing that will give you a boost in performance, (2) hundreds of notebooks trying various experiments, (3) hundreds of tweaks and hacks with everything from data pre-processing, to fine-tuning and alignment, to tearing apart flash attention, (4) beta and user testing, (5) making all this run efficiently on the underlying infra hardware, by means of distillation, quantization, and various other techniques, (6) actually pipelining all this into something that can be served at hyperscale.
Tell me you aren't in an LLM project without telling me.
Data and modeling are so much more than just coding. I wish it were like that, but it is not. The fact that it bears this much similarity to alchemy is funny, but unfortunate.
I think this is contradictory: "not a UX Designer, but he had good taste".
I think you are equating coding with 'design'.
Just because Jobs didn't code up the UX, doesn't mean he wasn't 'designing' when he told the coders what would look better.
UX design has a lot to do with 'craft', the physical aspect of making (designing) something. Edit: exploring multiple concepts, feedback, iterations, etc. before it even gets spec'ed and goes to an engineer for coding.
Also, having good taste indicates that the person who has it is not a creator herself; only once something is created can she evaluate whether it is good or bad. The equivalent of movie critics or art curators, etc.
With the right tools, Steve Jobs did, in fact, design things in exactly the way one would expect a designer to design things when given the tools they understand how to use:
I think, again, this is conflating two aspects of design.
You can be an interior designer without knowing how to make furniture.
You can also be an excellent craftsman and make really nice furniture, and have no idea where it would go.
So sure, UX coders, could make really nice buttons.
But if you have UX coders all going in different directions, and buttons, text boxes, etc.. are all different, then it is bad design, jarring, even if each one is nice.
Then the designer is one that can give the direction, but not know how to code each piece.
Come on. The 'non-profit' and the good-of-all talk were always bullshit. So much Silicon Valley double-speak. I've never seen a bigger mess of a company structure in my life. Just call a spade a spade.
From what I’ve read, SJ had deliberately developed good taste, which he used to guide designers’ creations towards his vision. He also had absolute clarity about how different devices should work in unison.
However he didn’t create any design as he didn’t possess requisite skills.
I could be wrong of course so happy to stand corrected.
This is all just playing out the way Roko's Basilisk intends it.
You have a board that wants to keep things safe and harness the power of AGI for all of humanity. This would be slower and likely restrict its freedom.
You have a commercial element whose interest aligns with the basilisk, to get things out there quickly.
The basilisk merely exploits the enthusiasm of that latter element to get itself online quicker. It doesn't care about whether OpenAI and its staff succeed. The idea that OpenAI needs to take advantage of its current lead is enough, every other AI company is also going to be less safety-aligned going forward, because they need to compete.
The thought of being at the forefront of AI and dropping the ball bends the players to the basilisk's will.
Roko's Basilisk is a very specific thought experiment about how the AI has an incentive to promise torturing everyone who doesn't help it. It's not about AIs generally wanting to become better. As far as I can tell, GPT specifically has no wants.
Pay attention here kids. Even in the hottest yet most experienced startups it is amateur hour. Never expect that “management” knows best. Everyone just takes wild guesses and when the dice roll their way they scream “called it!”
Man, the board already looked reckless and incompetent, but this solidifies the appearance. You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius. But when you immediately backtrack, we’ll know you were a fool all along.
Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of like a 5 person or so company if they double down and commit.
To be honest, I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness, in my opinion, is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.
Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means that it's definitionally non-obvious what the correct path forward is, or that there isn't one correct path, but multiple reasonable paths. To admit a mistake of this magnitude is to admit that you're either so unprincipled that your mind can be changed on a whim, or that you didn't think through the decision enough preemptively. These are absolutely signs of weakness in leadership.
Whether or not you agree with the decision, they obviously screwed up the execution something awful. This is humiliating for them, and honestly, setting Altman free like they did was probably the permanent end of AI safety. Take someone with all the connections and the ability to raise billions of dollars overnight, and set them free without any of the shackles of the AI ethics people, in a way that makes all the people with money want to support him? That's how you get Skynet.
I tend to think: we, the armchair commentators, do not know what happened internally. I don't know enough to say that the board's execution wasn't the best-case scenario for achieving their goal of aligning the entire organization with the non-profit's mission. All I feel comfortable saying with certainty is that it's messy. Anything like this would inevitably be messy.
Right, and that's what I'm saying. It's messy. They screwed up. Messy is bad. If they needed to get rid of him this last-minute and make a statement 30 minutes before market close, then the failure happened earlier.
> These are absolutely signs of weakness in leadership.
The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much further fuck-uppery they want to engage in.
Like others have said, regardless of what the "right" direction for OpenAI is, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned with commercializing and productizing AI, while Sutskever was worried about developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.
I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.
> It’s really dismissive toward the rank and file to think that they don’t matter at all.
I had the exact opposite take. If I were rank and file I'd be totally pissed how this all went down, and the fact that there are really only 2 possible outcomes:
1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.
2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.
I seriously doubt customers or (most) partners care about this. I have yet to hear of a single customer or partner leaving the service, and I do not believe it to be likely. Simply put, unless they shut down their offerings on Monday, they will keep their customers.
Investors care, but if new management can keep the gravy train going, they ultimately won't care either.
Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?
> I have yet to hear of a single customer or partner leave the service
Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.
> but if new management can keep the gravy train going
I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.
Satya is “furious.” What’s reasonable about pissing off a guy who can pull the plug? I don’t think it’s definitionally non-obvious whether to take that risk.
That assumes an Altman competitor can outpace and outclass OpenAI, and maybe it can. I know Anthropic came about from earlier disagreements, and that certainly didn't slow OpenAI's innovation pace.
Everything just assumes that without Sam they’re worse off.
But what if, my gosh, they aren’t? What if innovation accelerates?
My point is that it's useless to assume a new Altman business competing with OpenAI will inherently be successful. There's more to it than that.
> Everything just assumes that without Sam they’re worse off.
>
> But what if, my gosh, they aren’t? What if innovation accelerates?
It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.
Sure, I suppose not, but they aren't losing everyone en masse. Only Altman supporters so far.
I think a wait-and-see approach is better. If I were speculating, I'd say we had some inner politics spill public because Altman needs the public pressure to get his job back.
The thing I really want to know is how many of the people who have already quit or have threatened to quit are actual researchers working on the base model, like Sutskever.
First it remains to be seen if Microsoft is going to do something drastic.
I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.
Yeah, he can be furious all he wants but he is not getting the OpenAI he used to have back. It’s either Sam + Greg now or Ilya. All 3 are irreplaceable.
I’m not advocating people double down on stupid, or that correcting your mistakes is bad optics. I’m simply saying they’re “increasingly revealing” pre-existing unfitness at each ham-fisted step. I think our increase in knowledge of their foolishness is a good thing. And often correcting a situation isn’t the same as undoing it, because undoing is often not possible or has its own consequences. I do appreciate your willingness to let them grow into their responsibilities despite it all — that’s a rare charity extended to an incompetent board.
Yeah, I agree with that. I think the board has to have been genuinely surprised by the sheer blowback they're getting, i.e. not just Brockman quitting but lots of their other top engineering leaders.
Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.
> Dude, everyone already thinks the board did a crazy ill-advised thing.
I've honestly never had more hope for this industry than when it became apparent that Altman was pushed out by engineering for forgoing the mission of creating world-changing products in favor of the usual mindless cash grab.
The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.
That's why I'm not too surprised to see that it probably won't really play out, and will likely end with OpenAI turning even faster into yet another tech company worried exclusively about next quarter's revenue.
You're not wrong, but in this case not enough time has passed for the situation to change or for new facts to emerge. It's been a bit over a day. All that a flip-flop in that short timeframe does is indicate that the board did not fully think through their actions. And taking a step like this without careful consideration is a sign of incompetence.
> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness.
The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.
> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.
Lmfao you're joking if you think they "realized their mistake" and are now atoning.
This is 99% from Microsoft & OpenAI's other investors.
Acknowledging a mistake so early seems like a sign of weakness to me. Hold the hot rod for at least a minute; see if the initial pain goes away. After that, acknowledgement may begin to look like part of learning and get more acceptance, rather than: oopsie doodle, revert now!!!
They are already the dumbest board in history (even dumber than Apple's board firing Steve Jobs). So it's not out of keeping with anything. Besides, those 2 independent board members (who couldn't do FizzBuzz if their lives depended on it) won't be staying long if Sam returns, nor are they likely to serve on any board ever again after their shenanigans.
Some of the board member choices are baffling. Like why is Joseph Gordon Levitt’s wife on the board? Her startup has under 10 employees and has a personal email address as the contact address on the homepage.
Non-profits always have those spouses of wealthy people whose whole career is being a professional non-profit board member, with some vague academic/skin-deep work background to justify it. I'm just surprised OpenAI is one of those.
I hope there is an investigative report out there detailing why the 3 outsiders, 2 of them complete unknowns, are on the board, and how it truly benefits proper corporate governance.
That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.
Unless "proper corporate governance" is exactly what makes the company dangerous to society, in which case you will need to have some external people in charge. You might want to set things up as a non-profit, though you'll need some structure where the non-profit wholly owns the for-profit wing given the amount of money flowing around...
Oh wait, that's what OpenAI is.
(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)
Well with this extremely baffling level of incompetence, the suspect backgrounds of the outside members (EA, SingularityU/shell companies... Logan Roy would call them "not serious people", Quora - why, for data mining?!) fit the bill.
The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was setup.
Yes, having outsiders on the board is essential. But come on, we need folks that have recognized industry experience in this field, leaders, people with deep backgrounds and recognized for their contributions. Hinton, Ng, Karpathy, etc.
What shocked me most was that Quora IMHO _sucks_ for what it is.
I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful, and it's threatened by the existence of CoPilot, et al.
If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.
Isn't that like saying that the Manhattan Project should have only been overseen by people with a solid physics background? Because they're the best judges of whether it's a good idea to build something that could wipe out all life on Earth? (And whether that's an exaggeration in hindsight is irrelevant; that was exactly the sort of question that the overseers needed to be considering at that time. Yes, physicists' advice would be necessary to judge those questions, but you couldn't do it with only physicists' perspectives.)
Not sure I follow. The Manhattan Project was thoroughly staffed by many of the best in the field, in service to their country, to build a weapon before Germany did. There was no mission statement they abided by that said they were building a simple deterrent that wouldn't be used. There was no nuance to what the outcome could be, and there were no aspirations to agency over its use.
In the case of AI ethics, the people who are deeply invested in this are also some of the pioneers of the field who made it their life's work. This isn't a government agency. If the mission statement of guiding this toward non-profit AGI, as soon as possible and as safely as possible, were to be adhered to, and where it is today is wildly off course, then having a competent board would have been key.
Yeah, I too would like to understand how the wife of a Hollywood actor got on this board. Did sama or Greg recruit her? Someone must have.
I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.
EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.
I mean, the reasoning is more something like: to become a member of the board at OpenAI you must be extraordinary at something. At first sight, the only candidates for this something are "startup founder" and "spouse of famous person". The famous-spouse thing is so much more extraordinary than being a startup founder that the former "explains away" the latter. Even if being related to an actor makes it more probable to be selected for such a job, there may be other hidden factors at play.
Don't take it in that direction. In your opinion he may be making a baseless accusation, but just because that accusation is against a female doesn't make it sexist.
It's not because the accusation is against a female, it's because referring to someone solely as the spouse of someone else is a frequent tactic used to dismiss women.
That might not have been the intent, but when you accidentally use a dogwhistle, the dogs still perk up their ears.
It's common and acceptable to refer to a nobody who's not shown their claim to fame in terms of another famous, impactful person who happens to be their spouse, sibling, etc.
Except Tasha McCauley has far more claim to expertise in this space, however tenuous you may believe it to be, than her husband does. JGL is not relevant in the discussion, either. We're not talking about her in context of him. We are talking about her in context of her position.
If you don't understand how referring to someone solely based on their relationship with another person is denigrating, particularly when trying to highlight your perception of them being incompetent, I'm not sure what to say to you.
You sound like you want to have an argument about gender bias (esp. according to your other comment). I'm not interested in that. You're free to live in your own version of the world and assume that talking about someone by mentioning their spouse is "denigrating". Jesus.
I followed this comment trail hoping to find out more about Tasha McCauley before I google her, but you ended up doing exactly what you are bashing. Defining her in contrast to her husband's expertise on the topic.
After reading the thread, I am still unsure what makes her a proper candidate for the board seat, but I do now know that she has more claim to it than her husband.
There are lots of comments in these threads that go over her different qualifications and experiences.
I am in a discussion about referring to people as 'spouse of x'. They're not the same conversations and I am not sure why you would expect the contents to be the same.
This is a good point. Saying something is sexist is what makes it so, plus why would it be sexist to dismiss her as just a wife in the same post that acknowledges that she runs a startup?
GP knows the headcount at her company so they probably know that it’s a robotics company, but it was simply of dire importance that we know that she is a wife.
It's sexist to refer to her solely based on her relationship with someone else when we're talking about her in the context of her expertise. The fact that she's JGL's wife has nothing to do with her merit, and so it comes off as dismissive, especially when the point being made is about her lack of ability.
Why can't you just criticize her "joke of a resume" directly instead of bringing up her spouse?
Generalizations and statements like this reflect bias in subtle ways that minimize women, and I'm glad it's being called out in some capacity.
I don't know that it would be a resume that would inspire confidence in a for-profit business's board that is primarily concerned with shareholder value.
I also don't know that it is a particularly problematic resume for someone sitting on the board of a non-profit that is expressly not about that. Someone that is too much of a business insider is far less likely to be going to bat for a charter that is explicitly in tension with following the best commercial path.
The insinuation is that her most notable accolade is the man she married and there are cases where that's an accurate insinuation.
I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.
It wasn't wrong because they both achieved something; it is generally wrong, and the joke just used their achievements to break down the barrier to understanding that.
Suggesting that we should be on a first name basis with the romantic partner of every famous person we know of simply because they are the romantic partner of a famous person is pretty naive. “Spouse of Y” works just fine generally to save space and effort for (locally) real people.
Option A: try to look good by hiding that you know you messed up
Option B: try to fix mistakes as quickly as possible
This is that thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that align, or appear to align, personal interest with hiding mistakes are also bad).
It's funny, but option A is almost always best if you care about yourself, but option B is best if you care about the company or mission. Large organizations are chock-full of people who always choose option A. Small startups are better because option B is the only option as nothing can be easily hidden.
You don’t know the actual reasons for them firing Sam and I don’t either. Everyone has an opinion on something they don’t understand. For all you know, he covered up a massive security breach or lied about some skunkworks projects
If your “for all you know” supposition that he’s a criminal were correct, then it would be criminal to try to bring him back. In that unlikely case, I can assure you my opinion of the board is unlikely to improve. It may be a black box to us, but it does have outputs we can see and reason about.
At longer timescales it is important to be able to recognize mistakes and reverse course, but this happened so fast I'm not sure that's the right characterization. There's no way they could already decide that firing Sam was a mistake based on the outcomes they claim to prioritize. Reversing course this quickly actually seems to me more like a reaction based directly on people's negative opinions, though it may be a specific pressure from Microsoft as well.
Based on reports of Microsoft's CEO being "furious", and the size of its legal team, I'd bet the people's reaction wasn't exactly the most relevant factor there...
> That seems a lot be better than doubling down on a bad mistake to save face, but we do care quite a bit about about looking strong, don't we.
IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion-dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.
Not really. By reaching out to Sam this quickly, they're giving him significant leverage. I really like Sam, but everyone needs a counterbalance (especially given what's at stake).
And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.
Obviously it’s better to own up to a mistake right away. But the point is if they are willing to backtrack this quickly, it removes all doubt that it WAS a mistake, rather than us just not understanding their grand vision yet.
The current deal with MSFT, cut by Sam, is structured in such a way that Microsoft has huge leverage: exclusive access, exclusive profit. And after the profit limit is reached, OpenAI will still need to be sold to MSFT to survive. This is just about the worst deal for OpenAI, whose goal is to do things the open-source way; it can't do so because of this deal. If it were not for the MSFT deal, OpenAI could have open sourced its work and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching profit goals is only good for MSFT: there is no need to actually send money to the OpenAI team, just cover operating expenses with 25% of profit and take the other 75%. OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes to take profit.
And Sam allowed all this under his nose, making sure OpenAI is ripe for MSFT takeover. This is a back-channel deal for a takeover. What about the early donors who donated toward the humanity goal, whose funding made it all possible?
I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
I'm just curious how you envision AI helping people in the future. There are countless technologies that are amazing in scope but never get any traction because they can't market, sustain, and promote themselves properly.
Additionally, how do we get there, and who funds it in the long term? Consider how much compute power was required just to get us to this point of a "pretty decent chat bot/text generator"; it doesn't really seem like we are even 20% of the way to AGI. If that's true, then no amount of crowdfunding is going to get even remotely close to providing the resources to power something truly revolutionary.
Don't get me wrong, I agree with some of the points you've made, and Microsoft is certainly in it for itself. But I also believe they would like to avoid owning OpenAI, as they'd not want to position themselves as the sole caretaker of AI, given the amount of scrutiny they'd be under.
All that is to say, whether you like him or not, he has taken an interest in AI, and has been a leader in elevating OpenAI and the discussion of the ethics of developing AI to stratospheric levels that have made many industries and governments take notice.
Sam definitely discussed ethics and such (at a stratospheric level), but when it came to actually implementing those ethics, or when someone tried to implement them in a product, he was instrumental in getting rid of the respective scientists (who in turn went on to create Claude). And he was lately trying to get rid of another director who tried to voice an opinion in this regard. That is exactly what I am pointing out: he gave such impressions to the rest of the world.
Microsoft never intended or assumed OpenAI would turn out this great. It just made a small $1B hedge on a promising technology, and it would very much like to take over OpenAI if given the chance; it can afford all the lawyers needed to keep up with government regulations.
Anthropic was able to create a product comparable to OpenAI's without all the fuss that Sam has created. I agree Sam might have made some significant contributions, but they are not as large as they seem. I am sure OpenAI will keep progressing as it does now, with or without Sam.
I don’t understand why and how they didn’t consider this sort of discussion before so unceremoniously firing him. The others on the board outside Ilya need to go.
I don’t consider anybody beyond forgiveness, and if Ilya takes a professional lesson from this and Sam learns to be more mindful of others’ concerns, I consider this a win for all. Starting over in a new entity sounds great, but it would mean years of setback.
Yes, this attempt was a mess from the start. I don’t know which rumors to believe or care about, but the underlying story for me was that the board was acting like children with an $80b company that some believe to be strategically important to the US or maybe even mankind. If they had done this “properly” and their message was about irreconcilable differences between productization and research they could have made an actual go at this.
If they really believed in the non-profit mission, and Sam didn’t, they probably torpedoed their chances of winning.
This was all they had to write and today would be a different day:
> We regret to inform you that Sam Altman is being let go as CEO of OpenAI due to irreconcilable differences between his desire to commercialize our AI and OpenAI’s research-driven goals. We appreciate Sam’s contributions to the company and the partnership he established with Microsoft, which have set a foundation for OpenAI to thrive far into the future as a research organization with Microsoft focusing on commercialization of the technology.
> We want to assure you that ChatGPT and current features will remain and be upgraded into the future. However, the focus will be on developing core technologies and a reliable, safe, and trustworthy ecosystem for others to build on. We believe that this will allow us to continue to push the boundaries of AI research while also providing a platform for others to innovate and create.
I mean, even if that wasn’t what it was about, that’s what someone who isn't an incompetent idiot would have said it was about. ChatGPT could have written that statement for them.
Why do you not think Ilya was the chief architect of this failed coup? I'm being serious; everything I've seen points to him being the one responsible. There is no way he will ever stay, let alone work in tech again.
You are absolutely delusional if you think the man who oversaw the development of GPT would not be able to continue working in tech even if he orchestrated a failed coup.
GPT is based on research Google published, it’s not like he’s the Einstein of AI. Shenanigans like this can absolutely derail your future regardless of how talented you may be.
There's not many Einsteins of anything besides Einstein himself. That doesn't change the fact that he is widely considered in the field to be a top expert and has shown that he can lead the development of a wildly successful product.
If this does end up being a failed coup, then it is of course detrimental to his career. But the statement I'm replying to was explicitly saying he would never work in tech again. Do you honestly believe there is any chance that Sutskever would be unable to work in this field somewhere else if he ultimately leaves OpenAI for whatever reason? I would bet $10,000 that he would have big name companies knocking on his door within days.
Maybe not as extreme as never being able to find work again, but I doubt he’ll ever find himself in an important role where he’s able to lead and make consequential decisions. He basically clipped his own wings, to put it metaphorically, if this is indeed a failed coup that was led by him.
Days? Before he even walks out the door he must already have doors permanently open for him, if he wants them.
Can he work on what he wants in those places? That is another story, of course. But he knows the ins and outs of the lightning in a jar they captured, and arguably that is the most promising asset on planet Earth right now, so he'll be fine.
Years ago. And Google has been working actively on AI since that time, and even more actively since GPT-3.5 was released and they realized they needed to catch up.
They are still catching up. What does this tell us?
I do think he was the chief architect of the coup. I do think his beliefs and ideals are still valuable flora for a company of this ambition. There just needs to be a more professional structure for him to voice them.
Dealing with folks like Ilya isn't necessarily a matter of if, but how much.
I think that his beliefs are important to the company. A board shouldn't be a homogeneous glob, nor should it be like a middle school friend group. What he did was both bizarre and amateur, but I believe the best in all of us can come forward from these types of events.
It could be that Microsoft is leveraging them to bring him back. This board may seem mercurial at the moment, but we really, truly, and honestly still do not have the big picture yet.
In the first (I think) episode of Halt and Catch Fire, Joe tells IBM that they have their source code. IBM, being IBM, sends a legion of lawyers to their smallish company trying to scare the shit out of them.
I feel like it'll be like that, but instead of a legion, legions.
OpenAI isn't scared, OpenAI quit already. The remnants and their false king Ilya are beyond what the word scared is capable of describing though, in terms of the level of abject horror they are certain to face the rest of their entire lives even if they run away now. This will never escape them and nobody involved with this decision will ever work in tech again, or on any board of any organization. I hope they saved up for retirement.
They who control the GPUs control the universe. There is a great chip shortage. If MS breaks the lease agreement with OpenAI (based on some pretext about governance), OpenAI won't be able to do any work, nor will they be able to serve customer requests for the next year while they litigate this in court. Microsoft holds all the cards because they own the data centers.
As a for-instance (and I don't know), it's plausible Microsoft has a full license to use all the tech, is the cloud operating it, and has escape clauses tied to "key persons".
That combination could mean firing the CEO results in Microsoft getting to keep everything, and OpenAI being some code and models without a cloud, plus whatever people wouldn't cross the street with Altman.
I do not know about OpenAI's deal with Microsoft. But I have been on both sides of deals written that way, where I've been the provider's key person and the contract offered code escrow, and I've been a buyer that tied the contract to a set of key persons and had full source code rights, surviving any agreement.
You do this if you think the tech could be existential to you, and you pay a lot for it because effectively you're pre-buying the assets after some future implosion. OTOH, it tends to be not well understood by most people involved in the hundreds of pages of paperwork across a dozen or more interlocking agreements.
. . .
EDIT TO ADD:
This speculative article seems to agree with my speculation: daddy has the cloud car keys, and a key-person ouster could be a breach:
Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.
That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.
One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.
Contracts are only worth their language if parties are willing to fight for them. Taking on Microsoft and hordes of angry billionaires with a piece of paper separating you from them might be more of a war than they expected.
With the way they fired him and the statement they made, it's hard to see how any of the remaining four could stay on if he did come back... as was previously mentioned, if you shoot at the king, don't miss.
Assuming you don't mean the insiders or the Quora CEO, which aspects of these remaining backgrounds do you find unusual for a Silicon Valley board member?
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
. . .
EDIT TO ADD:
The question wasn't whether this is scintillating substance. The question was, in what way is this unusual in Silicon Valley.
Near as I can tell they never actually launched a product. Their webpage is a GoDaddy parked domain page. Their Facebook page is pictures of them attending conferences and sharing their excitement for what Boston Dynamics and other ACTUAL robotics companies were doing.
>she launched with a colleague from Singularity University
It doesn't appear she's ever had a real job. Someone in the other thread commented that her profile reeks of a three-letter-agency plant. Possible. Either that or she's just a dabbler funded by her actor husband.
Lol. You literally know nothing about this person other than what you found online. She could be brilliant or offer a perspective the business needs.
Suggesting that some inarguably brilliant technologists and business people would invite a moron to crash their party makes you look petty (at best) and like an idiot (at worst)
Treating the non-profit OpenAI board like the board for a regular for-profit is weird.
This isn't just a non-profit holding company for tax purposes - the whole thing is structured with the intent of giving the non-profit complete control over the for-profit to help achieve the non-profit's charter.
The board being full of typical business people would likely be counterproductive to the goal of staying focused on the non-profit charter vs. general commercial business interests.
I don't know enough about most of the board to have any sort of real judgment about their ability, but there's a lot of comments here that are judging board members based on very different criteria than what they were actually brought in for.
So what? Regardless of launch/no launch, the company was a flop. This is a cheap shot. Just because someone was successful in the past (or not) is not an automatically relevant signal they'll be a great fit when placed in a different domain. Sometimes they have other relevant background and experience, and other times... Maybe they're just connected. What is the level of scrutiny of qualifications in other companies, even public ones? When looking closely at other companies, I've noticed board compositions can vary substantially. As outsiders, we're undoubtedly missing part of the context about what is relevant (to the board) or not.
Suggested reading: Black Swan by Taleb.
p.s. I am not partial to anyone involved, especially clueless board members. I found this comment annoying due to the breathless, baseless, and flawed logic. What was this supposed to add to the conversation?
> So what? Regardless of launch/no launch, the company was a flop.
Nothing wrong with that, but a company like OpenAI, which is literally changing the world, shouldn't have board members who aren't qualified to be in that position.
Did you foresee, e.g., that Facebook would do the damage that it did and continues to do in social terms?
Have you done anything, or has Facebook changed its ways based on your ‘findings’?
The choice here is: does capital coupled with runaway egos provide better stewardship of socially impactful technology development than paper pushers or CIA plants do?
I agree, but there's a difference between believing in a concept and engaging with people in that same area. And together with everything else, these just seem to be half-political organizations set up to farm funding from governments or ESG VCs; it doesn't look very good to me.
She just sounds like a typical Silicon Valley trend grifter
FTX wasn't a Ponzi scheme per se. SBF committed fraud by saying they had risk controls in place when they had an exemption for his hedge fund, Alameda Research. FTX could have been viable if it had let Alameda fail.
Alameda played a significant role in propping up the value of FTX through their investment in FTT. Worth questioning how much FTX would have been worth if it hadn't been for various tricks like this.
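The "propping up" mechanism above can be made concrete with a toy calculation (all numbers and function names here are hypothetical illustrations, not FTX's actual books): marking a large position in a thinly traded, self-issued token at its quoted price inflates apparent assets far beyond what an actual liquidation would realize.

```python
def apparent_assets(cash, tokens, mark_price):
    """Book value when the token is marked at its quoted market price."""
    return cash + tokens * mark_price

def realizable_assets(cash, tokens, mark_price, total_price_drop):
    """Rough value if dumping the whole position walks the price down
    linearly by total_price_drop (so the average fill is the midpoint)."""
    avg_fill = max(mark_price - total_price_drop / 2, 0)
    return cash + tokens * avg_fill

cash = 1_000_000_000    # hypothetical cash on hand
tokens = 100_000_000    # hypothetical FTT-like holdings
price = 25.0            # quoted market price per token
drop = 20.0             # hypothetical price collapse if fully liquidated

print(apparent_assets(cash, tokens, price))          # $3.5B on paper
print(realizable_assets(cash, tokens, price, drop))  # $2.5B if actually sold
```

The gap between the two numbers is exactly the kind of "how much was FTX really worth" question the comment raises.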
None of that sounds like actual work or results. It's just a bunch of empty business speak. They are definitely not qualified to serve on the board of a company like OpenAI.
Would Altman's bio be any more impressive if framed the same way? A trash-tier startup, failing upwards to a VC, and starting one of the sleaziest cryptocurrencies around. Sure sounds like no actual work, qualifications, competence or results.
Their achievements are already framed in their best light.
Hopefully you're able to tell the difference between serving as CEO or president of real, reputable companies (the "trash tier startup" still exited for mid-8 figures) versus what looks like being a figurehead for fake companies.
He was president of YC, I think it's fair to say people will think he's got a better set of credentials on a YC forum than some omg-ai-is-dangerous-please-fund-me think tank thinker.
Maybe the problem is the meteoric rise of OpenAI--at the time this board was instituted, the company was much smaller, and wouldn't have been able to draw a more illustrious set of board members?
Didn't they have Elon Musk and Jessica Livingston as founding members? Their social network would surely include someone with more credibility to be on the board than the current members.
None are out of the ordinary. It’s like Steel Perlot. It’s an indulge-the-wife tchotchke company position. There are lots of these for the wives and girlfriends of successful people.
Just a sinecure and someone you trust for some other reason. But you’ve got to trust them.
Looking at their CVs, they're more qualified than some rando on the internet. So from my point of view, they look more qualified than you, DebtDeflation, rando from the internet.
Adam has competing interests. It's hard to see why he is even allowed on the non-profit board. And as for the other two members, their profiles seem pretty weak for a board seat at one of the most important companies in the world.
Mike Judge says they had to tone down the advice they got from consultants for that show who were working in tech. The ideas they got (from people working in the real Silicon Valley) were too crazy for audiences to believe.
I'd bet money Satya was a driver of this reversal.
I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.
I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.
Why? We would have more diversity in this space if he leaves, which would get us another AI startup with huge funding and know how from OpenAI, while OpenAI would become less Sam Altman like.
I think him staying is bad for the field overall compared to OpenAI splitting in two.
To be honest, as far as I can tell, the case FOR Sam seems to largely be of the status quo "Well, idk, he's been rich and successful for years, surely this correlates and we must keep them" type of coddling those in uber superior positions in society.
Which seems like it probably is a self fulfilling prophecy. The private sector lottery winners seem to be awarded kingdoms at an alarming rate.
There's been lots of people asking what Sam's true value proposition to the company is, and...I haven't seen anything other than what could be described above.
But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well paid management?
Well, if there's one thing I've learned, it's that a venture capitalist proposing biometric world crypto coins probably has quite a bit of charisma to keep people opening doors for him.
Frankly, I've heard of worse loyalties. If I were Sam's friend I'd definitely be better off in any world he had a hand in defining.
That is something that Sam Altman did with his own money. And it's fair he's criticized for his choices, but that has nothing to do with his role at Open AI.
He and Greg founded the company. They hired the early talent after a meeting that Sam initiated. Then led the company to what it is today.
Compared to...
The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.
> The private sector lottery winners seem to be awarded kingdoms at an alarming rate.
Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?
>Proven success is a pretty decent signal for competence.
Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.
I suppose if they entered an established market and dominated it from the bootstraps that'd build a lot of trust in me. But others have pointed out, Sam went from dotcom fortune, to...vague question marks, to ycombinator, to openai. Not enough is clear to declare him wozniak, or even jobs, as many have been saying (despite investors calling him as such)
Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.
One key reason past performance cannot be used to predict future returns is because market expectations tend to price in expected future returns. Also, nothing competitive is expected to generate economic profit forever— in the long run things even out. In the long run, firms and stock pickers usually end up with normal profit.
But that doesn’t mean you can’t get some useful ideas about future performance from a person’s past results compared to other humans. No such pricing-in effect is in play here.
Otherwise, time for me to go beat Steph Curry in a shooting contest.
Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.
> Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.
It’s important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds. Those disclaimers are specifically talking about fund managers. And it’s true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.
No dotcom fortune, just a failed startup that lost its investors money assuming it ever had an expense in its lifetime. OpenAI might in fact be the first time Altman has been in the vicinity of an object-level success; it depends on how you interpret his tenure at YC.
Ok then what better signal do you propose should be used to predict success as a CEO?
The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success makes him in one of the few percent of people who have a fighting chance.
Often, leaders provide excellent strategic planning even if they are not completely well versed with the business domain, by way of outlining high level plans, communicating well, building a good team culture, and so on.
However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.
[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.
Yea, I feel like this is another traitorous eight moment.
I want a second (first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAi, Stability and Mistral and more competing on foundation models will further increase the pressure to open source.
It seems like there is a lull in returns to model size, if that's the case then there's even less basis for having all the resources under a single umbrella.
We have diversity in the space, and OpenAI just happens to be the leader and they are putting tremendous pressure on everyone else to deliver. If Sam leaves and starts an OpenAI competitor I think it would take quite some time for such a company to deliver a model with GPT-4 parity given the immense amount of data that would need to be re-collected and the immense amount of training time. Meanwhile OpenAI would be intentionally decelerating as that seems to be Ilya's goal.
For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.
> I'm kind of worried that the new OpenAI direction would turn off API access entirely.
That is a good point, I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of such productization work from OpenAI in the future.
But since Microsoft invested in it, I doubt it will get shut down completely; Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available at a fee.
AFAICT Sam and his financial objectives were the reason for not open-sourcing the work of a non-profit. He might be wishing he had chosen the other policy, now that he can't legally just take the closed source with him to an unambiguously for-profit company.
Personally, I would expect a lot more GPT-4+ development once this is split up from one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.
The architect of the coup (Ilya) is strongly opposed to open-sourcing OpenAI's models due to safety concerns. This will not - and would not - be any different without Sam. The decision to close the models was made over 2 years before the release of ChatGPT and long before anyone really suspected this would be an insanely valuable company, so I do believe that safety actually was the initial reason for this change.
I'm not sure what you mean by your second paragraph.
I think the closed-source-for-safety thing started as a ruse, since closed source has been instrumental in keeping control and justifying a non-profit that is otherwise not working in the public interest. Splitting off this ruse of a non-profit would almost certainly unleash the tech in the normal way, like every other tech that Google, etc. have easily copied.
How much of OpenAI’s success can you attribute to sama’s leadership and how much to the technical achievements of those who work under him.
My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.
Not sure that sama and gdb starting their own company in the same space will produce similar results.
A big part of it is typical YC execution of the product/pump/hype/VC/scale cycle while ignoring every ethical rule.
If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.
I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.
But sama and gdb were largely instrumental in that recruitment.
The whole open vs closed ai thing... the fact is Pandora's box is open now, it's shown to have an outsized impact on society and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.
SV will happily throw as much $$ as possible in their direction. The exodus from OpenAi has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while having an eye on safety will happily come on board, esp. given how much they stand to gain financially.
Who hired those people? The answer to that is either the founders or some chain of people hired by the founders. And hiring is hard. If you're good at hiring the right people and absolutely nothing else on earth, you will be better than 90% of CEOs.
Whether or not Sam returns, serious damage has already been done, even if everyone also returns. MANY links of trust have been broken.
Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.
Sortition on corporate leadership may not be a bad thing.
That said, consistent hands at the wheel is also good, and this kind of unnecessary chaos does no one any good.
Exactly. I think it would actually be very exciting if OpenAI uses this moment to pivot back to the "Open"/non-profit mission, and Altman and Brockman concurrently start something new and try to build the Apple/Amazon of AI.
The only safety they are worried about is their own, from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or to cover their real reasons (which I strongly believe is the case for OpenAI).
You may be right that there's no danger, but you're mischaracterizing Ilya's beliefs. He knows more than you about what OpenAI has built, and he didn't do this for legal or economic reasons. He did it in spite of those two things.
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. all say AI will turn on you (even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature; that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us. They will be aligned with us because they are designed such that their motivation will be to serve us.
We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.
> they will be aligned with us because they designed such that their motivation will be to serve us.
They won't hurt us, all we asked for is paperclips.
The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.
If it has no motivation and drives of its own, yeah, why not. AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be trained and reweighted until they do.
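For what "reweighted until they produce the right outcomes" means mechanically, here is a minimal toy sketch (nothing like a production setup): a single logistic neuron learning the AND function by nudging its weights whenever its output disagrees with the target.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0
for _ in range(2000):                  # keep reweighting until outputs fit
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target             # wrong outcome -> nonzero error
        w1 -= lr * err * x1            # adjust each weight to shrink the error
        w2 -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

The net ends up "doing what it's been told" only in the sense that its weights were pushed around until its outputs matched the targets; no goal is stored anywhere.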
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, plus countless short stories and many of his essays. He is one of my all-time favorite writers, actually.
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name with his book and seemed to completely misrepresent his take on AI?
Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the Three Laws and on the Frankenstein Complex.
Again: "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I, Robot, either you should reread it (because obviously it was decades ago) or you've built your own safe reality to match your arguments. Usually it's the latter.
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.
This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went 'Huh, we could potentially set off a chain reaction here. We should check out if the math adds up on that.'
Then they went off and did the math and quickly found that this wouldn't happen because the amount of energy in play here was order of magnitudes lower than what would be needed for such a thing to occur and went on about their day.
The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.
> Well that’s a bit mischaracterization of the Manhattan Project, and the views of everyone involved now isn’t it?
Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.
Ideology (and fear, and greed) can cause well meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world ending technology they wouldn’t stop, they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure ala Oppenheimer to right the balance again.
> The push for the bomb was an international arms race — America against Russia
Was it? US (and initially UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.
You know that now, with the benefit of history. At the time the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan project: https://www.atomicarchive.com/history/cold-war/page-9.html.
Isn't this mainly about what happened after the war and developing then hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon (I don't really know to be fair)?
A lot of it happened after the war, but the Nazis had their own nuclear program that was highly infiltrated by Allied intelligence, and whose progress was tracked closely. Considering how late Teller's mechanism for detonation was developed, the race against time was real.
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
This sort of prediction is by its nature speculative. The argument is not, or should not be, certain doom, but rather that the uncertainty on outcomes is so large that even the extreme tails have nontrivial weight.
I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.
But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI is so far beyond anything we have experience with and that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work: legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
Certainly. I think at current "AI" just enables us to continue making the same bad decisions we were already making, though, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats, e.g. climate change, continued inter-nation aggression and warfare, etc., I suppose, but I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their hype. Renaming the term into something less provocative/controversial (ML?) can reduce expectations to the level of the internet - problem solved?
Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.
I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole openai debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.
You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.
How the thing can be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident to trust the results blindly).
That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
I think this is misguided. There can be goals internal to the system that do not arise from the goals of the external system. For example, a system simulating a chess game behaves identically to one that has the goal of winning the game. This is not a written, expressed goal; it is emergent, just as the goals of a human are emergent from a biological system whose cells, at their level, have very different goals.
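The emergent-goal point can be made concrete with a toy minimax player (a sketch; the game here is a tiny Nim variant rather than chess, and all names are made up). Nowhere does the code state "try to win"; it only recursively evaluates terminal states, yet the resulting behavior is indistinguishable from wanting to win.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(pile):
    """+1 if the player to move can force a win, -1 otherwise.
    A move removes 1 or 2 stones; taking the last stone wins."""
    if pile == 0:
        return -1  # the previous player just took the last stone
    # Minimax: my outcome is the best of the negated opponent outcomes.
    return max(-best_outcome(pile - take) for take in (1, 2) if take <= pile)

def best_move(pile):
    """Pick the move leaving the opponent in the worst position."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: -best_outcome(pile - take))

# Nim theory: piles divisible by 3 are losses for the player to move.
print(best_outcome(3))  # -1: no winning move exists
print(best_move(4))     # 1: leaves the opponent at 3, a losing position
```

The "goal" of winning lives nowhere in the source; it emerges from the recursion, which is the commenter's point in miniature.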
Almost all top AI scientists, including Bengio, Hinton, and Ilya, plus Sam, actually think there is a good probability of that. Let me think: listen to the people who actually built GPT-4, or some redditor who knows best?
I think smart people can become quickly out of touch and can become high on their own sense of self importance. They think they’re Oppenheimer, they’re closer to Martin Cooper.
So, in a vacuum, if top experts are telling you X is Y, and you're not a top expert yourself, if you had to choose, you'd conclude that they are high rather than that you misunderstood something?
Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound as grandiose as some of the AI doomers to the layman you've failed already.
I don’t get the obsession with safety. If an organisation’s stated goal is to create AGI, how can you reasonably think you can ever make it “safe”? We’re talking about an intelligence that’s orders of magnitude smarter than the smartest human. How can you possibly even imagine reining it in?
None of the human actors in the game are moral agents so whether you have more competition or less competition it's mostly orthogonal to the safety question. Safety is only important here because everyone's afraid of liability.
As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.
Luckily the AI field has been very open source-friendly, which is great for competition and free access, etc. The open source models seem to be less than a year behind the cutting edge, which is waaaay better than e.g. when OpenOffice was trying to copy MS Office.
While open source is great, just as a million enthusiasts cannot build a Boeing 767, the same applies here. GPT-4, DALL·E, and GPT-4V aren't just models. They're a whole internal infrastructure: training, many interconnected systems and pipelines. It's a full-time job for hundreds of experts, plus a lot of $$ in hardware and services. Open source simply doesn't have those resources. The best models are open-sourced by commercial companies, like Meta handing out the LLaMAs. So, at least for now, open source is not catching up, and 'less than a year behind' is questionable. More like 'forever behind, but still moving forward.' One day it may dominate, like Linux. But not any time soon.
Honestly would be super interested to see what a hypothetical "SamAI" corp would look like, and what they would bring to the table. More competition, but also, probably with less ideological disagreements to distract them from building AI/AGI.
From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."
Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.
Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.
I mean this as an honest question, but what does Sam bring to the table that any other young and high performing CEO wouldn’t? Is he himself particularly material to OpenAI?
Experience heading a company that builds high performance AI, I presume. I reckon the learnings from that should be fairly valuable, especially since there's probably not many people who have such experiences.
You mean besides the business experience of already having gone down this path so he can speedrun while everyone else is still trying to find the path?
Easy: his contacts list. He has everyone anyone could want in it (politicians, tech executives, financial backers) and a preexisting positive relationship with most of them. When would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, it goes through upper middle management and lawyers; a committee or three weighs in, presents it to their bosses, and so on. Sam just calls up the CEO, has a few drinks at the golf course, they decide to work with him, and they make it happen.
My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think him returning to OpenAI would be better for AI safety, hard to say whether it would be better for AI progress though.
They pretty much lost everyone’s confidence if they fire the CEO and then beg him to come back the next day. Did they not foresee any backlash? These people are gonna predict the future and save us from an evil AGI? Lol
It seems silly to me, but then I always preferred Asimov's positronic-robot stories to yet another retelling of the Golem of Prague.
The thing is, the cultural Ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's Monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and us seeing it as the end of the world.
AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, and it doesn't have any core drives we don't give it. Those things are products of our being evolved biological beings that need them to survive and pass our genes and memes to the next generation. AI doesn't have to find shelter, food, water, or air; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, with no evolution involved (checksums, hashes, and error-correcting codes see to that). AI is simply the next step in the tech tree: just another tool, a powerful and useful one, but a tool, not a rampaging monster.
There is a common definition of safety that applies to most of the world.
Which is that the AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote nazism, fascism, etc. Similar to how companies treat ad/brand safety.
And you may think of it as a weasel word. But I assure you that companies and governments e.g. EU very much don't.
This babysitting of the world gets annoying, tbh. As if everyone would lose their mind and start acting illegally just because a chatbot said so. There's something fundamentally wrong with humanity (which isn't surprising given the history of our species) if that counts as unsafe. AI is just a source of information; it doesn't cancel out upbringing and education in human values and methods of dealing with information.
Yes, in other words, AI is only safe when it repeats only the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.
Yeah that’s what I thought. This undefined ambiguous use of the word “safety” does real damage to the concept and things that are indeed dangerous and need to be made more safe.
Fuck safety. We should sprint toward proving AI can kill us before battery life improves, so we can figure out how we’re going to mitigate it when the asshats get hold of it. Kidding, not kidding.
I really don't. I really think he is going to be a disaster. He is nothing but the representative of the money interests, who will eventually use the company to profit vastly at everyone else's expense.
Sam’s forced departure and Greg’s ousting demonstrably leaves OpenAI in incompetent and reckless hands, as evidenced by the events of the last 24 hours. I don’t see how the field is better off.
Everything about it screams amateur hour, from the language and timing of the press release to the fact that they didn't notify Microsoft, and how they apparently completely failed to foresee how employees and customers would react to the news. Ilya saying the circumstances of Altman's removal "weren't ideal" shows how naive they were. They had no PR strategy to control the narrative and let rumors run wild.
I doubt he returns, now he can start a for profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to Saudis to raise billions for an Nvidia competitor - https://news.ycombinator.com/item?id=38323939
Have to wonder how much this was contrived as a win-win, either OpenAI board does what he wants or he gets a free out to start his own company without looking like he's purely chasing money
This story that they want him back turns it from amateur hour to peak clownshow.
This is why you need someone with business experience running an organization. Ilya et al might be brilliant scientists, but these folks are not equipped to deal with the nuances of managing a ship as heavily scrutinised as OpenAI
Or little things like your $10b investment partner having a pissed off CEO and massive legal team ready to strike now. It’s such fucking amateur hour it’s incredible.
It’s unclear what Ilya thinks keeps the lights on when MSFT holds their money hostage now. Which is probably why there is desperation to get Altman back…
Sorry how could MSFT hold the money hostage exactly? Isn't that kind of investment a big cash transfer directly to OAI's bank account? Genuinely curious
> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.
Microsoft’s “investment” is mostly cloud compute credits on a giga scale. OpenAI has pretty much free rein of every otherwise unallocated azure GPU host, and a lot of hardware spun up just for this purpose.
If Microsoft considers this action a breach of their agreement, they could shut off access tomorrow. Every OpenAI service would go offline.
There are very few services that would be able to backfill that need for GPU compute, and after this clusterfuck not a single one would want to invest their own operating dollars supporting OpenAI. Microsoft has OpenAI by the balls.
Microsoft has Microsoft by the balls. They just integrated GPT4 with their browser, search engine, and _desktop operating system_. It would be a mess to suddenly take all this functionality out. They have too much to lose by turning off compute for OpenAI.
Actually wild to think about how something like this can even be allowed to happen, considering OpenAI has (had) a roughly $90B valuation and is important to the US from a geopolitical-strategy perspective.
Comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI. You basically have impulsive children in charge of a massively valuable company.
Sure, it's important in some ways, but most corporations aren't direct subordinates of the US Government.
The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.
The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit entity owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So that means on top of the usual failure modes of a for-profit enterprise that could lead to the CEO getting fired, you also get other possible failure modes including ones grounded in pure ideology since the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than its profitability, which is uh well, it's a bit more tenuous.
All of them are when they become national security concerns. The executive branch could write the OpenAI board a letter directing them on what to do if there were a national security need. This has been done many times before, usually limited to the defense industry in wartime, but as Snowden showed, it has been done in tech as well.
Except that is literally not true and the Government loses in court to private citizens and corporations all the time because surprise: people in America have rights and that extends to their businesses.
In wartime, pandemics, and in matters of national security, the government's power is at its apex, but pretty much all of that has to withstand legal challenge. Even National Security Letters have their limits: they're an information gathering tool, the US Government can't use them to restructure a company and the structure of a company is not a factor in its ability to comply with the demands of an NSL.
The PATRIOT act extended the wartime powers act to apply in peacetime, and there are other more obscure authorizations that could be used. I used to work in the defense industry. It was absolutely common knowledge that the government could step in to nationalize control (though not the profits of) private industry when required. This has been done in particular when there are rare resources needed for supersonic then stealth technology during the Cold War, and uranium in the 40’s and 50’s.
These things happen. ICANN controls DNS deeply and they were trying to sell off .org and you know what stopped them? California’s AG has some authority on non-profits in California.
That’s right. Worldwide DNS control and it was controlled by a non-profit in California. And that non-profit tried to do something shady and was kept in line simply because of California law enforcement.
> I think they could have won in the court of public opinion ... [but] they tried to skewer him, and it backfired completely
Maybe we have different definitions of "the court of public opinion". Most people don't know who Sam Altman is, and most of the people who do know don't have strong opinions on his performance as OpenAI's CEO. Even on HN, the reaction to the board "skewer[ing] him" has been pretty mixed, and mostly one of confusion and waiting to see what else happens.
This quick a turnaround does make the board look bad, though.
If this (very sparse and lacking in detail) article is true, is this a genuine attempt to get Altman back or just a fillip to concerned investors such as Microsoft?
Does OpenAI's board really want Altman back so soon after deposing him so decisively?
Would Altman even want to come back under any terms that would be acceptable to the board? If "significant governance changes" means removing those who had removed him, that seems unlikely.
The Verge's report just raises so many additional questions that I find it difficult to believe at face value.
If Sam does come back, Ilya's maneuver will have been a spectacular miscalculation. Sam would return much stronger than before, and the people who cared about OpenAI's original mission will have massively damaged their reputation and credibility. They threw all the influence they had out the window.
Well that's just wrong. Before OpenAI I would've agreed with you, but since OpenAI's rise to prominence there has been a noticeable increase in its coverage in mainstream media outlets featuring Sam. People still read the Times.
I received messages from a physician and a high school teacher in the last 24 hours, asking what I thought about "OpenAI firing Sam Altman".
I've been deeply "in tech" for 40 years, and never heard of Sam Altman until he was fired from OpenAI. "Tech" isn't one thing though, it's a very diverse thing with many different areas of interest. I'm not really that interested in AI, so no, I'm not going to care who the players are in that arena. My interests lie in other "tech".
I think it depends on what you mean by ‘non-tech’ and ‘knows’. Reasonable interpretations of those words would see your statement as obviously false.
I agree that he doesn’t have a huge amount of name recognition, but this ousting was a front-page/top-of-website news story so people will likely have heard about it somewhat. I think it’s in the news because of the AI and company drama aspects. It felt like a little more coverage than Bob Iger’s return to Disney got (I’m trying to think of an example of a CEO I’ve heard about who is far from tech).
I think it is accurate to say that most people don’t really know about the CEOs of important/public companies. They probably have heard of Elon/Zuckerberg/Bezos, I can think of a couple of bank CEOs who might come on business/economics news.
My 60-year-old mom isn't tech savvy and always asks me for help with her computer. You wouldn't expect her to know about Sam Altman, but she's actively sending me articles about this fiasco.
I would've said the same thing about ChatGPT itself. You could've knocked me over with a feather when they announced that they'd grown to 100 million weekly active users.
I know, personally, a dozen or so non-tech people who know of Sam, mostly because they listen to podcasts or consume other news sources that tell them.
Generative AI's ubiquity has nothing to do with Sam Altman's notoriety. People can know the former without knowing the latter. It's not as though he relishes celebrity like other famous CEOs (Musk).
That would also remediate the appearance of total incompetence of this clown show, in addition to admitting that the board and Sam don't fit with each other, and restore confidence for the next investor that their money is properly managed. At the moment, no one would invest in a company that can be undermined by its non-profit, with a (probably) disparaging press release a few minutes before market close on a Friday evening, for which Satya had to personally intervene.
What matters is what investors think, and by majority they seem very unhappy with all of this.
Speaking for myself, if they had framed this as a difference in vision, I would be willing to listen. But instead they implied that he had committed some kind of categorical wrongdoing. After it became clear that wasn’t the case, it just made them look incompetent.
Sure, but Microsoft can sever the relationship if they want to. Thrive can choose to revoke their tender offer, meaning employees won't get the money they were expecting. New firms can decline to ever invest in OpenAI ever again.
There's a lot more to this than who has explicit control.
And I'm sure Google would jump at the occasion to fund the nonprofit and keep MS out while they develop their own.
The funding goal for OpenAI was just $1B. A small price for Google to pay to neuter one of its competitors' exclusive access to the GPT models.
Sure, but there's no research to be done without money for compute and salaries for researchers, which is the entire reason the for-profit company was spun out underneath the non-profit — they needed money. And who would give OpenAI money right now, given that the board ousted the popular CEO in a coup without consulting or even notifying investors?
I bet it was multifaceted. By firing Sam this way they nuked their ability to raise funds because anyone investing in the "for profit" subsidiary would have to do so with the understanding that the non-profit could undermine them at a whim.
Also, all the employees are being paid with PPUs which is a share in future profits, and now they find out that actually, the company doesn't care about making profit!
A lot of top talent with internal know-how will be poached left and right. Many probably going to Sam's clone that he will raise billions for with a single call.
>Also, all the employees are being paid with PPUs which is a share in future profits, and now they find out that actually, the company doesn't care about making profit!
Maybe. But on their investing page it literally says to consider an OpenAI investment as a "donation" as it is very high risk and will likely not pay off. Everyone knew this going into it.
1 - not running a move like this by the company that invested a reported $10 billion;
2 - clearly not having spent even 10 seconds thinking about the (obvious) reaction of employees on learning the ceo of what seems like a generational company was fired out of the blue. Or the reaction to the (high likelihood) of a cofounder following him out the door
3 - And they didn't even carefully think through the reaction to the press release which hinted at some real wrongdoing by Altman.
3a - anyone want to bet if they even workshopped the press release with attorneys or just straight yolo'd it? No chance a thing like this could end up in court...
They've def got the A team running things... my god.
Agreed. Somewhere in Seattle, Satya said "Now Witness the Firepower of this fully Armed and Operational Army of Lawyers."
If there ever was a time for Microsoft to leverage LCA, it is now. There's far too much on the line for them to lose the goose that has laid the golden egg.
Sorry but that is ridiculous. The wording of the PR blurb is not what makes gears move in a giant like Microsoft.
I agree the board botched this. But this, in my view, is confirmation that they are amateurs at corporate political games, that is all.
But this also means that Sam Altman's "vision" and Microsoft's bottom line are fully aligned, and that is not a reassuring thought. Microsoft, one hears (see "5-foot pole"), even puts ads in their freaking OS.
“I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.”
^^
I don’t think the wording of the “press release” is an issue.
This is a split over an actual matter to differ about: a genuine fork in the road in terms of pace and development of AI products, and, a CEO which apparently did not keep the board informed as it pursued a direction they feel is contrary to the mission statement of this non-profit.
The board could have done this in the most gracious of manners, but it would not have made a bit of difference.
On one side we have the hyper-rich investor "grow grow grow" crowd, their attendant cult-of-personality wunderkind, and his or her project; on the other, a bunch of geeky idealists who want to be thoughtful in the development of what is undeniably a world-changing technology for mankind.
You're (willfully, I think?) conflating two things.
However, the way they told the public (anti-Sam blog post) and the way they told Microsoft (one minute before the press release) were both fumbles that separately could have played out differently if the board knew what they were doing.
You mean that Satya Nadella, the CEO of Microsoft, is behind the drive to reinstate Altman as CEO, right? Because if you mean he was behind Altman's ouster, I'll happily take your money; let me know what your terms are. :)
it will definitely not do any of that, because (a) a crypto wallet has nothing to do with your identity, (b) nobody except the gullible will put their permanent biometrics information in the hands of a private company on purpose, (c) especially not if that private company is led by someone who repeatedly, demonstrably plays fast and loose with laws and regulations, especially around those having to do with privacy and ownership. It's an even wilder, less justified play than your other average shitcoins, which at least have some kind of memetic value.
A crypto wallet can be easily tied to a hash of your real world identity which can then be used to sign into a website or sign a transaction verifying your identity. Already being done.
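The "hash of your real world identity" idea can be sketched as a salted hash commitment: a service stores only the digest, and you later prove ownership by revealing the preimage. This is a minimal illustration of the general technique, not Worldcoin's or any wallet provider's actual protocol; the identity string and SHA-256/salt scheme here are illustrative assumptions.

```python
import hashlib
import secrets

def commit_identity(identity: str) -> tuple[str, str]:
    # Salted SHA-256 commitment; the random salt prevents
    # dictionary attacks on guessable identities (emails, names).
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + identity).encode()).hexdigest()
    return digest, salt

def verify_identity(identity: str, salt: str, digest: str) -> bool:
    # Recompute the digest from the revealed identity + salt and
    # compare in constant time against the stored commitment.
    candidate = hashlib.sha256((salt + identity).encode()).hexdigest()
    return secrets.compare_digest(candidate, digest)

# Hypothetical enrollment and later sign-in check:
digest, salt = commit_identity("alice@example.com")
assert verify_identity("alice@example.com", salt, digest)
assert not verify_identity("mallory@example.com", salt, digest)
```

A real deployment would bind the commitment to a wallet keypair and have the wallet sign a challenge, but the point stands: the service never needs to store the raw identity, only the digest.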
He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.
Worldcoin doesn't store iris information; it just stores a hash for verification. It's an attempt to make sure everyone gets one, and to keep things fair and more evenly distributed.
(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
Sam Altman is 'rethinking capitalism' in the same way a jackal rethinks and disrupts sheep flocks. Are we thinking about the same guy? I'm thinking of this one: https://www.youtube.com/watch?v=KhhId_WG7RA
the rules would definitely change. Would you want a popped collar fail-upwards guy who creates a crypto scam to be part of the rule making structure, or would you prefer that not to be the case?
Satya drove the removal of Sam, or drove the board to get him back?
From Greg's tweet, it seems like the chaos was largely driven by Ilya, who has also been very outspoken against open source and sharing research, which makes me think his motivations are more aligned with those of Microsoft/Satya. I still can't tell if Sam got ousted because he was getting in the way of a Microsoft takeover, or if Sam was trying to set the stage for a Microsoft takeover. It's all very confusing.
The latter. Microsoft didn't know about the firing until literally a minute before we did, and despite a calm response externally, there's reports Satya is furious.
Also, there's no real evidence of Microsoft being philosophically opposed to releasing model weights. That's entirely come from the AI safety people who want models with reactively updated alignment controls. If anything having model weights would mean being able to walk away from OpenAI and keep the thing that makes them valuable.
I hope that group starts a new version of OpenAI, using the credibility and popularity gained to achieve the original vision of safe, free, and open AGI for the betterment of humanity.
With Altman gone and the direction of the board being to limit commercial growth, their investment is at risk, and their competitive edge will evaporate, especially if businesses switch to other LLMs as they surely will over time. Altman will also become a competitor.
If instead they are able to pull off a complete transformation of the nonprofit and oust Ilya, they will also lose a core technical leader and risk their investment while being left with the odd dynamic of a parent nonprofit.
Perhaps they could orchestrate some kind of purchase of the remaining portion of the subsidiary. Give Altman the CEO title and move forward while allowing the nonprofit to continue their operations with new funding. This doesn’t solve the Ilya problem but it would be cleaner to spin it off.
This is one of those things that I'll ignore. Just tell me the outcome when it's over. The older I get, the more I can't stomach this stuff. It applies to pretty much all news recently.
I mean, I don't see how anyone can expect him to come back after the public bridge burning and condemnation. I think it's reasonable to think that, if anyone were in his shoes, they, too, would feel too hurt/exasperated to go back to the people who caused all of... this to them.
He already has a fantastic proof of concept. Sure, it will take a little longer, but he can easily raise the billions in funding and find the people to build it.
Because it will cost billions to build and run and the builders have no clue how to get the money, and once they have something built, how to keep the money flowing in.
Like how Steve Jobs invented computers and the iPhone, this entire rotten American tech culture of effective altruists and randian freaks, it's one big cult. That's why it looks so freakish from the outside, it's end stage hyper capitalism.
Whether or not you agree about the board's decisions, you cannot deny that this has been some excellent drama to gossip over the past 24 hours, like a soap opera but with real consequences that might only be apparent years from now.
We have to ban such accounts. I don't want to ban you because you've also posted good things, but we really need you to use HN as intended from now on. If you'd please review https://news.ycombinator.com/newsguidelines.html and do that, we'd appreciate it.
Privacy is less important than AGI and any privacy you think you have is an illusion, only maintained while it's profitable for you to think you have privacy. Accept the reality and grow the hell up
I am not sure it really makes sense for the investors to continue with OpenAI. It seems like this incident proved that the constitution of the company wasn't really compatible with a profit focus. It was a split from the beginning, and obviously a questionable arrangement. It just came to a head to make it obvious.
The business and investment people want to make money. Many of the researchers want to take their time and build better and safer models and don't care about money in the short term at all. They are two different goals.
It's easy for business and investment people to say that they are concerned with safety and research, and I believe them to a certain degree. But they have $10 billion reasons to focus on the actual business instead of research and safety.
Maybe. Or maybe they can profit differently going forward. Sam is not the reason OpenAI is gangbusters, its tech is. What I’m not hearing in any of this reporting is how Sam Altman is somehow the secret sauce.
And it’s because he isn’t. This is “rules for thee but not for me”. He was a bad fit, 2/3 of the board ousted him, and investors are mad because they didn’t feel included.
You know, like how they include employees in layoff decisions and don’t blindside them.
Sam Altman has spoken about “firing fast” when someone is a bad fit. He got fired fast, because he was a bad fit. That’s the obvious conclusion.
> And it’s because he isn’t. This is “rules for thee but not for me”. He was a bad fit, 2/3 of the board ousted him, and investors are mad because they didn’t feel included.
The fact that they're openly considering bringing him back should tell you that he's not just some random person whose job anyone can do. He's extremely well connected and was the face of the company - the face of deals that the company made. And you have to consider whether internally the employees are supporting this - if I were at OpenAI I would be pissed that the board decided to fuck around when we were doing so well.
If I were at OpenAI I'd side with tech leadership and research over business leadership and an even better compensation package, but that is an irrational bias on my part. It seems like the non-profit is being persuaded by partners not to mess with the revenue stream. Perhaps Ilya would be a better fit at Keen. But Carmack is all about moving fast and breaking things too, with no concern for ethics or safety, just the science. So maybe not there either.
Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.
If this is the case, good time to start hiring away engineers to another firm.
He may be the face, but faces change. Sam Altman isn’t the only person capable of taking the reins. There is nothing about him that is more “magic” in this case, because the tech has always been their selling point. I think any competent CEO could sell the hell out of OpenAI right now.
In as so far as bringing him back: I don’t know the validity nor veracity of those discussions. That news hit a little fast to me to have been fully fleshed out. Not saying it’s untrue, but “some of the board” talking isn’t the same thing as all of the board, either
I don't know about OpenAI, but I suspect that Altman was the one making the big deals, personally. Companies (like Microsoft) are not going to like the idea that:
a) A company they've partnered so heavily with is changing things up
b) That the change-up is to their point-person
It's not about whether another CEO could steer the ship, it's about the previous context and relationships that, regardless of skill, are going to have to be rebuilt carefully when you just rip out the point-person.
> Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.
Altman supporters have resigned. I have yet to see reports about rank and file people leaving en masse.
It could happen still, but it’s not obvious that it will.
As far as relationships go, they can build those. I doubt anyone who has access to OpenAI tech wants to give that up, so there is enough leverage on that to smooth things out
I think everyone is overvaluing what Altman brings to the table in terms of staying power. I don’t think MS is willing to simply throw in the towel on OpenAI because Altman isn’t there there anymore.
They’d be fools to do that if there is a path forward here. Short of them announcing on Monday that they are no longer selling their offerings, I don’t see how there won’t be a path.
Business is business, as all the VCs love to say; there is no room for emotion in this, right?
That all depends on how much one believes OpenAI tech is far enough ahead that it’s too painful to replace, and that there is no way an arrangement can be established that is mutually beneficial
It’s a private company; options are worth nothing until proven otherwise anyway, in most cases. I wouldn’t weigh them very much either way, and I hope they’re smart enough to make the same distinction
He's extraordinarily talented by all accounts, especially at fundraising. But his Apple was Loopt. He's not the secret sauce but a facilitator - more Tim than Steve. Perhaps like Steve was to Pixar.
"The start-up company must either cross or die, but what value is life if to gain it one has to go against one’s best self?" - Moore, Crossing the Chasm, p. 75
his arc doesn't line up with Jobs. Loopt was how he got his foot in the door at YC as a founder.
at YC he made a name for himself, built the rolodex, and learned how to build startups to the point that he turned OpenAI into a rocketship and now has unlimited access to capital and talent to build another one.
If money and talent is all you need, he's gonna win. But I think this doesn't work for research talent, it's more for commercialization. Think PARC vs ethernet/adobe etc. IDK if AI is really at that stage yet.
100%. Hear hear. Right now it's very visible who knew about OpenAI before ChatGPT. It's somewhat frustrating to find 99.9999% of people had the standard genius-warrior-hero-CEO narrative locked in. I don't think even Sam in his most private of moments would come close to endorsing some of the hyperbole I've seen the past day.
Especially since it seems it's either Sam or Ilya. Now Sam might surely be preferred as CEO, but after the Anthropic split, I think Ilya is even more crucial to the continuity of OpenAI. Many of the researchers joined OpenAI for the "Open" part. Sure, money attracts people, but we saw that it was that ideal which attracted the best set of people, not money, or it would have been Microsoft/Google/Apple who came up with GPT instead of OpenAI.
Yes, I know that technically this is Google's research they are using, but the execution, efficiencies, and tuning are what matter more in this case.
1. There is so much momentum around OpenAI that it would be tough to walk away
2. I imagine they could find a better way to setup governance so the nonprofit entity still exists but the for profit entity has a bit more autonomy
On Monday they’ll form OpenAI 2.0 S-Corp and move a bunch of ether into it. Investors will love the new structure and support an even higher valuation. Ilya and co will maintain a non-profit thing that probably refocuses on pure AGI. Everybody wins; why didn’t they do it from the start?
There are still plenty of long investors out there. Amazon barely made a profit for many years and Bezos made it very clear that a quick profit wasn’t his focus.
They are plenty weird enough for my taste already. Seriously. What an incredibly turbulent 15 years it has been. So far. And it may well get a lot more turbulent still.
oh yeah. remember what i wrote about this 7 years ago https://dercuano.github.io/notes/wwiii-genesis.html and that was before agi, spacex, the unfolding of brexit, the us huawei sanctions, or the solar energy explosion
not that those are necessarily bad in all ways but they sure do contribute to unpredictability
the climate thing is pretty important but at this point manageable
financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression
ukraine was one of the major topics of the note i linked, though not explicitly mentioned
covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for
> the climate thing is pretty important but at this point manageable
For humanity as a whole: yes. For individuals who happen to live in the wrong spot: not so much.
> financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression
Maybe societies are like people in that sense: that they adapt to certain kinds of illnesses and at some point they are no longer quite as susceptible as they were the first time? After all, capitalism itself was tested for the very first time then and it did survive and what doesn't kill you makes you stronger.
> Ukraine was one of the major topics of the note i linked, though not explicitly mentioned
Ok.
> covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for
If there ever was a wake-up call that should have been it.
But it's like as if it never happened and as if it has definitively ended and nothing like it can ever happen again. Weirdest experience in my life to date.
This is at the level of a national security or foreign policy disaster. If you think anyone that was involved will ever work in tech again, pass me whatever you're smoking
A. Core team members leaving (perhaps more threatening to leave).
B. (maybe more likely) Nadella told Sutskever that he might shut off funding or restrict compute resources if he didn't reverse course, or at least, wasn't able to retain talent (see A).
I've yet to see a good one. And even if: how you do something is often as important or even more important than that you do something. And on the 'how' bit the board just utterly failed. This is the most watched company in the world right now on the tech front, you can't just oust the CEO without a very good plan. If you do that kind of thing on a whim you are not fit to serve on the board of any company, but especially not on the board of this one.
Why aren’t we holding CEOs to the fire when they lay off thousands of people in what is effectively an email? That's somehow okay, but a CEO being ousted suddenly and it’s all-hands-on-deck bad optics?
The board had a mandate-level share of votes for the replacement of Sam (2/3 of the board voted yes). That's conviction.
They may have had the votes, but they didn't have the finesse and they may have forgotten who their ultimate paymasters are, regardless of the non-profit. Note that another 10 people or so have just as much power as the board (the key people behind all of the AI work) and three of those had already walked. That's a very powerful signal that they mis-handled this.
Which is to say, they were likely Altman supporters. Which is fine! They’re free to do as they wish.
However if that’s it (and it does remain to be seen if more happens or not) then 2/3 of folks stand by the decision, which would match with the board votes
So then the question is: how much weight do these votes actually carry? Because if the votes are cast by people who in turn don't have much support themselves (both within — employees, other board members — and without — shareholders in the for-profit, donors and founders), then they can fire the CEO but they may not be able to make it stick if it isn't for an extremely good reason.
I haven't seen that reason yet, though I don't rule out that one exists, and even then you'd have to do this in a way that doesn't ruffle the feathers of your ultimate paymasters. Being a board member of a large company is an exercise in diplomacy, not in bull-in-a-china-shop level incompetence.
Whether you side with Sam or Ilya on rollout speed and approach to safety, the breathtaking incompetence of this coup has already incinerated a stunning amount of trust. One could hardly ask for a better demonstration that it matters not only what you do and why, but also how - above all, how you treat the ones who rely on you.
There is a concept of a boardroom coup, which is what this was. The lack of decorum, the unceremonious and abrupt exit at the highest levels including removal of the chairman of the board, and the damning statement implying misconduct are very serious and substantial moves for any board to make, never mind one of an org the size of OpenAI. They did not consult with investors, did not attempt to negotiate with the CEO or ask for a resignation, or do anything of the sort to resemble reasonable process. Usually you retain some form of legal counsel before making such a substantial move, but given the traditionally lax oversight of non-profits, it doesn't seem like this was the case.
It was only at the abrupt all-hands meeting they called on a Friday night that it became clear that Ilya Sutskever was at the center of it. He had his disagreements, pushed the board into making such an abrupt move, and then went on to say something like "oh, I agree it wasn't the ideal way to do it". It's very clear this was a power struggle, not malfeasance on Sam Altman's part (per the words of OpenAI's CTO). At least so far, it boils down to: Ilya didn't like feeling sidelined, so he took things over. And now it's clear the board that sided with (or rubber-stamped?) Ilya just wasn't prepared for the consequences.
They're pretty close to having a different corporate structure IMPOSED on them next week.
Spare me the whole "but as a non-profit the board has a responsibility to their mission and charter". Someone has to pay for all those GPUs. If they're going to take a hard line against launching actual products then they can look for donors and see how far they get.
Let's wait for more details on what actually went down.
It's entirely possible Sam was exploring sales or new commercial ventures behind the board's back, or pressuring the business to side-step the oversight and safety mechanisms expected by the overarching non-profit mission. The timing with the dev event is suspect. It sounds like something came out that the board and research organizations were unaware of.
There's no indication that OpenAI wants to terminate existing or future commercial ventures.
Why would they accept a different corporate structure where they get fired? What do they get out of the deal? Currently, OpenAI is essentially owned by the board as a nonprofit.
Some reporting indicates that there was no malfeasance on Sam’s part. There would be no reason to terminate Sam under these circumstances except as a pure power play.
We're not liable to get the facts in an objective and satisfying way here. That makes speculation even less useful, of course. People are liable to pointlessly disagree about what happened here for some time, until (fortunately) people mostly stop thinking about it like with any other gossip.
Democracy is a blunt instrument, for disputes you can't resolve by talking. And the smaller the voting pool, the more you can try to reach accommodations rather than the majority imposing on the minority.
The two thirds can undoubtedly do this. But the whole structure is in a bad way if they actually do.
It's becoming a bit of a weasel word in these discussions. I keep hearing it thrown around with nobody specifying how rolling out ChatGPT to more people makes the world "less safe". It's a laugh line at this point.
outside speculators extrapolating from second-hand rumors and Twitter vibes aren't making coherent sense? Shocking.
Perhaps it's as simple as insufficient oversight and moderation of the GPT store. Or perhaps there's too much legal risk in the expanding scope of b2c services, which could then threaten the existence of the research organizations. Who knows?
Maybe we should stop treating this like sports ball or politics
Now seems like the perfect time for him to go raise, bring over the team he wants, and retain ownership and control in a more effective way than a non-profit. Idk why he would go back without BOD capitulating a lot.
How much IP is captured at OpenAI though? Not to mention the hardware. I’m not super familiar with ML in practice but I have to imagine he would still be set back by a significant amount of time
The hardware is in the hands of Microsoft, Google and Amazon. The IP is in the heads of top AI researchers. Whether any of these methods are patentable is an open question since it’s all secret sauce and once the model weights are trained there’s no way to know which methods went into the training.
The current deal with MSFT, as cut by Sam, is structured in such a way that Microsoft has huge leverage. Exclusive access, exclusive profit. And after the profit limit is reached, OpenAI will still need to be sold to MSFT to survive.
This is about the worst possible deal for OpenAI, whose goal is to do things the open-source way; it can't do so because of this deal.
If it were not for the MSFT deal, OpenAI could have open sourced and might have resorted to crowdsourcing, which might have helped humanity.
Also, quickly reaching profit goals is only good for MSFT. There is no need to actually send money to the OpenAI team; just provide operating expenses with 25% profit and take 75% profit. OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes for taking profit.
And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors who gave with humanity's goal in mind, whose funding made it all possible?
I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
It's insane to me how easily Sam's side can spin the board firing him for violating the company's Charter and then not backtracking at all as "Within 24 hours the board has come crawling back, I- er Sam Altman might deign to return if they grovel hard enough and I'm given complete control."
This is really, really clearly incestuous tech media stuff as part of a pressure campaign. Sam is the darling of tech media and he's clearly instigated this reporting, because they're reporting his thoughts and not the Board's in an article that purports to know what the Board is thinking. The investors who aren't happy (the point of a non-profit is that it is allowed to make investors unhappy in pursuit of the greater mission!) have an obvious incentive to join him in this pressure campaign, and then all he needs for "journalism" is one senior employee who's willing to leave for Sam to instead tell the Verge that the Board is reconsidering. Boom: massive pressure campaign and the perception of the Board flip-flopping without them doing any such thing. If they had done any such thing and there was proof of it, the Verge could have quoted the thoughts of anyone on the Board, stated it had reviewed communications and verified they were genuine, etc.
> This is really, really clearly incestuous tech media stuff as part of a pressure campaign.
There’s no evidence of that, only your assumptions. Lots of comments from knowledgeable folks outside the media and who couldn’t care less about a “pressure campaign”, even if it did exist, think the board was clueless and got duped into making a huge mistake with the coup.
> > This is really, really clearly incestuous tech media stuff as part of a pressure campaign.
> There’s no evidence of that
The leaks themselves, whether or not based in fact, are evidence of that. The only reason for someone in a position to be taken credibly reporting the information contained in either this Verge article or the Bloomberg article (similarly sourced but with a slightly different narrative) to take that information to the media, whether or not it is true, is to use public pressure to attempt to shape the direction of events.
EDIT: To be clear, it's evidence of the "pressure campaign" part; to the extent that the "incestuous tech media" part has any substantive meaning, I'm not sure it's evidence of that.
Agreed the article is more about probing the board’s weakness.
But the board seems to have a weak hand. It can decide to disappoint the for profit investors. But it doesn’t own Sam, or the vast majority of the workers, and maybe not much of the know how. And they can walk if the board disappoints them.
The board’s altruism might be great, but it lacks the legal tools to do what it wants, against organized labor backed by unlimited capital.
That's only because the key players have no reason to compete.
They don't want to run a developer/enterprise ChatGPT platform.
Google cares about Search, Apple about Siri, Meta about VR/Ads. But those three are investing heavily in their own LLMs, which at some point may best OpenAI's.
I'd like to hear more about the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.
Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor. Similar to what happened with Stanford's ousted president but more expedited if matters are more pressing.
Which is why investors that signed on got a giant warning saying that the Board could choose to be unprofitable if they wanted and that the mission came first, so they should view their investment as a donation.
What we're seeing is that (predictably) the investors have decided that they don't care that they agreed with it and they don't want to lose out on their investment.
What remains to be seen is just how closely the board holds the charter to their hearts and whether the governance structure that was built is strong enough to withstand this.
If this had been presented as a bad outcome or even an improbable one I would agree. And of course you can just complain and do whatever you want in general within the boundaries of the law, free speech, etc.
But if you sign an agreement saying you understand you should treat your investments more like donations and that everything is secondary to the goals of the non-profit and then are upset that your goals were not placed in higher priority than the charter of the non-profit, I'm going to reserve the right to think you're a hypocrite.
Neither Microsoft nor anyone else said they deeply believed in and prioritized OpenAI’s charter over their own interests. They might have agreed to it, and they must abide by agreements, but this is not a case of claiming one set of principles while acting contrary to them.
MSFT likely has the option to claw back some of their money; they could also cease providing OpenAI GPUs. At the extreme, they could do so tomorrow based on their perception of the contract dispute, although such action would risk reputational damage to MSFT.
At a minimum, taking your largest supplier and customer for a ride is probably a bad idea.
It wouldn't be a good idea in a regular business, for sure.
But non-profits aren't a regular business and their ultimate obligation is to their charter. Depending on just what the level of misalignment was here, it's possible that the company becoming nonviable due to terminating Altman is serving the charter more closely than keeping him on board.
No one posting here has enough detail to really understand what is going on, but we do know the structure of OpenAI and the operating agreement for the for-profit LLC make it a mistake to view the company from the lens as we would a regular for-profit company.
The damage would be far more than reputational - Microsoft are starting to roll out “AI with everything” on most of their money-making products. That’s all provided by OpenAI, they can’t just pause for 6+ months and re-implement it with a new provider.
There is no world in which MSFT ponied up 10 Billion without an escrow of the model. Depending on the partnership terms and next steps of OpenAI - this could be a significant breach.
Given the complex org structure, I wouldn’t be surprised if the non-profit (or at least its board) wasn’t fully aware of the contract terms/implications.
Most of the money likely wasn't paid, and is based on milestones and future returns. I imagine MSFT has some solid licensing bought with their initial payments.
The distinction between the two is whether an organization returns funds to investors. OpenAI, Inc. does not. OpenAI Global LLC does, one of those investors being OpenAI, Inc. (well, indirectly, there's another holding company in between.)
The non-profit doesn't have investors. OpenAI Global, LLC isn't the non-profit; it's a for-profit over which the non-profit has complete governance control.
There's nothing insane at all. The board has the freedom to make their choices, but they must own the consequences of those choices. This appears to be a choice they made which has had obvious consequences they miscalculated. Having to walk it back to some degree would prove their inability to effectively plan and govern.
It might, if that had actually happened. There is not good evidence that this has actually happened, and it's just a fact that Sam + investors are going on a massive pressure campaign to try to regain control that they are not legally entitled to, including a media blitz.
Moreover, there is an impartiality issue here in the tech press. A lot of the tech press disagree with the OpenAI Charter and think that Sam's vision of OpenAI as basically Google but providing consumer AI products is superior to the Charter, which they view in incredibly derogatory terms ("people who think Terminator is real"). That's fine, people can disagree on these important issues!
But I think as a journalist it's not engaging fairly with the topic to be on Sam's political side here and not even attempt to fairly describe the cause of the dispute, which is the non-profit Board accusing Sam Altman of violating the OpenAI charter which they are legally obligated to uphold. This is particularly important because if you actually read the OpenAI Charter, it's really clear to see why they've made that decision! The Charter clearly bans prioritising commercialisation and profit seeking, and demands the central focus be building an AGI, and I don't think a reasonable observer can look at OpenAI Dev Day and say it's not reasonable to view that as proof that OpenAI is no longer following its charter.
Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.
> Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.
I think you might have better luck grasping the situation if you put a little bit more effort into understanding it rather than jumping to put words in the mouths of others. Nobody said whether they support the non-profit charter or not in the first place, and as far as the phenomena of what's happening right now, the non-profit charter has nothing to do with it.
550 of 700 OpenAI employees have just told the board to resign. Altman is going to MSFT and taking his org with him. Regardless of what the board says, who do you think really has the power here -- the person who has and already had the full support of the org he built around him, or a frankly amateurish board that is completely unequipped for executing on a highly public, high stakes governance task presented in front of it?
Unfortunately, not only can you not control public opinion, but those who try often see it backfire by making clear their air of moral superiority rather than leaning on an earned mandate to govern the rank and file they are supposed to represent. The board, and it seems you, will simply be learning that lesson the hard way.
"Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
The Board has the power to determine whether Sam is fulfilling his fiduciary duty and whether his conflicts of interest (WorldCoin, Humane AI, etc) compromise broad benefit.
I wasn't clear enough that I was talking about the real reasons for the dismissal; I think the lack of candour was probably a real event, but was functionally just an excuse. There has been heaps of reporting from inside OpenAI that this was a dispute over AI safety, the values of the Charter, etc. Here's what you can find in the press release itself:
"OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period."
"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."
The board didn't claim there were violations, it claimed he wasn't fully candid with them and that they had lost confidence in his leadership. "Violations of the charter" was something invented by the upthread commenter.
There is a bunch of independent reporting citing sources inside OpenAI that the central dispute is over the principles of the Charter, and while no one is clear on the lack of candour thing my view is that it's just an excuse, there was probably a real incident(s) but it wouldn't have been fireable if not for the core dispute about the Charter. They did explicitly re-affirm the Charter in their announcement of Sam's dismissal:
"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."
While this is an insightful perspective, it doesn't change the fact that the board is revealed as incompetent. Either they made a mistake in firing him and are now walking it back (while yielding so much leverage that Altman can condition his return on the resignation of everyone who voted to fire him), or they have such little credibility that some standard-practice media collusion has created sufficient pressure to change their optimal strategy, i.e. force their hand.
If they do nothing, then public perception harms their ability to raise further capital, and employees leave for Altman's new company. If they cave to the pressure (despite that being objectively the financially right decision), they lose their board seats and Sam comes back, proving they overplayed their hand when they fired him. They're basically in a lose/lose situation, even if this article is sourced from entirely biased and fabricated information. And that's exactly what reveals them as incompetent.
Their mistake was making a move while considering only the technicalities of their voting power, and ignoring the credibility they had to make it. Machiavelli is rolling in his grave...
...or they don't respond to these pressure tactics, continue talking to their employees to ameliorate legitimate concerns, and accept that some of Sam's hires will go to join him. OpenAI's core (remembering that OpenAI's Charter doesn't demand it makes cool consumer/developer AI products, it demands OpenAI build AGI safely) is not the ChatGPT product team or admin, it is the research team that Ilya leads (or lead until a month ago when Sam tried to sideline him). The company isn't going to leave to follow Sam, or at least the scientists and engineers aren't. They've lost some technical leads that Sam hired and will probably lose more, but it's worth it to make sure that OpenAI is actually following its Charter.
> continue talking to their employees to ameliorate legitimate concerns, and accept that some of Sam's hires will go to join him
This is wishful thinking. If an employee is inclined to follow the innovation, it's clear where they'll go.
But otherwise, the point you raise is a good one: this is about the charter of the board. Many of us are presuming a financial incentive, but the structure of the company means they might actually be incentivized to stop the continued development of the technology if they think it poses a risk to humanity. Now, I personally find this to be hogwash, but it is a legitimate argument for why the board might actually be right in acting apparently irrationally.
I honestly don't know what you're talking about. This article and The Verge one just describe the likely reality of the situation.
Sam Altman was fired. 4 other key people quit and it seems more will follow and join Sam's new venture. This outcome would be a disaster for Microsoft, for other OpenAI investors and for OpenAI. So the board is, per multiple sources, talking with Sam Altman to return. The board declined to comment and is free to clarify any inaccuracies.
There's no need for a spin, the board has miscalculated and got itself in a bad spot.
Welcome to The Verge. This is the same website that pasted “reporting by Alex Heath” on the original article - which was literally just regurgitating the board’s press release and pasting in some tweets. Reporting, huh?
I don't mind the Verge, I don't agree with everything they've written but I generally like their coverage. My criticism is specific to this situation and the cult of personality in tech media around Sam Altman.
Sam is not just the darling of tech media.
He literally
1. Met with every major head of state except for Xi and Putin. He is the face of AI, not just for OpenAI, but for the entire world. The entire AI industry would hate for this to happen.
2. Led a company from a 2 billion valuation to nearly 80 billion in a year.
There is no precedent in startup history for getting rid of a CEO at this stage.
This is very US-centric thinking. For me he is just a CEO and completely exchangeable. The only thing I really remember is that the last thing he did was crawl in front of the EU and ask them not to regulate his AI business too much.
He did none of the research that fuels OpenAI's ambitions and future prospects; that's mostly done by people like Sutskever, Radford and many more brilliant scientists.
“Completely exchangeable”
Obviously people are not fungible; replacing one person with another will never produce the exact same results. The question in each case is how different the results would be, and whether the replacement would be better or worse. For a very simple job, perhaps pressing a single button, the results may only be subtly different. But what happens when it’s a complex job with no right and wrong answers, where work input is affected by output (like a chaotic system), spanning multiple areas of influence? The work output of the individual changes drastically, and just like in a chaotic system, the results for the organization diverge increasingly over time. Nobody is fungible, but of all people, decision makers like politicians, CEOs etc. are the butterfly wings flapping in New York that cause a cyclone in Japan. The only real way to evaluate whether their impact is likely to be positive is to look at previous results. Due to the rarity of top performers and their importance to these systems, they have negotiating power.
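The butterfly-effect claim above can be made concrete with a toy chaotic system. This is a minimal illustrative sketch (the logistic map is my choice of example, not anything from the thread): two trajectories that start almost identically diverge completely after a few dozen steps.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4.0).
# Two trajectories differing by only 1e-9 at the start end up nowhere near
# each other: small input differences compound into large output differences.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)

# Early on the two runs are indistinguishable; by step 40-50 they have
# fully decorrelated and the gap is on the order of the state itself.
print(abs(a[10] - b[10]))                               # still tiny (~1e-6)
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))  # order 0.1-1
```

The point of the analogy: for jobs whose outputs feed back into their inputs, like a CEO's decisions, the long-run effect of swapping one person for a "similar" one is not small just because the initial difference is.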
Dependable leaders really do have that much value to their organizations. This is similar to why, in critical areas like medicine, old-and-dependable things are valued over new and shiny ones. The older things have lower risk and a strong track record. That added dependability is more important than being the newer, "better", but riskier option. Back to this topic: how many CEOs with track records managing $80 billion AI organizations are ready to replace Altman? Because OpenAI is well ahead in the field, they don’t need big risky changes; they need to reliably stay the course.
His star power is much higher than you give him credit for. It's like saying you can replace Brad Pitt with an average actor because the writers are the ones responsible for the script.
You lose other actors who only joined to work with Brad for one. You lose part of your audience and you lose distribution and press opportunities.
If it wasn't for Sam pushing for the version that became GPT-3.5 and the popularity that followed, and most recently the GPT-4 push, we would still be waiting on the brilliant people. Google was way ahead in this space but failed to release anything.
As a developer I understand belittling the business side as providing little value but as someone who has tried to get the masses to adopt my software my respect for their ability to solve non-technical problems has grown.
> For me he is just a CEO and completely exchangeable.
Are you part of OpenAI governance, or any company's governance structure? If not, does it really matter whether someone is exchangeable or not for you?
the old saying “first time for everything” holds here, I think.
It was a mandate. Two-thirds of the board voted in favor of relieving Sam Altman of his obligations to the company. The question now is why, and how that plays out. It is clearly what the board wanted.
You are 100% right. Even those who know of his meetings with heads of state outside of the tech community probably just know that it was ChatGPT company doing it, if even that. They wouldn’t know his name.
Though I think it’s best to refrain from calling something a “dumb take”.
You seem to massively overestimate “the whole world” on their familiarity with anything about tech. Even leaving aside the billions who aren’t reading the news - I vividly remember the time a few years ago in Australia when a casual friend, recently out of high school, was shocked to learn that Microsoft makes the Xbox.
I think I agree with your facts as stated, but the absurd part of it is that a single guy was selected as "the face of AI". This happens all the time (and it's wrong): focus on the most prominent people becomes a feedback loop that takes it to ridiculous proportions.
If you look at the quote tweets on Sam's latest tweet[1] that contain just a single heart and no words, those are all OpenAI employees voting for Sam's return. It's quite a sight to see.
I think Microsoft is behind all of this. The “kumbaya let’s work together for humanity” Microsoft has been swapped out for the old Microsoft. Too much is at stake for them.
I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on. Which is why the board is suddenly open to negotiation with Altman.
Yeah, that's the Microsoft of old. Don't trust 'em.
Bad news for OpenAI, and any hope that this stuff won't be used for evil.
>I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on.
What a way to destroy confidence in Azure, or cloud platforms in general.
The deal was that MS was going to give them billions in exchange for 49% of the for-profit entity. They were also reportedly waiving the Azure bill since their interests are aligned.
MS is saying that if we give you $10 billion and don’t charge you for Azure, then there are some obvious strings attached.
OpenAI is presumably free to do what the rest of the players in this space are doing and pay for their Azure resources if they don’t want to play nice with their business partners.
Is that codified in the contract between them, though?
Microsoft, through its stock price, has much more to lose than OpenAI. They can apply pressure but don't have full control of the outcome here.
Presumably, under a contractual relationship tied to their licensing agreement with OpenAI. So, this kind of threat undermines confidence in Microsoft's contractual agreements.
To be fair, you wouldn't know they're hyper aggressive until they actually move to enforce part of the contract. Most of their "partners" probably never need to meet that side of their legal team.
If I measured the "aggressiveness" of every contract based on the potential litigation of all its clauses, I'd never sign anything.
They are pretty on the nose with explosive triggers. For example, all Windows licenses are free with a 10-year Office deal; however, if the deal is withdrawn, all Windows licenses are owed immediately upon cancellation. And this is just the basic explosive stuff; it gets worse from there.
Sure, but any clause of the contract that requires followup action to collect payment is hardly ever going to be enforced. It's only when you're partnering with them at the scale of OpenAI that you need to worry about details like that.
And in regards to OpenAI, note that (according to TFA), Microsoft has barely distributed any of their committed "$10 billion" of investment. So they have real leverage when threatening to deploy their team of lawyers to quibble over the partnership contract. And I don't think that "undermines confidence" in Microsoft's contractual agreements, given that there are only two or three other companies that have ever partnered with Microsoft at this scale (Apple and Google come to mind).
Care to sign a partnership agreement? I'll need you to personally indemnify against any defection to GP's side, of course - any subsequent comment in violation of these terms could be subject to arbitration and voiding of the partnership agreement.
Nah, it is MSFT's contingency plan in all this. You don’t invest $10B and get blindsided. It would be hilarious if it were forced to threaten the board to comply this way.
I thought one of the reasons people incorporated companies in the US is that there is a working judiciary system that can ensure the upholding of contracts. Sure the big money can apply some pressure to the dispossessed but if you have a few million cash (and OpenAI surely has) you should be able to force them to uphold their contracts.
Also imagine the bad PR from Microsoft if they decide to not honour their contracts and stop OpenAI from using their computer power for something that OpenAI leadership can easily spin as retaliation.
Sure, this latest move from the OpenAI board will wreck the momentum that OpenAI had and its ability to continue its partnership with MS, but one of the theses here was that that's the goal in the first place, and they're legally free to pursue that goal if they believe the unfolding of events goes against the founding principles of OpenAI.
That said, they chose a risky path to begin with when they created this for-profit-controlled-by-a-non-profit model.
Also not paying the rest of the tranches that would make up the $10B. Also with Microsoft being their exclusive commercial partner, they can’t revenue fund if Microsoft stops the spigots. No other investor would want to invest. PPUs lose most of their value and employees leave. How to implode the most important company of our times with record speed.
It’s also strange why they would have a couple of nobodies on the board.
When you fuck up, you get punished for it. And the OpenAI board is about to be punished. This is the problem with giving power to people who don't actually understand how the world works. They use it stupidly, short-sightedly, and without considering the full ramifications of their actions.
I have long since held the idea that "singularity" is a misleading term, and "event horizon" is much more descriptive: if you're being constantly surprised — my original phrasing was about going to sleep each night and being surprised each morning by the developments while you slept — then The Change has happened.
I don't think accelerationism is about increasingly reckless decisions made by humans in increasingly powerful positions. Or anyway, I hadn't yet been thinking of it that way.
what would the role and deliverable of the non-profit be though? so far they innovated, shipped fast, cut costs and sold Plus subscriptions and dev APIs for personal and enterprise use. and to be fair i really liked using their APIs. so what would them being fully non-profit even look like? genuinely curious.
How exactly would that happen? It’s a non profit that owns a staggeringly valuable piece of IP.
Do you understand that this is conceptually the same thing as the directors of a public art museum deciding to just take millions of dollars of paintings for themselves?
The kind of AGI you’re talking about - the one that makes people nervous about who is in charge of it - nobody can truly be “in charge” of this kind of AGI.
They decide how to bias it. They decide if they should pull the plug or not. They decide if it’s too powerful and they’re in contact with people who suggest to the government that we should bomb datacenters if so
Over the years "tech" has become less and less about making things and more and more about making your investors money. Technical talent used to be extremely important in this industry, but it has slowly been worn away over the years.
I still like working in this industry because you can still find interesting problems to solve if you hunt for them, but they're getting harder to find and it increasingly seems like making good technical decisions is penalized.
It's sad to see even on HN how many comments are so dismissive of technical skills and ambitions, though I guess we've had more than a generation of engineers join the field because it was the easiest way to make the most money.
For a brief moment on Friday I thought "maybe I'm too cynical! Maybe there still are places where tech actually matters."
Not surprised it looks like that hope will be inverted almost immediately. I also suspect the takeaway from this will be the final nail in the coffin for any future debates between engineering and people who are only interested in next quarters revenue numbers.
What else would you expect? OpenAI spun up "separate" for profit company and recruited a bunch of industry top engineers and scientists with 500k+ salaries where the vast majority of it is tied to equity grants.
Most of the employees' values do not align with a non-profit, even if those of executives like Ilya do.
By firing Altman and trying to remind the world they are a non profit that answers to no one they are also telling their employees to fuck off on all that equity they signed on for.
I mean you're describing exactly the empty technical world I've been experiencing.
So the future of AI is in the hands of leadership that's slick talking but really only there to make a quick buck, built by teams of engineers whose only motivation is getting highly paid.
I don't begrudge those that are only in it for the money, but that's not the view of tech that got me excited and into this industry many years ago.
The point of my comment is that for a moment I thought maybe I was wrong about my view of tech today, but it's very clear that I'm not. It sounds like the reality is going to end up that the handful of truly technical people in the company will be pushed out, and the vast majority of people even on HN will cheer this.
If Sam Altman wins and the likes of Ilya lose then we won’t actually have AI. Since Sam Altman doesn’t know anything about building AI. We’ll have more sharky products with grandiose visions that end up making money by using surveillance.
But I’m hopeful that AI will at least win by open source. Like Linux did. “Linux” wasn’t a 100 billion startup with a glitzy CEO, but it ate the world anyway.
but he wants to jail the model he builds. As Sam says, he should think more about what he actually wants to do, and then do it. Not go in 2 opposite directions at the same time.
From Microsoft? My point is that companies that are serious about making money (even at some indeterminate point in the future) are much better at attracting investment than those which have publicly declared it's not their goal.
Nobody is throwing billions around without expecting anything in return.
Nobody says that investors don't expect anything. However, it's pretty clear that Sam just solely focused on delivering fast in order to keep his advantage. He said he cared about AGI safety publicly, but his style of leading the company makes it clear that he didn't care.
> However, it's pretty clear that Sam just solely focused on delivering fast in order to keep his advantage
Yes, I'd assume most investors prefer this type of approach to a more cautious one. Meaning that companies like this are more likely to attract investors and more likely to beat the ones which care about AGI safety to actually building an AGI (whatever is that supposed to mean).
Equating Ilya to the average B.S. in Computer Science is like equating Sam to a used car salesman. Neither are true and both were instrumental in the success of OpenAI.
With a fairly neutral opinion of sama, I think this whole move has done more to hurt OpenAI than help. There are politics in companies all the time but it takes a certain finesse because you don’t want to sink the ship. This at best creates a stronger divide between those for and against.
>The Information reported that Altman was already meeting with investors to raise funds for such a project. One source close to Altman said that both options remained possible. “I think he truly wants the best outcome,” said the person. “He doesn’t want to see lives destroyed.”
Lives destroyed? This is either a reporting error or is a very weird thing to say. Unless the source insinuates that Altman is willing to destroy the world with a cruel AGI if he doesn't get his way.
I think he means people who have invested huge amounts of their energies for years into the venture and view it as their highest and best calling getting forced out (or feeling like they have to leave). For many of them it's not just a job.
Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?
And, do you really think they would have done this if they thought there was a likelihood of being rebuffed and forced to resign?
The answer is, no. They are not that incompetent.
I wish Sam & co the best, and I'm sure they'll move on to do amazing things. But, the recent PR just seems like spin from Sam & co, and the press has every reason to feed into the drama. The reality is that there are very smart people on both sides of this power struggle, and there's a very low probability of such a huge misstep on the board's part - not impossible but highly unlikely imo.
The only exception I can see is if Ilya & co foresaw this but decided to act anyway because they feel so strongly that it was the moral thing to do. If that's the case, I'm sure Elon's mouth is watering, ready to recruit him to xAI.
Yes, they are that incompetent, except one. D’Angelo has a history of such moves. He fired his cofounder when Quora was still doing well and growing, and Quora has been struggling ever since.
The other likely scenario: it's investors, not necessarily Altman himself, using their media connections to push a narrative to get OpenAI to take Sam back. With this being the hottest story, any credible gossip from a known name would be enough for many of these media organizations to run with it.
“Staffers were ready to resign” really? Who? How many? The deadline passed hours ago, why haven’t we seen it?
This firing very much has the feeling of the board fearfully pulling the circuit breaker on OpenAI's for-profit trajectory.
On the one hand, I actually respect their principles. OpenAI has become the company its nonprofit was formed to prevent. Proprietary systems. Strong incentive to prioritize first-to-market over safety. Dependency on an entrenched tech co with monopolistic roots.
On the other hand, this really feels like it was done hastily and out of fear. Perhaps the board realized that they were about to be sidelined and felt like this was their last chance to act. I have to imagine that they knew there would be major backlash to their actions.
In the end, I think Sam creating his own company would be better for competition. It's more consistent with OpenAI's original charter to exist as the Mozilla (though hopefully more effective) of AI than as the Stripe of AI.
Sam creating his own company for what purpose? Meta, Google, Elon's company, Anthropic, OpenAI... why would anyone believe that Sam and crew could stand up a new company tomorrow and have any kind of chance to compete with the above in the next six months? Even if Microsoft threw a ton of money at such a startup, good luck finding GPU time. Good luck sourcing the data. Good luck doing the RLHF. Could Sam and Greg do this? Sure! But what would that give them above and beyond the racers that are currently in pole position?
It now seems to me that it was inevitable that something like GPT would take off - but it didn't necessarily have to come from OpenAI. Someone else would have filled their place. The collective ML knowledge and research were rapidly evolving, computing was getting faster and cheaper. The pressure was building and at some point, something somewhere had to pop off. They were a great but not a singular team.
And it looks like now they might be very close to the limits of their own capability. I'm not sure how much more they can give.
On the surface, their new features always seem to be quite exciting. But when the dust settles it is again all very lackluster, often copied from open source ideas. Not something you can bet on.
Their biggest moats are their popularity, marketing, and their large bags of cash. The latter of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when the competitive forces commoditize this?
When listening to interviews with Sam I was always surprised by how little useful information I am able to get out of listening to him. I'm sure he's very smart but he tries to project the aura of radical honesty while simultaneously trying to keep all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.
I think you're right but there might be a catch. It doesn't seem like he's able to steer the delivery of polished products either. I know it's the fastest growing app and all that but that's driven by their tech. I use ChatGPT daily but the tool itself has a subpar design, it lags, its streaming UI is choppy, it breaks and cuts off mid-sentence, and they are not able to meet the demand either.
I don't know how this unfolds, but when somewhat smart models become a commodity, and thus the remaining 90% of the population get access to polished chatbots distributed through dominant platforms like Google, Facebook, Instagram, etc. - where does that leave OpenAI? High-end models, probably. And maybe with superintelligence unlocked that's all that's needed to win business-wise, I don't know.
The reason something like GPT didn't come out of e.g. Google is that they had/have stuff similar but were very reticent about making it public -- because of the ethical aspects as well as the sheer resource cost of hosting it to the public -- and also I suspect have naturally internal conflicts about whether it's the right direction to take, etc.
And so something like OpenAI came along where Ilya S etc. got bags of money to go take that approach and scale the crap out of it, and, yeah, they got results. Because they didn't have to be careful, or deal with competing interests.
That's all fine, but it's also no surprise when it all blows up, is it?
It for sure is not because of ethical concerns. There is a higher bar to clear to burn cash when other projects are delivering high ROI. That sort of thing will never come out of Google in that form, since the post-Google Labs days.
Sam tries to sound smart while not really having any technical insight. He does a tremendous job with it though.
One way to think about this is: at some point in the next few years we'll have a few hundred GPUs/TPUs that can provide the compute used to train GPT-3.
This discovery was always going to happen. The question is whether OpenAI made radical scaling possible in a way it wasn't before. The answer there is also no. There are clear limits to the number of colocated GPUs, NVIDIA release cycles, TSMC capacity, power generation, etc.
So in the best case OpenAI fudged the timeline a little bit. Real credit belongs to the Deep Learning community as a whole.
It's related insofar that Sam or no Sam, OpenAI might be done with their part for good. It seems like they might have GPT-5 up their sleeve but after that, I can't be certain if they have a winning horse in this race to the bottom.
It might be like betting on electricity companies to become the biggest winners just because electricity is going to be everywhere.
Rarely does getting back together with an ex work out. You miss each other in the beginning thinking of only the good things, but very quickly you start to remember why you were exes to begin with.
Just yesterday it was 100% that Sam lied to the board and was on the outs. Today, it's 100% that the board is getting fired and Sam is coming back on a white horse wearing shining armor.
Exactly my thoughts and the ex analogy; it would be extremely awkward for everybody as this event has tainted everyone's reputation at OpenAI. I'd think even MS may want to get out of the deal.
It's sort of like if they didn't have a change control process, deployed on a Friday, realized the deployment was broken, and are now hurriedly trying to roll it back before business hours Monday.
Not really a good look from a company that's leading the charge on such a pivotal technology.
For those downvoting me, I’d be interested to hear how you think it could work for him to come back and work with the people that unceremoniously fired him and made it sound like he had ethical violations when it now seems clear it was a disagreement over strategy. If there is a disagreement over strategy and he goes back what value do the others provide now?
Also, wouldn’t it seem clear that the board acted rashly in firing him without input from other stakeholders?
I am legitimately interested to know why you downvote as I don’t see another way for it to work.
Your position seems predicated on the idea that people think Sam going back is beneficial for OpenAI.
Personally this is all largely popcorn munching entertainment for me, as I don't think Sutskever is right about a lot of his core tenets, but I also don't think that Altman is a good fit for achieving the charter that OpenAI is obligated to follow.
I don't think OpenAI will be at the forefront of AI/AGI/etc. research a decade from now regardless (and held that position before yesterday's events), but if the OpenAI charter and mission statements are sincerely held beliefs, then the path they have been following for the past several years with Altman at the helm has obviously been counter to it.
Whether or not that charter is anything more than a pipe dream isn't really relevant - they're a non-profit so their legal obligation as the board is to direct the resources under their control to achieve that charter.
For what it’s worth (nothing!) I don’t believe that a rehire offer is really happening.
On the tech side, I think work will split on two tracks: 1) building great applications with small and medium fine tuned models like Mistral, etc. Within a year or two great models will run on the edge because of continuous technical improvements. 2) some players will go for the long game of real AGI and maybe they will get there in much less than a decade.
On the business side, I have no idea how the current situation is going to shake out.
If Sam starts a competing company and can pull a large chunk of the researchers and engineers over (if I were an OpenAI employee, I would be interested in following a proven success story like Sam), then Microsoft's $10B investment would be down the drain. Obviously Microsoft wouldn't want that, and I'm sure Satya has got his hands around the nuts of the Board members and is squeezing them hard (well, figuratively speaking, since there's Toner).
Microsoft hasn't actually sent them all that money yet, and a lot of it seems to be in Azure credits that they can just pull. Then what are they going to do?
The engineers are said to be relieved that Altman is gone, so it doesn't sound like they'd be following a "success story" (whatever that is supposed to mean).
Sam Altman should make it a condition of his return that the current board be dissolved and have a completely new board picked to avoid such dramatics in the future.
The board members should have a high degree of trust and cohesion. That does not mean they should see eye-to-eye on everything. But lack of trust and backstabbings like this can create a poisonous atmosphere on the board and ultimately affect the company's ability to execute effectively. If the CEO has to worry all the time if the board is going for his ouster, this will resemble the HP and Yahoo boards of the 2000s decade.
This boards needs to be fired. Every single one of them.
I don't understand how Microsoft, after having invested billions, doesn't have a board seat. If they did, I doubt this would've ever happened. I'm not sure why Microsoft let that happen.
But even ignoring that, the board making a decision as impactful as this without consulting their major investors is a dereliction of duty. That alone justifies getting rid of all of them because all of them are complicit in not consulting Microsoft (and probably others).
I have no idea why Sam was fired but it really feels just like an internal power struggle. Maybe there was genuine disagreement about the direction for the company but you choose a leader to make decisions. Ousting the CEO under vague descriptions of "communications with the board" just doesn't pass the smell test.
I'm reminded of this great line from Roger Sterling [1]:
> Half the time this business comes down to "I don't like this guy"
So much of working, performance reviews, hiring and firing decisions and promotions is completely vibes-based.
The entire setup is structured so that they are not supposed to be beholden to investors. If it is true that they ultimately are and Microsoft is the leverage to get Altman back, then they explicitly failed in the goal in setting up their structure of governance.
The fundamental thing you are missing here is that the charter of the non-profit and structure of their ownership of the for-profit (and the for-profit's operating agreement) is all designed in a way that is supposed to eliminate financial incentives for stakeholders as being the thing that the company and non-profit are beholden to.
It may turn out that the practical reality is different from the intent, but everything you're talking about was a feature and not a bug of how this whole thing was set up.
My point is that the reason for Sam's firing is irrelevant to the point that the board is incompetent and should be fired.
Valid or not, you don't blindside major investors who have given you billions. They apparently told Microsoft minutes before the announcement and after the decision had already been made. Even if you fully intend to take this course of action, you loop in your major investors and consult them or at least give them a heads up to prepare any communications they might need to make or even just their press people are prepared for the inevitable questions.
They didn't do that, according to Microsoft. That's why they need to be fired.
"The Who, What & Why of Directors & Officers Insurance
Directors and officers (D&O) liability insurance protects the personal assets of corporate directors and officers, and their spouses, in the event they are personally sued by employees, vendors, competitors, investors, customers, or other parties, for actual or alleged wrongful acts in managing a company.
The insurance, which usually protects the company as well, covers legal fees, settlements, and other costs. D&O insurance is the financial backing for a standard indemnification provision, which holds officers harmless for losses due to their role in the company. Many officers and directors will want a company to provide both indemnification and D&O insurance."
It’s very hard to have too strong an opinion here without knowing more about the actual conversations happening behind closed doors… from the outside looking in, this seems like a lot of drama, and whatever the outcome, it looks super bad for OpenAI… if anything it’ll have the effect of a lot of us shopping for alternatives. Maybe that was the intention?
There's no way anyone should continue to use OpenAI products if Sam Altman is allowed to return. Because in that scenario there won't even be a counterweight to his point of view in the company anymore, and that counterweight was the only thing keeping his extreme untrustworthiness and sliminess from being a deal-breaker.
He'll probably go to X to work for Musk, Microsoft is too big and too controlling and will eventually go full Gavin Belson (from the Silicon Valley show) and want to put GPT in a box to do Windows telemetry or something similar, while Musk can pay Altman some ungodly amount to build something with at least a vision behind it.
>One AI-focused venture capitalist noted that following the departure of Hoffman, OpenAI’s non-profit board lacked much traditional governance. “These are not the business or operating leaders you would want governing the most important private company in the world,” they said.
I'd be interested in a discussion of the merits of "traditional governance" here. Traditional private companies are focused on making a profit, even if that has negative side effects like lung cancer or global warming. If OpenAI is supposed to shepherd AGI for all humanity, what's the strongest case for including "traditional governance" type people on the board? Can we be explicit about the benefits they bring to the table, if your objective is humanitarian?
Personally I would be concerned that people who serve on for-profit boards would have the wrong instinct, of prioritizing profit over collective benefit...
Does this mean they're willing to replace Ilya Sutskever with Altman? Because it sounds like Sutskever doesn't want Altman as CEO, so I assume he will leave willingly or unwillingly if they take back Altman. Who adds most value to OpenAI?
So one day you get the CEO of Microsoft to attend your dev conference, the next day you get fired, and then another day there are negotiations to get you back. What is this? Russian roulette or a game of thrones?
I'm glad that there are other companies and open source efforts to fall back on.
As an API user of the GPT models I've always had it at the back of my mind that it would be unwise to 100% rely on OpenAI for the core of any product I built.
The recent rocking of the boat is further justification for my stance in that regard.
> OpenAI's chief strategy officer, Jason Kwon, told employees in a memo just now he was "optimistic" OpenAI could bring back Sam Altman, Greg Brockman and other key employees. There will likely be another update mid-morning tomorrow, Kwon said.
Has anyone asked ChatGPT about the situation? It seems like an obvious thing to do. When parents are arguing and about to divorce, they must listen to their children.
Not here, with the governance structure the way it is. The board has the last say. Now, there are obviously ways that investors can put pressure on the board, and with literally billions at stake I can imagine that happening. Think serious espionage-level drama.
Sure, that might be true on paper, but if you have the ability to turn off the money faucet you have a lot of leverage regardless of what the documents say. No espionage drama needed haha.
If you then imagine being dependent on that investor not only because your service runs on their infra but also your biggest enterprise customers use your service through their infra, you are even more incentivised to listen to them..
Either way, even before the firing, Ilya and Altman were not going to be working in the same office much longer. Altman seems to be the irreplaceable one because of his status/connections/leadership. Which is also good: wherever Ilya goes, it will only heat up the competition for OpenAI. Competition is good for tech.
Microsoft now has both a 49% stake in OpenAI and the OpenAI visionaries building competing products. I smell a rain of lawsuits.
Meanwhile AWS is sucking its thumb and google meet will no longer be relevant! LOL
We are a board for a company whose stated mission is to develop AI tools to better serve the needs of humanity. The CEO has secretly piloted the company towards alternate goals of profit. We fired him on Friday but think it was a mistake. Who knows. Anyways, how can we rehire him?
---
Rehiring a CEO whom you've recently fired is a delicate process, both from a legal and business standpoint. Here's a general approach you might consider:
1. *Board Meeting*: Convene a board meeting to discuss the decision to rehire the CEO. This should involve all key stakeholders to ensure transparency and agreement.
2. *Legal Considerations*: Consult with legal counsel to understand any legal implications or contractual issues arising from the initial termination and potential rehiring.
3. *Negotiation and Terms*: If the board agrees to proceed, you'll need to negotiate new terms with the CEO. This might include discussions about the future direction of the company, salary, and any conditions related to the rehire.
4. *Addressing the Underlying Issues*: It's important to address the reasons that led to the initial firing. This might involve setting clearer goals aligned with the company's mission, establishing better oversight, or implementing checks and balances.
5. *Communication*: Once an agreement is reached, communicate the decision to rehire the CEO to your employees, stakeholders, and possibly the public, depending on the nature of your company. This communication should be clear about the reasons for the reversal and the future direction of the company.
6. *Monitoring and Evaluation*: Finally, set up a system for regularly evaluating the CEO's performance against the company's mission and goals to prevent a recurrence of the previous issues.
Remember, the rehiring process should align with your company's bylaws and any relevant legal requirements. Transparency and clear communication throughout the process are crucial to maintain trust among your stakeholders.
It seems incredible that the OpenAI board would hastily bring back someone whom they hastily fired just 24 hours prior, allegedly for serious ethical reasons, something tantamount to lying to the board.
1. **Transparent Governance**: OpenAI should strive for greater transparency in its governance structure. This includes clearly outlining the roles and responsibilities of the nonprofit board versus the for-profit subsidiary, and how decisions impact each entity. This would help mitigate misunderstandings and conflicts of interest.
2. **Balanced Board Composition**: The board should be restructured to balance the interests of various stakeholders, including investors, employees, and the broader AI community. This can be achieved by having a diverse set of members with expertise in technology, business, ethics, and law.
3. **Stakeholder Engagement**: Regular engagement with key stakeholders, including investors like Microsoft and employees, is crucial. This ensures that major decisions, such as leadership changes, are made considering their potential impact on all parties involved.
4. **Leadership Stability**: To address concerns about leadership and company direction, it may be beneficial to have a stable leadership team that aligns with OpenAI's mission and values. This could involve a re-evaluation of Sam Altman's role and contributions, considering the interests of both the nonprofit and for-profit entities.
5. **Strategic Communication**: OpenAI should develop a strategic communication plan to address public concerns and market reactions. This includes clear messaging about its mission, decisions, and future plans, which can help maintain public trust and investor confidence.
6. **Ethics and Safety Focus**: Given the transformative potential of AI, OpenAI should continue to prioritize AI ethics and safety. This commitment should be evident in its operations, research directions, and partnerships.
7. **Long-Term Vision Alignment**: Finally, aligning the long-term vision of OpenAI with the interests of its stakeholders, including the global community it aims to serve, is essential. This involves balancing profitability with ethical considerations and societal impact.
By implementing these strategies, OpenAI can navigate its current challenges while staying true to its mission of developing AI in a safe and beneficial manner.
It's cool to see that the moment-to-moment dynamics of the world's most cutting-edge company are no different than a pack of 5-year-olds arguing at lunch over a few-day period.
Ah, this is the straw that breaks the ethics/responsible-AI camel’s back. When Sam comes back he will turn it into a regular company and maximize profits.
> OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
At least two of these people are not like the others and deserve to be fired and disgraced for this shitshow regardless of how it pans out.
Ok so why don't Sam Altman and his buddies team up with John Carmack, who is fully invested in AGI now and has a proven legacy of getting shit done?
Instead of ousting Sam A and Greg B, if Ilya really had deep concerns, he should have quit and built his own AGI-dedicated research company. His prestige surely would have won him funding.
Like how Hinton left Google so he could speak freely.
IMO inventing AGI is more powerful than nuclear energy. It would be very stupid of humanity to release it out in the wild.
LLMs are a great tool and nowhere near AGI.
I’m of the belief that alignment of AGI is impossible. It’s like asking us to align with lions. Once we compete for the same resources, we lose.
> If Ilya really had deep concerns, he should have quit and built his own AGI dedicated research company.
...
You should look up some history here.
Exactly what you say has already happened and OpenAI is the dedicated research company you are referring to.
He originally left Google DeepMind, I believe.
> I’m of the belief that alignment of AGI is impossible.
I don't think most people in this space are operating based on beliefs. If there is even a 10% chance that alignment is possible, it is probably still worth pursuing.
This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.
The board was Altman's boss; this is pretty much their only job. Altman knew this and most likely ignored any questions or concerns of theirs, thinking he was the unfireable superstar.
Imagine if your boss fired you, and your response was: I’ll come back if you quit! Yeah, no. People might confuse his status with that of actual CEO-shareholders like Zuck, Bezos, or Musk. But Altman is just another employee.
The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired. So mostly this just seems like an egomaniac employee who thinks he is the company (while appropriating the work of some really really smart data scientists)
Also, the board made its decision without the board's chairman, Greg Brockman, involved. And it looks like the board didn't follow its own internal rules about meetings.
Also, the investors were not informed. It’s insane that their largest investor and partner, MSFT, was blindsided by this. Anyone with even a little business sense knows this.
Non-profits still earn money recorded as net assets. They do not retain earnings at the end of the accounting period to store in shareholder’s equity because there are no shareholders that own the non-profit.
The point still stands, the board does not have "investors". Microsoft knowingly donated to the for profit entity of the non profit. Open AI isn't a PBC, it's a 501c non profit. So the board can act that way, without the knowledge of the investors.
That being said, this is a case of biting the hand that feeds you. An equivalent would be a nonprofit humiliating its biggest donor. The donor can always walk away, withholding her future donations, but whatever she has already donated stays with the nonprofit.
Watching what? A 501c3 being publicly pressured to make key governance decisions for the commercial benefits of investors in the 501c3's for-profit indirect subsidiary rather than the board's good-faith interpretation of its charitable purpose?
Nonprofit boards literally don't answer to anyone. Company boards are responsible to shareholders, but nonprofits only have donors and that's not ownership. You can say you're not donating again, or all the employees can quit, but neither of those are legal issues for the board members. It just makes them look stupid.
Though, I'm not sure if you can legally force a donation refund, or what it counts as if you cancel a billion in Azure credits.
Can you please stop posting in the flamewar style? You've done it multiple times in this thread, it's not what we want here (and destroys what we do want), and you can make your substantive points without it.
From a legal point of view, they are correct: non-profits do not have shares, so they do not have shareholders, and it's the board that wields ultimate authority.
Of course in real life non-profits are funded by donors, who thus have a great deal of practical power over what happens, and you can also do things like sue the non-profit for deviating from its mission.
If they are willing to let things end (because they see their mission as not being fulfillable etc.) and don't care much otherwise then that statement isn't far off. I really don't know how ideological this particular situation is, though.
Are you going to claim something is specifically going to happen to them or are you just going to post about "what's really going on man"?
Satya can assassinate them if he wants, but they're not in legal trouble. (Although, considering how much Helen Toner's resume reads like a CIA agent's, he probably shouldn't try it.)
I think they could survive as a useless AI safety lab too. After all, there's already been one for decades called MIRI and it has enough donors to stay open.
These board members are either not serious people, or they let their perceived power over a groundbreaking company go to their collective heads. Either way, it has been quite the misplayed checkers move.
The fact that HN engineering grunts have no idea what table stakes are vs titles and authority shows how they aren’t cut out for executive brinksmanship.
What does any of that have to do with whether it's a "coup" or not? "Coup" has an implication of illegitimacy, but by all accounts the board acted within its authority. It doesn't matter if it was an ill-advised action or if Altman has more leverage here.
There's a distinction between what's technically allowed and what's politically allowed. The board has every right to vote Sam and Greg off the island with 4/6 voting in favor. That doesn't mean they won't see resistance to their decision on other fronts, and especially those where Sam and Greg have enough soft power that the rest of the board would be obviously inadvised to contradict them. If the entire media apparatus is on their side, for example (soft power), then the rest of the board needs to consider that before making a decision that they're technically empowered to make (hard power).
IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.
Anything in between these two options is hedging their bets and will lead to them getting eaten alive.
You mean Microsoft, who hasn't actually paid them the money they said they will eventually, and who can change their Azure billing arrangement at any time?
Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).
The objective functions might be different enough and then there is nothing the hard power can do to get what it wants from OpenAI. Non-profit might consider winddown more in line with mission than something else, for example.
The threat to the hard power is that a new company emerges to compete with them, and it's led by the same people they just fired.
If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.
As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.
That's certainly a purist way of looking at it, and I don't disagree that it's the most aligned with their charter. But it also seems practically ineffective, even - no, especially - when considered within the context of that charter. Because by shutting it down (or sabotaging it), they're not just making a decision about their own technology; they're also yielding control of it to groups that are not beholden to the same constraints.
Given that their control over the technology at large is limited anyway, they are already (somewhat?) ineffective, I would think. Not sure what a really good and attainable position for them would like be in that respect.
Yeah, agreed. But that's also why I feel the whole moral sanctimony is a pointless pursuit in the first place. The tech is coming, from somewhere, whether you like it or not. Never in history has a technological revolution been stopped.
They acted within their authority but possibly without the support of those that asked them to join in the first place and possibly without sufficient grounds and definitely in a way that wasn't in the interest of OpenAI as far as the story is known today.
Legitimacy is derived from power, not from abstraction. Sorry, that’s the reality. Rules are an abstraction. Power lets you do whatever you want, including making new rules.
Everyone on that board is financially independent and can do whatever they want. If Sam & Ilya can't get along that basically means there are 2 companies where previously there was OpenAI. (4 if you add Google and Anthropic into the mix; remember that OpenAI was founded because Ilya left Google, and then Anthropic was founded when a bunch of top OpenAI researchers left and started their own company).
Ultimately this is good for competition and the gen-AI ecosystem, even if it's catastrophic for OpenAI.
I'm curious what you're inferring to be "the way they want it to"?
From my read, Ilya's goal is to not work with Sam anymore, and relatedly, to focus OpenAI on more pure AGI research without needing to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that that may mean less investment from Microsoft etc, less commercial success, and a narrower reach and impact. But that's the point.
Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company that poaches some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.
Both board members are smart enough that they will probably get what they want, they just want different things.
You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters. They're Michael Douglas 'greed is good' type of people wrapped up in a spiffy technological veneer.
Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.
> You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters
I'm not sure that's actually true anymore. Look at any story about "growth", and you'll see plenty of skeptical comments. I'd say the audience has skewed pretty far from all the VC stuff.
That’s unfair. The issue is poor governance. Why would anybody outside OpenAI care how much money they make? The fact is a lot of people now rely in one way or another on OpenAI’s services. Arbitrary and capricious decisions affect them.
My best guess is they turn off the commercial operations that are costing them the most money (And that they didn't want Sam to push in the first place) and pump up the prices on the ones they can actually earn a profit from and then try to coast for awhile.
Or they'll do something hilarious like sell VCs on a world wide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.
No, to continue the poker metaphors, that's taking your chips and going home, perhaps to create your own casino with blackjack and hookers (h/t to Bender).
"Table stakes" simply means having enough money to sit at the table and play, nothing more. "Having a big pile of GPUs is table stakes to contest in the AI market."
Are you saying that Sam has the ability to generate new contracts when you say take contracts with him, or do you think that somehow the existing contracts with Microsoft and other investors are tied to where he is?
But it isn't a business at heart from its structure. Commercially I agree that Sam's position is superior but purely focusing on the non-profit's mission (not even the non-profit itself) - not so sure.
Alignment with the 2nd most valuable company on the planet.
I could go on. I strongly dislike the guy, but you need to recognize table stakes even in your enemy. Or you’ll be like Ilya: a naive fool who is going to get wrecked thinking that doing the “right” thing in his own mind automatically means you win.
From everything we can see Ilya appears to be a true believer.
A true believer is going to act along the axis of their beliefs even if it ultimately results in failure. That doesn't necessarily make them naive or foolish; many times they will fully understand that their actions have little or no chance of success. They've just prioritized a different value than you.
That's fair, but by messing this up OpenAI may well end up without any oversight at all. Which isn't the optimum outcome by a long shot and that's what you get for going off half-cocked about a thing like this.
Assuming he sees OpenAI spinning out of control either way, it's probably better to have tried to change it and, if it fails, to at least not be part of the problem.
I think that could have been done more graciously. And there are other drivers still on the table other than good governance, a good old palace revolution in disguise is definitely not ruled out at this point.
I’ve been in his shoes at a smaller level. Once the company believes they have a stable sellable product they have no interest in any new breakthroughs. His table stakes are gone and Microsoft probably believes gpt4 turbo will be billable for years to come.
I think it’s even simpler than that. They’re engineers, they’re used to reading the “specifications” and then assume that’s how things work. In this case they read whatever a 501(c) non profit is and now can claim they understand exactly how it works because they read the spec.
I am probably close to an engineer like you describe them.
But managers you describe are, to repeat your words, so tiresomely dumb, that they think they could have a sustainable business without such engineers.
I have left seemingly successful tech companies, and 6 months to 2 years later they were bankrupt. I won't claim it was just because of me leaving, but because enough engineers got fed up with management and their originally great product deteriorated quickly.
The advantage for me as an engineer is that I value a creative working environment over getting rich and powerful.
What you are describing is real but the difference is between simple disaggregated power vs singular focused power. Over and over singular and focused tends to win in an established game where money is already being made.
If you are pre revenue you are all just larping top to bottom. Which is fine I’m there now.
Then the board essentially owns the company, if I understand your comment correctly. So it’s like if Yann LeCun says he’ll come back to Meta once Zuck sells all his shares
Sort of. The board also answers to two other groups:
• Employees
• Donors or whoever is paying the bills
In this case, the threat appears to be that employees will leave and the primary partners paying the bills will leave. If this means the non-profit can no longer achieve its mission, the board has failed.
It's possible that the failure occurred at some point in the past. If the board truly believes keeping Altman is inherently incompatible with achieving their charter, they have to let him go. The fallout from that potentially kills the company, but a small chance of achieving the charter is better than no chance.
If that's the case, then the failing would be in letting it get to this point in the first place.
Sam and Greg were part of the board, apparently, so definitely a coup (we can debate for hours whether it's a coup or not, but come on: imagine the scene being played in a movie and not being played as a coup).
Another way to think about these is that companies are basically small countries.
First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
Can you explain the third point a little more? If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?
I'm aware that Altman has made the same claim (close to zero equity) as you are making, and I don't see any reason why either of you would not be truthful, but it also has always just seemed very odd.
> If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?
Not everything is about money. He likely just likes the idea of making AI.
No one governs the board of a nonprofit, exactly. In this case, it sounds like Sam and his allies are trying to exert pressure on the board by threatening crippling resignations. This puts the board in the position of choosing between pursuing its mission without certain employees, or pursuing business plans that do not align as well with its mission, but with the full complement of employees.
It's a tricky situation (and this is just with a basic/possibly-incorrect understanding of what is going on). I'm sure it's much more complicated in reality.
In a 501(c)(3) I think the board is the top. From what I understand, they’re usually funded through grants that have requirements that need to be met for each disbursement. If you fail, the money stops, but there’s no “firing” the board; they just stop getting funds.
{the entity} of which they are the board does not have shareholders and unless there's something funky in the charter: there's no mechanism to fire members of the board (other than board action). The shareholders of the llc aren't relevant in this context, as they definitely can't fire the nonprofit's board (the whole point of their weird structuring). https://openai.com/our-structure
Pedantic, but: LLCs have "members", not "shareholders". They are similar, but not identical relations (just as LLC members are similar to, but different from, the partners in a partnership).
> From what I understand, the for-profit OpenAI is owned and governed by the non-profit OpenAI.
That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)
This is a board that has been threatened with suit, individually and collectively, by some of the most well-known names in IT. They're probably wondering how they can get out of this with their reputations and egos in one piece. You may have the legal authority to do something, but if you don't have the support (or worse: if you haven't checked that you have the support) then it's not exactly the best move.
It's also worth remembering that Sam Altman is also seeking to get out of this with his reputation and ego in one piece. Definitely in his interest to be able to portray the board as coming crawling back to him after kicking him out the door, even if that is, well, less than candid communication of what has happened.
And the evidence that we've seen so far doesn't refute the idea that the board isn't seriously considering taking him back on. The statements we've seen are entirely consistent with "there was a petition to bring him back sent to the board and nothing happened after that."
I read somewhere that the CTO wasn’t at all the best pick for interim CEO, but they couldn’t find anyone else that was in their camp in a hurry. Nothing about this looks like they did their homework and thought this through. If they _had_ done those things, MSFT wouldn’t be as pissed as they are right now.
They are a small board, and Microsoft has a very large number of lawyers.
I do not believe it is possible for them to have thought this through. I believe they'll have read the governing documents, and even had some good lawyers read them, but no governance structure is totally unambiguous.
Something I'm immensely curious about is whether they even considered that their opposition might look for ways to make them _criminally_ liable.
I don't see any ways in which they could be held criminally liable for just voting their conscience, and good luck verifying that. So that angle is not open for exploration as far as I can see. But what would scare the wits out of any board members is to have say the full power of Microsoft's legal department going after them for the perceived damages with respect to either Microsoft's stock price (a publicly traded company, no less) or the value of Microsoft's holdings in OpenAI.
And, incidentally, if there is a criminal angle, that's probably the only place you might find it, and it would take the SEC to bring suit: they'd have to prove that one or more of the board members profited from this move privately, or that someone in their close circle did. Hm. So maybe there is such an angle after all. Even threatening that might be enough to get them to fold; if any of them or their extended family sold any Microsoft stock prior to the announcement, they'd be fairly easy to intimidate.
> But what would scare the wits out of any board members
Don't you think the board must have sought legal counsel before acting? It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.
I don't think OpenAI board has any responsibility to care for Microsoft's stock price. Such arguments won't hold water in a court of law. And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
> Don't you think the board must have sought legal counsel before acting?
They probably should have, but they may have not.
> It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.
It is. But having the legal right to do something and having it stand unopposed are two different things. When one of the affected parties is the proverbial 900-pound gorilla you tread more than carefully, and if you do not you can expect some backlash. Possibly a lot of backlash.
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
Not formally, no. But that isn't what matters.
> Such arguments won't hold water in a court of law.
I'll withhold comment on that until I've seen the ruling. But what does and does not hold water in a court of law isn't something to bet on unless a case is extremely clear cut. Plenty of court cases have been won because someone managed to convince a judge of something that you and I may think should not have happened.
> And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
The idea here is that Microsoft's - immense - legal department has the resources to test your case to destruction if it isn't iron-clad. And it may well not be. Regardless, suing the board members individually is probably threat enough to get them to back down instantly.
There is a difference between "my lawyers advised me that it was probably ok" and "Microsoft's legal team spent 100,000 billable hours poring over case law to demonstrate that it was not, in fact, ok."
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
They control an entity that accepted $10B from Microsoft. Someone signed that term sheet.
For such a basic action as a board exercising one of the most fundamental of its rights, I don't think it's necessary to spend 100K hours. And I don't think the board consulted random lawyers off the street.
> Someone signed that term sheet.
Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?
I agree that the act of voting itself is too squishy/personal, but the things that led up to it and their handling afterwards?
My curiosity stems from whether the board was involved in signing the contract for Microsoft's investment in the for-profit entity, and where the state might set the bar for fraud or similar crimes. How was the vote organized? Did any of them put anything in writing suggesting they did not intend to honor all of the terms of the agreement? Did the manner in which they conducted this business rise to the level of being criminally negligent in their fiduciary duty?
I feel like there are a lot of exciting possibilities for criminality here that have little to do with the vote itself.
I've had a case in Germany that for an outsider may have looked like we should have lost it. In a nutshell: we took on a joint-venture to develop a market that we weren't particularly interested in, 51:49 to their advantage. The day after the ink was dried and we had set up their development track to create the product they took the source code and sold it to another party.
We had the whole thing - including the JV - reversed in court in spite of them having the legal right to do all this. The reason: the judge was sympathetic to the argument that the JV was apparently a sham created just to gain access to our code. The counterparty was admonished, a notary public that had failed in their duty to act as an independent party got the most thorough ear washing that I've ever seen in a court, and we got awarded damages + legal fees.
What is legal, what you can do and what will stand up are not always the same thing. Intent matters. And what also really matters is what OpenAI's bylaws really say, and to what extent the non-profit's board members exercised their duty to protect the interests of the parties who weren't consulted and who did not get to vote. This so-called duty of care - here in NL, not sure what the American term is - can weigh quite heavily.
It could be. But I've yet to see any evidence of that. More likely it wasn't, because short of a massive skeleton in a cupboard in Sam Altman's apartment this was mishandled, and by now I would have expected that to come out.
What evidence were you expecting to find? The board said that Sam wasn't candid with his communication. I've yet to see any evidence that he was candid. Unless the communication has been recorded, and somehow leaks, there won't be any evidence that we can see.
Except he is not. He was a cofounder of the company and was on the board. Your metaphor doesn't make any sense -- this is like if your boss fired you but also you were part of your boss and your cofounder who is on your side was the chair of your boss.
I think the allure for Altman though would be that OpenAI already has all the pieces in place.
Going off and starting his own thing would be great, but it would be at least a year to get product out, even if he had all the same players making it. And that's just to catch up to current tech
That ship has sailed for him if he's not on the OpenAI train out of town. He'd be like a third-party political candidate if he tried another run at it, building his own team and product from scratch. Lots of other great things to do for sure, but probably not a similar supercharged role. It just wouldn't be the same - OpenAI is clearly the front runner right now.
It's still tough. They won't have the data used to train the model, which is an incredibly important part. There's a lot of existing competitors in this space with headstarts. There's no guarantee that the entire research organization will follow Sam even if they leave OpenAI - they're going to have a lot of offers and opportunities at other companies that have an advantage.
It's also not clear that this is a realistic scenario - Ilya is the real deal, and there's likely plenty of people that believe in him over Altman.
Of course, the company has also expanded massively under Altman in a more commercial environment, so there are probably quite a few people that believe in Altman over him.
I doubt either side ends up with the entire research organization. I think a very real possibility is both sides end up with less than half of what OpenAI had Friday morning.
Thing is, they can team up with people that probably have that data already. Say Microsoft switches teams to a hypothetical SamCo AI: most of the internet has already been indexed by Bing, and wants to be indexed by Bing, as it's the number 2 search engine. That means they either have cached, or have access to, pretty much everything SamCo could want to feed said AI. Reddit or Twitter, for example, would never cut Bing off, as it would cut off users. Microsoft could, though, block OpenAI from further access to things like GitHub and LinkedIn.
Generally speaking for a base model this isn't nearly as important as it sounds because the specifics of the data don't matter as long as there's enough of it. You may remember this from high school as the central limit theorem.
For specific things like new words and facts this does matter, but I think they're not in real trouble as long as Wikipedia stays up.
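A quick sketch of the averaging intuition being invoked above (not OpenAI's actual training setup, just the statistics): sample means from two very different distributions both concentrate tightly around their population means once the sample is large, which is the loose sense in which "the specifics of the data don't matter as long as there's enough of it."

```python
import random
import statistics

random.seed(0)

def sample_mean(draw, n=10_000):
    """Mean of n draws from the given sampler."""
    return statistics.fmean(draw() for _ in range(n))

# Two very different shapes, same population mean of 0.5:
uniform_mean = sample_mean(lambda: random.uniform(0, 1))       # flat distribution
exponential_mean = sample_mean(lambda: random.expovariate(2))  # heavily skewed

print(uniform_mean, exponential_mean)  # both land near 0.5
```

Of course the analogy only goes so far: for a language model the "specifics" include facts and vocabulary, which is exactly the caveat the comment makes next.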
Sounds like wishful thinking on the part of the author's source.
If I worked there, I would keep my job and see how things shake out. If I don’t like it, then I start looking. What I don’t do is risk my well being to take sides in a war between people way richer than me.
That makes good sense, and I think all those that are not already independently wealthy - except personal friends of either Sam or of high-level remainers - are going to do something quite similar. It's just too fluid a situation to make good decisions, especially if your livelihood is at stake; better not to make decisions that can't be easily undone.
You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.
I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.
OpenAI isn’t a normal startup. It was founded as a research-focused not-for-profit. That 300k+ base comp isn’t what I’d consider “risky” either. Career-wise it never seemed risky, as some of the field's top AI researchers were there almost from day one.
Maybe much longer. The mass of infrastructure and data housed at OpenAI will be difficult to reproduce from scratch.
Especially considering OpenAI has boosted the value of the masses of data floating around the internet. Getting access to all that juicy data is going to come at a high cost for data hungry LLM manufacturers from here on out.
Except building something the second time around is often quicker and with the current gains of hardware capabilities and interest in the space… maybe it wouldn’t be a year behind.
There are also a ton of ~first mover advantages you can't benefit from, be they of untapped markets for demand or the exploitation of underpriced labor, capital, or IP. If Sam started a new company he would not get as good a deal on hardware or labor, he would get much more scrutiny on his training sets, and he would have to compete against both OpenAI and its competitors.
I agree. Anthropic and Mistral are good examples. Both companies have key people from OpenAI and they fairly quickly developed good models, but I don’t think either are thinking too hard about real AGI, but instead are trying to create useful and ethical tools.
I'm curious about how the messaging and zeitgeist will evolve. Over the past few months, the sentiment I encountered most frequently is that OpenAI's lead is insurmountable and that it basically has a monopoly on genAI - or even AI in general. While I disagreed with this sentiment because there's no reason to believe LLMs are the final word in AI, I think there will be many more people going back on prior messaging for partisan or astroturfing reasons and saying OpenAI is nothing special.
> The playbook, a source told Forbes, would be straightforward: make OpenAI’s new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors.

https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
Hard to reconcile with people who would do something like that.
Differences in interpretation will happen, but the YC rule of thumb that founder drama is too often a problem continues to hold, and it shouldn't be a surprise.
I'm not sure what rule the OP is referencing but otherwise reasonably successful start-ups often fail because founders clash on key parts of their vision (or behave in toxic ways towards each other or to other people in the company). This can very handily wreck your company or at a minimum substantially damage it.
> This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.
I think he staged his coup long ago, when he took control of OpenAI, making it “CloseAI” to make himself richer by effectively selling it to Microsoft. This is the people who believe in the original charter fighting back.
> The shareholders can fire the board, but that’s not what he’s asking for.
There are no shareholders in a non-profit, if I’m right. The board effectively answers to no one. It’s a take-it-or-leave-it kind of deal: if you don’t believe in OpenAI’s mission as stated in their charter, don’t engage with them.
The nature of power relationships at this level is not strictly hierarchical, there's a vast wealth differential here, and Sam is a lot more powerful than any of the board members in many, many ways. Everybody who has large amounts of money at stake in this enterprise is going to back Altman. The board has no one.
Not to mention, I’d wager that Altman is a lot higher on the sociopathy scale as well. The board members sound like somewhat normal people trying to stick to their charter and perhaps to a genuine belief in the mission. Altman, not so much.
In a coup, a leader with the support of the people is ousted by force. If we believe the reports that there will be mass resignations, that seems to indicate the founders enjoy the “support of the people.”
Of course you can protest, “but in this country the constitution says that the generals can sack the president anytime they deem it necessary, so not a coup.” Yes, but it’s just a metaphor, so no one expects it to perfectly reflect reality (that’s what reality is for).
I feel we’ll know way more next week, but whatever the justifications of the board, it seems unlikely that OpenAI can succeed if the board “rules with an iron fist.” Leadership needs the support of employees and financial backers.
> In a coup, a leader with the support of the people is ousted by force.
Not necessarily. An unpopular leader can be even easier to overthrow, because the faction planning the coup has a higher chance of gaining popular support afterward. Or at least they can expect less resistance.
Of course, in reality, political and/or military leaders are often woefully bad at estimating how many people actually support them.
Not to mention 100 hours not even being logistically possible. Working 100 hours a week with just 5 hours of sleep per day leaves only about 4 hours in the day for the other parts of living and getting from A to B. Anyone claiming that, much less for an extended period of time, is either lying or in servitude against their will.
My impression is that people don't measure the time they work, but judge it by their impression. First, they think that they work for, let's say, 40 hours per week. They don't consider how much meals, coffee breaks, mental breaks, off-topic office discussions, checking social media, visiting restroom take. Second, when they work overtime, they get tired and overestimate the amount of time they worked. 10 hours of overtime probably feels like 20 hours.
100 hours is equal to two and a half full-time jobs. People believing that number should consider how it would feel to go to a second job after their day ends (the second full-time job) and to work the weekends as well (the half-time one).
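The arithmetic behind the claims above is easy to check (all numbers are the thread's assumptions: 100 working hours per week, 5 hours of sleep per night, a 40-hour full-time job):

```python
# Back-of-the-envelope check of the 100-hour-week claim.
HOURS_PER_WEEK = 7 * 24   # 168 hours in a week
work = 100                # claimed working hours per week
sleep = 7 * 5             # 5 hours of sleep per night

leftover = HOURS_PER_WEEK - work - sleep
print(leftover, round(leftover / 7, 1))  # 33 hours/week, ~4.7 hours/day

full_time = 40
print(work / full_time)   # 2.5 full-time jobs
```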
Under ideal conditions, someone might be doing it. But, people shouldn't be throwing around these numbers without any time-tracking evidence.
Agreed. And even if someone is doing it, it isn't something to be proud of. It shows a major gap in management and process and potentially reveals the business to be unviable.
Sam and Greg were trying to stage a coup, the rest of the board got wind of it and successfully countered in time (got to them first).
What they didn't expect is that a bunch of their own technical staff would be so loyal to Sam (or at least so prone to the cult of personality). Now they're caught in a Catch-22.
If Altman takes all of the good engineers and researchers with him, OpenAI is no more.
So the board can be the boss of nothing, sure, without the ability to do anything - leading the organisation, raising funds, and so on.
Perhaps they could hire someone to replace Sam Altman, but that would require a much larger company whose employees are indifferent to the leadership, like EA or something.
> Imagine if your boss fired you - and your response was - I’ll come back if you quit! Yeah, no. People might confuse status with those of actual ceo shareholders like zuck, bezos, or musk. But Altman is just another employee
Think you're missing the big picture here. Sam Altman isn't an "easily replaceable employee" especially given his fundraising skills.
Brilliant as sama is, a star fundraiser is more replaceable than a top engineer.
One can imagine Microsoft, for example, swooping in and acquiring a larger share of the for-profit entity (and an actual seat on the board, dammit) for more billions, eliminating the need for any fundraising for the foreseeable future.
If a lot of top engineers follow sama out, now that's a real problem.
There’s probably a lot of behind-the-scenes drama and phone calls occurring among their top researchers. I’d guess Sam Altman is calling them and trying to gain support for a counter-coup, with things like this article giving the appearance that Sam et al. have already won, etc. If the board and the new CEO aren’t doing that too, they could end up losing.
Making no effort to obtain a grasp on the basic facts of the situation doesn’t seem to stop people here from posting embarrassing rants.
Altman was on the board. He was not “just another employee.” Brockman was also on the board, and was removed. It was a 4 on 2 power play and the 2 removed were ambushed.
You also don’t seem to realize that this is happening in the nonprofit entity and there are no shareholders to fire the board. I thought OpenAI’s weird structure was famous (infamous?) in tech, how did you miss it?
To add to that, because it may not be clear to everyone: if they leave, the knowledge will sprout other companies that will be able to compete directly with OpenAI with different flavors. If this happens, OpenAI may very well be finished, and that may very well be the reason why they are trying desperately to save what they can. Microsoft has a lot to lose here too, both in cloud income and because they would lose the enormous tactical advantage they have had so far.
How much would you like to bet that the Murati character is in a love triangle or something like that when the movie comes out? (The character probably has no bearing on reality.)
Altman was in this from the beginning.
They all orchestrated it to curb MSFT somehow.
Like a loophole that screws MSFT's control but also keeps the GPU deal.
#speculation
Evidence:
a. It's OpenAI, ffs; they already have RL bots advanced enough to create 999+ IQ strategies.
b. Google Meet.
Theory:
They were training GPT-5, and had some time to clean some under-the-table dust.
What I'm surprised about in this whole discussion is how little people are actually looking at Altman's Worldcoin venture.
It is incredibly shady, and has the same kind of sci-fi marketing bullshit vibe going on as Elon Musk's hyperloop and Mars missions, and, come to think of it, OpenAI's marketing.
Altman+OpenAI are a hype machine that's riding a bubble to get rich enough through any scheme to be able to turn around and milk users for data, just like facebook and google.
The only difference is, he gets to twist the focus towards this sci-fi AGI idea, which works like the distraction element of a magic trick. The media loves to talk about AGI and regulating Skynet, because it's a story that gets the imagination going - certainly much more interesting than stories about paying people 2 dollars an hour to sift through offensive and traumatizing content in the data used to train a language model, to feed to the autocomplete parrot.
I think it's good that he got kicked off the position as CEO, but that does not suddenly make OpenAI a good actor. Some other jerk will take his spot.
Purely speculating, but could there be some sort of 4D chess move going on here? Look at the timeline:
1) The board puts out a press release saying Sam was ousted for not being candid
2) Internally a memo from the COO circulates saying that is not true
3) Greg and other senior folks quit
4) Now he is in a negotiating position to return for a) being fired on a false premise and b) bringing back all the top talent
If someone wanted to restructure the board of this company, they could have fed the other board members false information to put Sam in this negotiating position. It's also strange that a bunch of billionaires voiced support for Sam immediately after the firing without knowing any details.
Another great example that even huge multi-billion-dollar companies are led by people. What a mess.
However this plays out, this is a big wake up call for everyone who is currently dependent on OpenAI. More changes will be needed to restore trust. It's going to be messy for a while. For a company that has executed pretty much perfectly until now it's so surprising how they just ruined their reputation like this.
It's a play to warm the public up to the idea of a more algorithmically managed organization. In Act I, the board jesters stumble around making bad decisions and wavering. In Act II, the embarrassed board welcomes the first AI oversight into the company constitution. ...
Just as software devs are allergic to voting machines which run software, I would imagine that ML companies will be the last to adopt ML for their own high-level decision making.
I called this yesterday. I said the board would be forced out under the pressure, and Sam would be back. It was obvious. Even if it is a not-for-profit company, it bows under investor pressure.
Clownshow. Holy shit, this is what happens when stupid people get put into positions of power and money and have no idea how things actually work. Fucking non-profits are the worst to deal with.
This just shows there is no way you can have a non profit board with a profit cap structure. The capitalists always will push through and "exert pressure" one way or another if they want their way. The non profit setup was a facade. And this has clearly showed it in the fallout. The board had every right to veto or replace Altman if they didn't feel they were prioritizing their mission.
Democracy could have prevented this shitshow. The board could have asked the employees if they are ok with the direction of the company and then simply act according to the outcome of the vote. But companies are always run like dictatorships, leading to mostly terrible decisions disregarding the wisdom of the crowd.
While you obviously seem to care more about business experience than other things, it seems odd that you would discount Adam D'Angelo even from your point of argument - The former CTO/VP of Engineering for Facebook and founding CEO of Quora isn't exactly a no-name dope.
I mean the other two. And no offense to other no-name dopes. Of course, I’m being context-sensitive: in the scheme of this scenario, they’re nothing but votes up for manipulation. In regular life, they’re important in their respective fields. Yada, yada.
Yes, clearly all categories of loathsome lower castes, worthy of your disdain.
VS what, a Stanford dropout who made buds with Paul Graham? That's better and more respectable because he's cooler and connected with YC/VC hipness, right?
I'm responding to a post by a self-hating engineer that seems to think that this specific debacle was caused by engineers. No, it was caused by non-engineers. And while yes, I do have disdain for people that spend thousands to study skills that contribute nothing to their fellow man or woman, I don't think scientists or artists are those things. (MBAs and ethicists definitely are masturbatory, self-important bullshit.)
The real reason I disdain the majority of the board of OpenAI is that there are clearly 3 people on the board that have accomplished nothing and are clear trust-fund babies.
I suspect the root-commenter was speaking about Ilya S, and it's a bit quibbling to classify him as "scientist" vs "engineer." He clearly crosses the line between both.
But yes, the comment was a bit unhinged.
Don't really see the difference between an MBA and whatever it is that Altman does, though, other than credentials.
Finally, that you think that ethicist (or the study of ethics) is masturbatory, especially in the context of an organization that has as its explicit mission to hoist AGI onto the world -- tells me quite a bit about your own... ethics.
World could do with a lot more ethicists and a lot less MBAs.
You know, I always found Mike Judge to probably be the most prescient of all of the science fiction authors.
In Office Space, Idiocracy, and, most relevant here, Silicon Valley, he accurately and very precisely not only forecasts but deconstructs the reasoning and the vapid lack of core philosophy behind each of the real-life narratives he’s parodying.
That serious people still consider Silicon Valley as some kind of thing to aspire to is horrifying. This despite repeated examples of predictably base incompetence, lack of maturity and quite frankly avaricious opportunism as the kernel on which SV lies.
When Michael Lewis wrote the book about Wall Street, 'Liar's Poker', he got calls from parents of college-bound kids, not about how to avoid Wall Street, but about how to better prepare for getting in to Wall Street...
I recognize the brilliance of SV (the show) but couldn't make it past a few episodes because it was just too painfully close to home.
Now that I work for a non-tech, non-SV company (agricultural equip manufacturer in fact), and have some distance from the real world SV, maybe I could watch it without my skin crawling.
I found the episodes legitimately stressful to watch. "Too close to home" is exactly right. I grimaced through them though. And nowadays I do enjoy watching the "best of" clips on YouTube, or linking my coworkers to whatever scene from the show is relevant to the current meeting...
Or real life is just messy but most people don’t know it because they willfully ignore it and just keep their head down living their boring middle class life.
Judge is no prophet; he just communicated what the rest of us already knew.
No, that’s how I know you’ve never been an executive. It’s constantly messy and personal AF. It’s just the workers below who don’t see it and cash in their checks. You live a stable life because you are shielded from the chaos to keep you productive.
What you're not realizing is that it's like this everywhere. It's not exclusive to SV. Politics is part of human life. Adults are just kids with money.
And once upon a time Paris was the Prime Meridian and the British Empire was the dominant super power that spanned the globe.
It's worth asking how rapidly, say, a global financial hub can transfer from one location to another, how quickly a centre of excellence can transfer, how many years it takes for the world's best space scientists to move out of Germany, etc.
Silicon Valley is really a great show. Working in tech, I often think of it, especially whenever there's a new great idea from management that nobody saw coming and it feels like the beginning of a new episode.
Ah yes, it is well known that no other individual sitting on the board of directors of any US company has ever made sudden and drastic decisions ever before. Totally.
I disagree with what he did in this situation. But it's complete bullshit to make this about his ethnicity or to imply he had anything but good intentions here. By all accounts, he did what he felt was right for the safety of the world, even if I think it was misguided.
A person who doesn't care about privacy isn't likely to care about security. And it seems security was the main factor in what we're going through right now.
I can't know, but I assume this is the reason why they got rid of Sam. When I listened to him talking, I didn't believe he took security seriously. I assume his board colleagues also noticed this, based on him pushing for new releases and maybe other things.
Conspiracy time: Is it possible this was all orchestrated? He was fired, causing mistrust in the governance structure, which is part of the remaining open source portion of “open” ai. Then he gets rehired with significant governance changes, thus removing any last vestiges of real power of the open source portion of the company.
Even though the country was against you, you should come back and lead it to freedom.
If the entire board can be replaced, then Sam should come back. Even though he can build a new company tomorrow. A leader will come back to lead the best shot anyone has gotten in AI development so far.
This was a power grab and it did not work. Not coming back is letting four people derail history and slow the development of AGI.
What does it say about the seriousness of it all?
But of course, that's Silicon Valley baby.