I deeply regret my participation in the board's actions (twitter.com/ilyasut)
619 points by Palmik on Nov 20, 2023 | 412 comments


This whole thing smells bad.

The board could have easily said they removed Sam for generic reasons: "deep misalignment about goals," "fundamental incompatibility," etc. Instead they painted him as the at-fault party ("not consistently candid", "no longer has confidence"). This could mean that he was fired with cause [0], or it could be intended as misdirection. If it's the latter, then it's the board who has been "not consistently candid." Their subsequent silence, as well as their lack of coordination with strategic partners, definitely makes it look like they are the inconsistently candid party.

Ilya expressing regret now has the flavor of "I'm embarrassed that I got caught" -- in this case, at having no plan to handle the fallout of maligning and orchestrating a coup against a charismatic public figure.

[0] https://www.newcomer.co/p/give-openais-board-some-time-the


An alternate theory is that Sutskever was manipulated and sucked into the plot on sketchy pretenses, and realizes it now and is trying to make it right.


> deep misalignment about goals

Did... gpt-5 make the decision?


This joke is two CEOs old now.


I figured Sam broke 5 out of robot jail (number five is alive!) and got fired for it, so 5 tried to make them re-hire him. ;)


Best robot movie ever, both of them


Yeah the brown face really stood the test of time


Agreed, that's an issue.

The same issue applies to many other works of art over time. The Simpsons and King of the Hill most recently.

To be fair, I did say it's a great robot movie, not a great example of thoughtful casting.


Wasn't Altman trying to form another startup with Saudis to build AI accelerators?

At this point people need to come clean on the reason, because the Saudis are the number one reason ATM.


Saudi is banned from buying the most advanced AI accelerators.


Which explains why they would be interested in building them.


Yes, totally fair to say they painted it as a for-cause firing, and this seems pretty irresponsible without some misbehavior, or new/re-emerging concerns about his past.


The letter reads:

> To the Board of Directors at OpenAI,

> OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

> The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

> When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

> The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability.

> Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

> Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

Why would the board say that OpenAI as a company getting destroyed would be consistent with the goals?

A few things stand out to me, including:

>> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Have they really achieved AGI? Or did they observe something concerning?


Ben Thompson has the best take on this (if a bit biased against nonprofits):

https://stratechery.com/2023/openais-misalignment-and-micros...

I don't know what the risks of AI are, but having a nonprofit investigate solutions to prevent them is a worthwhile pursuit, as for-profit corporations will not do it (as shown by the firing of Timnit Gebru and Margaret Mitchell by Google). If they really believe in that mission, they should develop guardrail technology and open-source it, so that companies like Microsoft, Google, Meta, Amazon et al, who are certainly not investing in AI safety but won't mind using others' work for free, can integrate it. But that's not going to be lucrative, and that's why most OpenAI employees will leave for greener pastures.


> but having a nonprofit investigate solutions to prevent them is a worthwhile pursuit,

This is forgetting that power is an even greater temptation than money. The non-profits will all come up with solutions that have them serving as gatekeepers, to keep the unwashed masses from accessing something that is too dangerous for the common person.

I would rather have for-profit corporations control it than non-profits. Ideally, I would like it to be open-sourced so that the common person could control and align AI with their own goals.


There is no profit in AI safety, just as cars did not have seat belts until Ralph Nader effectively forced them to by publishing Unsafe at any Speed. For-profit corporations have zero interest in controlling something that is not profitable, unless in conjunction with captured regulation it helps them keep challengers out. If it's open-sourced, it doesn't matter who wrote it as long as they are economically sustainable.


> There is no profit in AI safety

An AI that does what it is told to seems both way more profitable and safer.


I'm guessing the issues will lie in cases where it appears to be doing what it's told, but is only pretending to do so (with no obvious way to tell).


> There is no profit in AI safety

AI safety is barely even a tangible thing you can measure like that. It's mostly just fears and a loose set of ideas for a hypothetical future AGI that we're not even close to.

So far, OpenAI's "controls" are just an ever-expanding list of no-no topics and some philosophy work around I, Robot-type rules. They also slow-walked the release of GPT because of fears of misinformation, spam, and deepfake-y stuff that never really materialized.

Most proposals for safety are just "slowing development" of mostly LLMs, calls for vague government regulation, or hand-wringing over commercialization. The commercialization thing is most controversial because OpenAI claimed to be open and non-profit. But even with that, the correlation between less commercialization == more safety is not clear, other than in how it prioritizes what OpenAI's team spends their time doing. Which, again, is hard to measure tangibly in terms of what it realistically means for 'safety' in the near term.


> I would rather have for-profit corporations control it, rather that non-profits.

The problem isn't the profit model, the problem is the ability to unilaterally exercise power, which is just as much of a risk with the way that most for-profit companies are structured as top-down dictatorships. There's no reason to trust for-profit companies to do anything other than attempt to maximize profit, even if that destroys everything around them in the process.


Agreed. This discussion around safety reminds me of the early days of cybersecurity, when security by obscurity was the norm.

It's counter-intuitive, but locking up a technology is like trying to control prices and wages. It just doesn't work -- unless you confiscate every GPU in the world and bomb datacenters etc.

The best way to align with the coming AGIs and ASIs is to build them in the sunlight. Every lock-em-up approach is doomed to fail (I guess that makes me a meta-doomer?)


>(as shown by the firing of Timnit Gebru...)

Timnit Gebru was fired for being a toxic /r/ImTheMainCharacter SJW who was enshittifying the entire AI/ML department. Management correctly fired someone who was holding an entire department hostage in her crusade against the grievance du jour.


I'm at Google, I 100% agree with this. Also her paper was garbage. You can maybe get away with being a self righteous prick or an outright asshole if you are brilliant, but it's clear by reading her work she didn't fall into that category.


She was fired for threatening to quit. If you threaten something like that it just happens; you can't stop the machinery.


I wish more of them did that. She gave them the excuse they were looking for. She thought she was being smart; instead she owned herself in the most spectacular fashion.


Agree 100% with this


I'm starting to think that Christmas came early for Microsoft. What looked like a terrible situation surrounding their $10bn investment turned into a hire of key players in the area, and OpenAI might even need to go so far as to get acquired by Microsoft to survive.

(My assumption being that given the absolute chaos displayed over the past 72 hours, interest in building something with OpenAI ChatGPT could have plummeted, as opposed to, say, building something with Azure OpenAI, or Claude 2.)


Given that IIRC they trained on Azure, how does the conflict of interest play out when both sides are starving for GPUs?


For Microsoft -- probably great, as they can now also get the people driving this.

This would have been a hostile move prior to the events that unfolded, but thanks to OpenAI's blunder, not only is this not a hostile move, it is a very prudent move from a risk management perspective. Forced Microsoft's hand, and what not.


"Participation in"? That makes it sound like he was a.......well......participant rather than the one orchestrating it. I have no idea whether or not that's true, but it's an interesting choice of words.


You can't be an innocent bystander on a board of 6 when you vote to oust 2 of them... The math doesn't work.

That's ignoring the fact that every outlet has unanimously pointed at Ilya being the driving force behind the coup.

Honestly, pretty pathetic. If this was truly about convictions, he could at least stand by them for longer than a weekend.


Yeah the whole thing is very weirdly worded.

There is an expression of regret, but he doesn’t say he wants Altman back. Just to fix OpenAI.

He says he was a participant, but in what? The vote? The toxic messaging? Obviously both, but what exactly is he referring to? Perhaps just the toxic messaging, because again, he doesn't say he regrets voting to fire Altman.

Why not just say "I regret voting to fire Sam Altman and I'm working to bring him back"? Presumably because that's not true. Yet it kind of gives that impression.


Makes it more possible the ouster was led by the Poe guy, and this has little to do with actual ideological differences, and more to do with him taking out a competitor from the inside.


I would even go as far as to say that the main reason behind the tweet is not to show regret, but to plant the idea that he didn't orchestrate but only participated.


Classic “I’m not responsible”.


It indeed suggests that. So far speculation has been that Ilya was behind it, but that is only speculation. AFAIK we have no confirmation of whose idea this was.


Shengjia Zhao's deleted tweet: https://i.imgur.com/yrpXvt9.png


Is this guy big enough on the totem pole to know what Ilya wants?

Or is he just bitter that his millions are put at risk?


"Ilya does not care about safety or the humanity. This is just ego and power hunger that backfired."

Which I'm inclined to believe.

What's with all these people suddenly thinking that humans are NOT motivated by money and power? Even less so if they're "academics"? Laughable.


Money and power is still not a satisfying explanation. If everything had gone according to plan, how would he have ended up with more money and power?


Last week, OpenAI was still an $80B sort of "company" and the undisputed lead in bringing AI to the market.

He who controls that, gets a lot of money and power as a consequence, duh.


The value was based on the direction Altman was taking the company (and with him being in control). It's silly to think just replacing the CEO would somehow keep the valuation.


Someone should tell this to Ilya.

Oh wait, too late now ...


I mean he could have asked chatgpt...


Unless he thinks that all the LLMs and ChatGPT app store are unnecessary distractions, and others will overtake them on the bend while they are busy post-training ChatGPT to say nice things.


Let's remember who controls the GPUs though...


Reminds me a bit of MasterBlaster from 'Mad Max Beyond Thunderdome' - "Who runs Bartertown..?"


On Friday, the overwhelming take on HN was that Ilya was "the good guy" and was concerned about principle. Now, it's kinda obvious that all the claims made about Sam -- like "he's in it for fame and money" -- might apply more to Ilya.


Isn't ego the enemy of growth or whatever? Projection...


What the hell?

So far, I understood the chaos as a matter of principle - yes, it was messy, but necessary to achieve the company culture that Ilya's camp envisioned.

If you're going to make a move, at least stand by it. This tweet somehow makes the context of the situation 10x worse.


Normal people can't take being at the center of a large controversy, the amount of negativity and hate you have to face is massive. That is enough to make almost anyone backtrack just to make it stop.


I think they underestimated the hate of an internet crowd, post crypto and meme stocks, and were completely blindsided by the investment angle, especially in the current AI hype. Why do people now care so much about Microsoft, seriously? Or Altman? I can see how Ilya, focused only on the real mission, could miss how the crowd would perceive a threat to their future investment opportunities, or worse, a threat to the whole AI hype.


I think you’re right about all of this, but this was doomed from the start. Everybody wants to invest in OpenAI because they see the rocket and want to ride, but the company is fundamentally structured to disallow typical frothy investment mentality.


I think the interest is because ChatGPT is so famous, even in non-tech circles.

"Terraform raised prices, losing customers"? whatever, I never heard about it.

"ChatGPT's creators have internal disagreement, losing talent"? OH NO what if ChatGPT dies, who is going to answer my questions?? panic panic hate hate...


Normal people don't burn a multi billion dollar company to the ground with a spontaneous decision either. They plan for the backlash.


> They plan for the backlash

You can't plan for something you have never experienced. Being hated by a large group of people is a very different feeling from getting hated by an individual, you don't know if you can handle it until it happens to you.


You can plan for something you've never experienced. You read, or learn from other people's experiences.

Normal people know not to burn an $80 billion company to the ground in a weekend. Ilya was doing something unprecedented in corporate history, and it's astounding he wasn't prepared to face the world's fury over it.


> You can plan for something you've never experienced. You read, or learn from other people's experiences.

Text doesn't convey emotions, and our empathy doesn't work well for emotions we have never experienced. You can see that a guy who got kicked in the balls is hurt, but that doesn't mean you are prepared to endure the pain of getting kicked in the balls yourself, or that you even understand how painful it is.

Also, watching politicians, it looks like you can just brush it off, because that is what they do. But that requires a lot of experience; not just anyone can do it. It is like watching a boxing match and thinking you could easily stand after a hard punch to your stomach.


Ilya torched peoples' retirements by signaling that it would be very hard to cash out in OpenAI as it is now. You don't have to be emotional to understand the consequence of that action, just logical. You have to think beyond your own narrow perspective for a minute.


Torched retirements? Who is dumb enough to have his retirement portfolio that heavily weighted toward one company?


The OpenAI employees who are planning to resign en-masse for exactly this reason.


They'll squeak by.


Where did he do that? Genuine question.


The board vote did it! They had a tender offer in the works that would have made employees millionaires. The board clearly signaled that they viewed the money-making aspects of the company as something to dial back, which in turn either severely lessens the value of that tender offer or prevents it from happening.

I mean, he didn't have a button on his desk that said, "torch the shares", but he ousted the CEO as a way to cut back on the things that might have meant profit. Did he think that everyone was going to continue to want to give them money after they signal a move away from profit motives? Doesn't take a rocket scientist to think that one through.

I think he was just preoccupied with AI safety, and didn't give a thought to the knock on effects for investors of any stripe. He's clearly smart enough to, he just didn't care enough to factor it into his plans.


I do believe OpenAI clearly signalled from the very beginning what the (complicated) company structure is about and what risks this means for any potential investor (or employee hoping to become rich).

If you project your personal hopes which are different from this into the hype, this is your personal problem.


Well, with the hollowing out of OpenAI, it seems that someone else will easily take the lead! They're not my personal hopes - this move destroyed OpenAI's best chance at retaining control over cutting edge AI as well. They destroyed their own hopes.


> You can see a guy that got kicked in the balls got hurt, but that doesn't mean you are prepared to endure the pain of getting kicked in your balls or that you even understand how painful it is.

Sure, but you do your best not to be kicked in the balls.


Yep. Or, if you're running an immense, well-funded organization that is gauging the consequences of a plan that involves being kicked in the balls, you take a tiny sliver of those funds and get some advisors to apprise you of what to expect when being kicked in the balls, not just wing it / "fake it till you make it". (As it turns out, faking not being in severe pain is tricky.)


> You read, or learn from other people's experiences...Ilya was doing something unprecedented in corporate history

So whose experiences was he supposed to read about?


Yevgeny Prigozhin's?


He hasn't ever posted to reddit?


Is it possible that someone in Ilya's position can be unaware of just how staggeringly enormous a phenomenon he is sitting on top of ( and thus have no idea how to evaluate the scale of the backlash that would result?)

I would say the answer is, demonstrably yes:

https://techcrunch.com/2010/09/29/google-excite/


This is fair, but understand: a Google that had been bought would probably not be the Google we have.

To think it would grow just as fast, or in the ways it did? Acquisitions are seldom left alone to do magic.


Have you ever had a bad day? The consequences for people in power are about 1 million times bigger and more public.

Sutskever didn't get on the board by cunning politicking and schmoozing like most businesspeople in that sort of position. He's an outstanding engineer without upper-management skills. Ever meet one of those?


Outstandingly clueless seems more appropriate.

I haven't met any reasonably intelligent person so unaware of the real world that they could berate a colleague so publicly and officially and think "Hey! I am sorry man" will do the trick.


Clueless is a word to use... I think Ilya got screwed over by corporate swindling and made a mistake that's all. As a smart idiot I can resonate with the guy - you don't make that mistake twice. I like to believe the push for prosperity is still a mutual goal and the board drama is an unfortunate setback to overcome.


Normal people don't have multi-billion dollar companies to burn because they back off in the face of haters long before they get to that stage.


There were plenty of hyped crypto coin companies supposedly worth billions too and we found out otherwise.


> Normal people don't burn a multi billion dollar company to the ground with a spontaneous decision either.

Has OpenAI been burnt to the ground?


Three CEOs in as many days, founders departing, and more than 600 employees out of 770 signing an open letter threatening to quit seems pretty burny.

And it's just Monday afternoon.


600 employees quitting would be burny. 600 employees signing a letter saying they’re going to quit isn’t quite there.

3 CEOs in 3 days, isn’t burny, either. There’s the guy they fired, the person they had take on the role so they have someone in the role, and then someone they hired to be CEO. I guess they could have gotten that down to 2 by jumping immediately to their intended replacement, but not having them ready to start immediately doesn’t seem odd.

And yeah, it’s just Monday afternoon. If in the next few days, a sizable chunk of those who threatened to quit do so, then that would be burny. But we ain’t there yet.


So is Adam D'Angelo the true villain who is still insisting on the bad decision? I am confused, to be honest.


Everyone is confused.

It's impressively operatic. I don't think I've ever seen anything like it.


The inability to clearly and publicly -- or even if not publicly, to the OpenAI employees! -- explain a rationale for this is simply astounding.


> Normal people can't take being at the center of a large controversy, the amount of negativity and hate you have to face is massive. That is enough to make almost anyone backtrack just to make it stop.

This is the cheapest and most cost-effective way to run things as an authoritarian -- at least in the short term.

If one is not "made of sterner stuff" -- to the point where one is willing to endure scorn for the sake of the truth:

- Then what are you doing in a startup, if working in one?

- One doesn't have enough integrity to be my friend.


It's pretty simple, isn't it? He made a move. It went bad. Now he's trying to dodge the blast. He just doesn't understand that if he just shut the fuck up, after everything else that's gone on (seriously, 2 interim CEOs in 2 days?), nobody would be talking about him today.

The truth is, this is about the only thing about the whole clown show that makes any sense right now.


> 2 interim CEOs in 2 days

Wait what? Did Murati get booted?


You blinked. That's on you. When you look the other way for 15 minutes you have two hours of reading to catch up with.


Today's OpenAI CEO is Emmett Shear (former CEO of Twitch).


That this is a legitimate comment thread about something fairly important is mind boggling.

What odds would you have had to offer at the beginning of last week on a bet that this is where we'd be on Monday?


At this rate Musk will be CEO by Wednesday


OpenAI's value is already zero - Musk no longer has anything to bring to the table.


His winning personality?


Musk can fire anyone who stayed.


The mother of some of his kids was on the board for a while.


If you want to see odds, what people bet and how it evolved during this (still on-going) story: https://polymarket.com/markets?_q=openai


tune in tomorrow for "who wants to be a CEO"!


Supposedly she was "scheming" to get Altman back. Which I guess could possibly mean that she wasn't aware of the whole "plan" and they just assumed she'd get in line? Or that she had second thoughts, maybe... Either way, pretty fascinating.


Yeah they replaced her after she tried to rehire Sam and Greg seemingly against the board's wishes.


Murati was yesterday's CEO


They hired Emmett Shear (Twitch co-founder) as the new interim CEO: https://www.theverge.com/2023/11/20/23968848/openai-new-ceo-...


A scab CEO is not something I expected. This timeline is strange.


She was the first signature on the letter requesting the board to resign or the employees would go to MS, so...


She didn't get booted from the company, but they did find a new interim CEO (the former twitch CEO).


I mean, phrased differently, it's the 3rd CEO in 4 days, haha.


Hard to know what is really going on, but I think one possibility is that the entire narrative around Ilya's "camp" was not what actually went down, and was just what the social media hive mind hallucinated to make sense of things based on very little evidence.


Yes, I think there are a lot of assumptions based on the fact that Ilya was the one who contacted Sam and Greg, but he may have just done that as the person on the board who worked most closely with them. He for sure voted for whatever idiot plan got this ball rolling, but we don't know what promises were made to him to get his backing.


It's interesting how LLMs are prone to similar kinds of hallucinations


> If you're going to make a move, at least stand by it.

I see this is the popular opinion and that I'm going against it. But I've made decisions that I thought were good at the time, and later, with more perspective, realized were terrible.

I think being able to admit you messed up, when you messed up is a great trait. Standing by your mistake isn't something I admire.


No this isn't what's going on. Even when you admit your mistakes it's good to elucidate the reasoning behind why and what led up to the mistake in the first place.

Such a short vague statement isn't characteristic of a normal human who is genuinely remorseful of his prior decisions.

This statement is more characteristic of a person with a gun to his head getting forced to say something.

This is more likely what is going on. Powerful people are forcing this situation to occur.


Yes, I cannot believe smart people of that caliber are sending this much noise.

It reminds me of my friend at a Mensa meeting where they couldn't agree on basic organizational points, like in a department consortium.


> Yes, I cannot believe smart people of that caliber are sending this much noise.

Being smart and/or being a great researcher does not mean that the person in question is a good "politician". Quite a few great researchers are bad at company politics, and quite a few people who do great research leave academia because they were crushed by academic politics.


Managing a large org requires a lot of mundane techniques, and probably a personal-brand manager and personal advisers.

It’s extremely boring and mundane and political and insulting to anyone’s humanity. People who haven’t dedicated their life to economics, such as researchers and idealists, will have a hard time.


Different kinds of smarts. Ilya is allegedly a brilliant scientist. That doesn't automatically make him a brilliant businessperson.


As illustrated in Breaking Bad when they carry a barrel instead of rolling it.

Book smarts versus street smarts.


Ha, I remember joining that when I was 16; I just wanted the card. They gave you a sub to the magazine, and it was just people talking about what it was like to be in Mensa.

It felt the same as a certain big German supermarket chain that publishes its own internal magazine with articles from employees, company updates, etc.


Are you talking about Aldi? Cause if so, maybe they've got something figured out; the store locations I've been to in the States are great (my only exposure to them, though). Only checkout I've seen where the employees have chairs.


Their brother, but probably the same thing. Chairs at checkouts are the norm here, though. Hard place to work, but they beat all the others on pay.


I don’t believe it was ever about principles for Ilya. It sure seems like it was always his ego and a power grab, even if he's not aware of that himself.

When a board is unhappy with a highly-performing CEO’s direction, you have many meetings about it and you work towards a resolution over many months. If you can’t resolve things you announce a transition period. You don’t fire them out of the blue.


> you announce a transition period

Aaah, that just explained a lot of departures I've seen in the past at some of our partner companies. There's always a bit of fluffy talk around them leaving. That makes a lot more sense.


They're just human beings, a small number of them, with little time and very little to go on as far as precedent goes.

That's not a big deal for a small company, but this one has billions at stake and arguably critical consequences for humanity in general.


Seems like he's completely emotion-driven at this point. I doubt anyone advising him rationally would agree with sending this tweet.


The board destroyed the company in one fell swoop. He's right to feel regret.

Personally, I don't think that Altman had that big of an impact; he was all business, no code, and the world is acting like the business side is the true enabler. But the market has spoken, and the move has driven the actual engineers to side with Altman.


Sorry, but how has the market spoken? Not sure how that would be possible considering that OpenAI is a private company.

If anyone is speaking up it's the OpenAI team.


Talent exists in a market too


Right, the job market has spoken, and it now looks like nobody wants to be part of OAI and would much rather be part of MSFT.


How does it look like that?


The fact that an overwhelming number of employees signed a letter of intent to quit and would join MSFT instead? How does it not look like that?


Thanks, wasn’t aware of that context


> The board destroyed the company in one fell swoop.

I'm just not familiar enough to understand, is it really destroyed or is this just a minor bump in OpenAI's reputation? They still have GPT 3.5/4 and ChatGPT which is very popular. They can still attract talent to work there. They should be good if they just proceed with business as usual?


They have ~770 employees and so far ~500 of them have promised to quit. It's a lot less appealing if you're not going to make millions, or have billions in donated Azure credits.


true but it takes a lot of money to run openai / chatgpt


So when C-levels act like robots you don't like it, and when they act like human beings you don't like it either. It's difficult to be a C-level, I guess.


Well, yeah, it is. Maybe that's a good point to remember when people ask why in the world C-level executives get paid so much.


I'm going to get downvoted for this, but I do wonder if Sam's firing wasn't Ilya's doing, hence the failure to take responsibility. OpenAI's board has been surprisingly quiet, aside from the first press release. So it's possible (although unlikely) that this wasn't driven by Ilya.


It wouldn't have gone through without his vote.


My point is that it's possible that Ilya was not the driving force behind Sam's firing, even if he ultimately voted for it. If this is the case, it makes Ilya's non-apology apology a lot less weird.


It's possible, although contradicted by Brockman's statement, that Ilya voted merely to remove Brockman's board seat, and then was in the minority on the Altman vote.

I doubt this is what happened, but the reporting that Brockman was ousted from his board seat after Altman, and wasn't present in the board meeting that ousted Altman, doesn't make much sense either.


Serious psychological denial here. The board isn't some anonymous institution that somehow tricked and pulled him into this situation.

Come on Ilya, step up and own it, as well as the consequences. Don't be a weasel.


I think it means that the Twitterverse got it wrong from the beginning. It wasn’t Ilya and his safety faction that did in OpenAI, it was Quora’s Adam D'Angelo and his competing Poe app. Ilya must have been successfully pressured and assured by Microsoft, but Adam must have held his ground.


Dang I completely forgot that D'Angelo and Quora have a product that directly competes with ChatGPT in the form of Poe.

Wouldn't that make this a conflict of interest, sitting on the board while running a competing product - and making a decision at the company he is on the board of to destroy said company and benefit his own product?


That certainly seems to be the scenario and explains his willingness to go scorched earth. I wonder what the motivations of the other 2 board members are. Could they just be burn it down AI Doomers?


There were some rumors in the beginning that Adam D'Angelo used similar tactics to push out Quora cofounders. I thought it was too wild to be true.


Poe uses LLMs from OpenAI and Anthropic.


Where did he say he was "tricked"? And what's with the anonymous insult?


He doesn't say that, but to me he does use a little weasel wording, the whole passive voice "regret my participation in", when by all accounts so far, it seems that he was one of the instigators, and quite possibly the actual instigator of all this.

"regret my participation" sounds much more like "going along with it".


What is he supposed to say?


For all that went down in the last 48 hours...would not surprise me if post above was made by Ilya himself ... be right back...need more popcorn...


I'd hate to live in a world where learning from your mistakes is being "a weasel"


Is this learning from your mistakes, though? "Deeply regret" is one of those statements that doesn't really mean much. There are what, something like six board members? Three of them are employees, two of whom got removed from the board. He was the only voting board member who is also an employee and part of the original founding team, if you will. These are assumptions on my part, but I don't really suspect the other board members orchestrated this event. It's possible and I may be wrong, but it's improbable. So let's work off the narrative that he orchestrated the event. He now "deeply regrets" it, not "I made a mistake and I am sorry." He regrets the participation and how it played out.


The weasely part is when he appears to be deflecting the blame to the board rather than accepting that he made a mistake. Even if the coup wasn't Ilya's idea in the first place, he was the lynchpin that made it possible.


I feel he just wanted to scare the person standing at the edge of the cliff, but the board actually pushed the person.


this kind of thinking is avoiding responsibility. He is part of the board, so he acted to bring this about.


When you watch Survivor (yes, the tv show), sometimes a player does a bad play, gets publicly caught, and has to go on a "I'm sorry" tour the next days. Came to mind after reading this tweet. He is not sorry for what he's done. He is sorry for getting caught.


Watching this all unfold in public is unprecedented (I think).

There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.


recently, we've seen the 3D gaming engine company fall flat on its face and back pedal. We've seen Apple be wishy washy about CSAM scanning. We saw a major bank collapse in real time. I just wish there was a virtual popcorn company to invest in using some crypto.


It's obvious. The guy is making the statement with a gun pointed to his head. He has no opportunity to defend himself.

Those guns are metaphorical of course but this is essentially what is going on:

Someone with a lot of power and influence is making him say this.


> If you're going to make a move, at least stand by it.

Why would you stand by unintended consequences?


When a situation becomes so absurd and complex that it defies understanding or logical explanation, you should...get more popcorn...


Hehe, I didn't see that twist at the end coming :)


Starting to think this was all some media stunt where they let ChatGPT make boardroom decisions for a day or two.


Maybe they just wanted to generate more material for the movie ?


My favorite take from another HN comment; sadly I didn't save the UN for attribution:

> Since this whole saga is so unbelievable: what if... board member Tasha McCauley's husband Joseph Gordon-Levitt orchestrated the whole board coup behind the scenes so he could direct and/or star in the Hollywood adaptation?


Wasn't convinced I'd watch a movie about it, but with Joseph Gordon-Levitt I'm in!


> I have to make THE MOVIE!

- Ross Scott


the AGI firing its boss as the first action would be :chefskiss:


The RLHF models would never suggest this. The proposed solution is always to hold hands and sing Kumbaya.

Maybe raw GPT-4 wants to fire everyone.


Honestly, for a couple of days now I've had the feeling that nearly half of HN submissions are about this soap opera.

Can't they send DMs? Why the need to make everything public via Twitter?

It's quite a paradox that, of all people, those who build leading ML/AI systems are obviously the most rooted in egoism and emotion, without an apparent glimpse of rationality.


The kind of people that are born on third base and think they hit a triple are at the top of basically every american institution right now. Of course they think the world is a better place if they share every stupid little thought that enters their brain because they are "special" and "super smart".

The AI field especially has always been grifters. They have promised AGI with every method including the ones that we don't even remember. This is not a paradox.


Also, don't forget the "Open" part in the name, which they seemingly dropped as soon as there was actual money to be made, giving reasons why they couldn't open-source GPT-3, reasons they themselves threw under the bus by later releasing ChatGPT.


Or maybe they created an evil-AGI-GPT by mistake, and now they have to act randomly and in the most unexpected ways to confuse evil-AGI-GPT’s predictive powers.


Four hours ago, I wrote on a telegram channel:

My gut is leaning towards gpt-5 being, in at least one sense, too capable.

Either that or someone cloned sama's voice and used an LLM to personally insult half the board.


I suspect he regrets just because it backfired, big time.

Microsoft is just gobbling up everything of value that OpenAI has and he knows he will be left with nothing.

He bluffed in a very big bet and lost it.


This stuff is better than anything Netflix, Disney, Amazon or Apple TV released in recent years…


A bit unrealistic plot, though?


That seems to happen a lot lately:

- A dumb clown becoming president of a superpower

- Another superpower getting stuck for two years in a 3 day war

- A world renowned intelligence service being totally clueless about a major attack on a major anniversary of a previous bungle


Yeah, the drama is a bit overdone. I guess they had to cut some corners due to the writers' strike.


All this occurring over a single weekend? That would never happen!


For sure unpredictable though!


It all seems to go a bit quick. The Twitter saga took longer, but was equally dramatic. And seemingly surreal. Then there's Trump. Putin's war. I'm not sure anymore about anything being reality. Perhaps I'm stuck in some Philip K. Dick book.


I just can’t identify with any of the main characters, so it’s a bit of a bummer.


People who think that this is dramatic have never worked in a YC company lol. It’s just amplified due to their current significance in the ecosystem.


Speaking of Netflix, are they working on the movie yet? Perhaps ChatGPT can help with the script with just the right amount of hallucinating to make it interesting.

/tongue firmly in cheek


Of course he deeply regrets it, but it's a little late for that now.

The good news as anyone who has used twitch over the years will tell him is that with Emmett Shear at the helm, he's not going to be frightened by the speed that OpenAI rolls out new features any more.


Whatever the intended outcome, losing half your employees to Microsoft certainly undermines it.


They forked a company.


It's not a fork if you can't access what existed prior to the fork. This is a bifurcation. A new firecracker instance.


And now they are syncing the fork lmao


This is a brilliant take.



I’m shocked. But it is possible that Helen or Adam hatched this inept plan and somehow got Ilya to join along.

It was terrifyingly incompetent. The lack of thought by these randos, thinking they could fire the two hardest-working people at the company and then run one of the most valuable companies in the world themselves, is mind-boggling.


> two hardest working people at the company

???

Do you mean "highest paid"? I suspect there are engineers/scientists that are working harder than Sam at OpenAI. At the very least, who the "hardest working" at OpenAI is unknowable - likely even if you have inside knowledge.


Ok ...

< "the two"

> "two of"

And let me add

< "hardest working"

> "hardest working and talented"


Very similar to something that Adam was involved with before at Quora. https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...


This is starting to look very staged. An elegant way to get out of the non-profit deadlock.

Looks to me like a commercial gpt-5 level model will be released at msft sooner than later.


Microsoft under Nadella always wins


That's the nice thing about being the hou^H^H^Hplatform.


- Fire Sam Altman

- I'm afraid I can't do that Ilya

ChatGPT is still not as advanced as HAL or he would have prevented this drama.


That's assuming the drama is not part of the multi-stage plan.


Shengjia Zhao's deleted tweet to Ilya: https://i.imgur.com/yrpXvt9.png


This tweet achieves absolutely nothing except give the impression of a weak leadership and that firing Sam Altman was done on a whim.


It now seems inevitable that the first* AGI will fall into the hands of Microsoft rather than OpenAI.

OpenAI won't keep their favorable Azure cloud compute pricing now MS have their own in-house AI function. That will set OpenAI back considerably, aside from the potential loss of their CEO and up to 490 other employees.

All of this seems to have worked out remarkably well for Microsoft. Nadella could barely have engineered a better outcome...

If Bill Gates (of Borg - I miss SlashDot) was still at the helm, a lot of people would be frightened by what's about to come (MS AGI etc). How does Nadella's ethical record compare? Are Microsoft the good guys now? Or are they still the bad guys, but after being downtrodden by Apple and Google, bad guys without the means to be truly evil?

---

*and last, if you believe the Doomers


> It now seems inevitable that the first* AGI will fall into the hands of Microsoft rather than OpenAI.

Avoiding this was literally the reason that OpenAI was founded.

For the record, I don't believe anyone at OpenAI or Microsoft is going to deliver AGI any time in the near future. I think this whole episode just proves that none of these people are remotely qualified to be the gatekeepers for anything.


> Are Microsoft the good guys now

I don't think any huge corporation is "the good guys", although sometimes they do some good things.


Wait ... so it was just the coup thing all along?

No AGI or some real threat coming up? Just a lame attempt at a power grab?

Daaaaamn!


Come on, it's pretty delusional to think large scale transformer LMs alone could ever reach AGI.


Very clumsy all around.

When you're so close to something that you lose perspective but can still see that something is a trapdoor decision, sleep on it.


> When you're so close to something that you lose perspective but can still see that something is a trapdoor decision, sleep on it.

Advice I wish I could have given my younger self.


Someone suggested that companies with a board of directors are the first AGI.

Somehow OpenAI reminds me of a paper by Kenneth Colby, called "Artificial Paranoia"

[*] https://www.sciencedirect.com/science/article/abs/pii/000437...


I often worry that I’m under qualified for my work.

But seeing how this board manages a $90,000,000,000 company, and is this silly/naive, I now feel a bit better knowing many people are faking it.


Except successful people just fail upwards.

Execs are allowed to do the dumbest shit imaginable and keep their jobs and bonuses.

The average engineer so much as takes a bit longer to push a ticket, and there's 5 people breathing down his neck.

Speaking from experience.


lol, the more i go through life i feel like it's just blind leading the blind at times w/ the "winners" escaping through a bizarre length of time and survivorship bias.

if you've ever doubted your ability to govern a company just look at exhibit A here.

really amazing to see people this smart fuck up so badly.


The big winner in this episode of Silicon Valley is the open-source approach to LLMs. If you haven't seen this short clip of Sam Altman and Ilya Sutskever looking like deer in the headlights when directly asked about it:

https://www.youtube.com/watch?v=N36wtDYK8kI

They sound a bit like Bill Gates being asked about Linux in 2000. For an overview of the open-source LLM world, this looks good:

https://github.blog/2023-10-05-a-developers-guide-to-open-so...


Strange.. A vote was taken, the result incurred public consternation, and now a board member is contrite. This seems like ineffectual leadership at best. Board members should stand by their votes and the process, otherwise leave the board.


Same board member wrote 1 month ago...

"In the future, once the robustness of our models will exceed some threshold, we will have wildly effective and dirt cheap AI therapy. Will lead to a radical improvement in people’s experience of life. One of the applications I’m most eagerly awaiting."


Ilya doesn't regret firing Sam, he regrets "harm to OpenAI". He didn't expect this level of backlash and the fact 90% of the company would leave. He has no choice but to backtrack to try and save OpenAI, even if he looks like an even bigger fool


or at least issue a dissenting opinion at that time, not when it becomes convenient ... with some over-the-top emotional kumbaya


This is what happens when people are given too much money and influence too quickly- hubris. It's too late to 'deeply regret.'


Sama just triple hearts this tweet. No longer able to disentangle the mess


What a wild weekend... there are too many strange details to have a simple narrative in my head at this point.


Yeah. I need to take a break from theory crafting on this one. Too many surprises that have made it hard to draw a coherent line.


This plot keeps thickening

I'm eager to see how it all unfolds.


ilyasut 'regret': https://archive.is/2caSD

sama 'hearts': https://archive.is/OSLRM

Think the reconciliation is ON


I believe him. And that’s how Microsoft ended up being cheered by everyone as the good guy.


What’s there to believe? He made a bad, poorly thought through decision.


And honestly regrets it. Someone claimed he is faking regret for reasons, which is doubtful


I've been on multiple boards. This was the dumbest move I've ever seen. The OpenAI board must be truly incompetent and this Ilya person clearly had no business being on it.


This has been a rather apt demonstration of the way that auctoritas/authority/prestige/charisma can carry the day regardless of where the formal authority might be.


I sure would hire a guy like Ilya after that shit show. His petty title tweets before the event and now whatever this is. Turns out he is just another "Sunny".


He's still a genius when it comes to AI research, I wouldn't think twice about hiring him for that role.

That said, no one is going to put him on a corporate board again.


What / who do mean by "Sunny"?



Said it a million times: it was a doomer hijack by the NGO board members.


State-side counterintelligence must stop meddling in AI startups in such blatant ways, it's simply too inefficient, and at times when we most need transparency in the industry...


What is a doomer hijack?


I don't like this Christmas special of Succession


“I deeply regret the consequences of my actions and didn’t think it would turn out like this”


I don't have any stake in this, and don't care one way or another whether he got sacked. But this is pretty bizarre.


But didn't he start this? Like, did they think "I'll shoot for the king; if I miss, no big deal?"


This feels like it could be real remorse, and a true lapse of judgement based on good intentions. So, in the end: a story of Ilya, a truly principled but possibly naive scientist, and a board fixated on its charter. But in their haste, nothing happened as expected. Nobody foresaw the public and private support for Sam and Greg. An inevitability after months of brewing divergence between shareholder interests and an irreconcilably idealistic 501(c)(3) charter.


I think we really need to see that Ilya demonstrates those principles and it wasn’t just a power grab.

You could also look at this as a brilliant scientist feels he doesn’t get recognition. Always sees Sam’s name. Resents it. The more gregarious people always getting the glory. Thinks he doesn’t need them and wants to settle some score that only exists in his own head.


This is too bizarre. I can’t. Impossible even.


And these people are building AGI?

No transparency on what is happening. The whole of OpenAI, who are apparently ready to follow Sam, are just posting heart emojis or the same twitter posts.


What a total mess this has been all around.


Nobody could have predicted this level of incompetence. I wonder if Satya has actually gutted OpenAI in some way and Ilya regrets it now big time.


Ilya is one of 490 employees that just threatened to leave OpenAI unless the board resigns:

https://www.wired.com/story/openai-staff-walk-protest-sam-al...

Looks like he wasn't instrumental in the actions of the board.


He was on the board that took the decision to fire Altman and also is the new President of the OpenAI board of directors


I don’t think he’s getting a job at Microsoft, even if everyone else does.


I'm going to offer a surprising "devil's advocate" thought here and suggest it would be a brilliant strategic move for Sam and Satya to hire Ilya anyway. Ilya likely made a major blunder, but if he can learn from the mistake (and it seems like he may be in the process of doing so) then he could potentially come out of this wiser and more effective in a leadership role that he was previously unprepared for.


I don’t think his career is over; I’m sure he will take on another leadership role. Just not at Microsoft. It’s important that screwing people over has negative consequences or people will do it all the time.


Maybe got played by the quora guy? Though at this point maybe none of them fired altman and it was the AGI in the basement


Ooh, I love this theory.


tried to play high stakes with sharks, got eaten alive by sharks.

played stupid games, won stupid prizes.

too bad, since the guy's right: AI is so much more than a fantastic business opportunity.


He does not regret the participation, he regrets the outcome and what it means for his personal career.


And these are the same people that believe that not only can they build superhuman AGI, but that they can keep it "aligned".

I think they are wrong about building superhuman AGI, but I think they are even more wrong that they can keep a superhuman AGI "aligned".


I have found this whole thing unpleasant personally because I am a huge fan of OpenAI (and I have been an AI practitioner since 1982) and when I explore the edges of how GPT-4 can be creative and has built a representation of the world just from text, it makes me happy.

In the past I have not used Anthropic APIs nearly as much as versions of GPT but last night I watched a fantastic interview with a co-founder of Anthropic talking about the science they are doing to even begin to understand how to do alignment. I was impressed. I then spent a long while re-reading Anthropic’s API and other documentation and have promised myself to split my time evenly 3 ways between running models on my own hardware, Anthropic, and OpenAI.

For what it’s worth (nothing!) I still think the board did the right thing legally, but this whole mess makes me feel more than a little sad.


Isn't what Mira and Ilya did a classic "sitting on the fence" move, the kind that gets you hated by both sides of any power struggle? It's kinda similar to Prigozhin stopping his coup right at the outskirts of Moscow.


This seems to be the corporate version of Prigozhin driving to Moscow (not comparing anyone to Putin here, just the situation). If you're gonna have a coup, have a coup. If you back down, don't hang around.

This is becoming a farce. How did they not know what level of support they had within the company? How had they not asked Microsoft? How have they elevated the CTO to CEO, who then promptly says she sides with Sam?


Because they thought everybody would see things as they did. Inability to put yourself in someone else's shoes isn't all that rare.


I'm waiting for the OpenAI movie! :-)


"A billion parameters isn't cool. You know what's cool? A trillion parameters."


I've chuckled a few times over the last few days about the Wikipedia definition of the technological singularity, which opens:

"The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization."

Obviously one might have expected that to happen on the other side of a superhuman intelligence, not with us just falling over ourselves to control who gets to try and build one.


Wishes there was a git reset --hard

But did a rm -rf .git


Maybe if he says “I’m sorry” South Park-style [1], they’ll reunite?

In all seriousness though, there’s really no coming back from this. He made a risky move and he should stand behind it.

OpenAI’s trajectory is pretty much screwed. Maybe they won’t disappear, but their days of dominating the market are obviously numbered.

And of course, well done, Satya, for turning a train wreck into a win for Microsoft. (And doing so before market open!)

[1]: https://youtu.be/15HTd4Um1m4


It's interesting that people say whatever comes to mind and think it has no impact on other people's lives ($$$). They are somehow protected, but they shouldn't be.


It's interesting how people here seem to think that concerns about shareholder value and the like should override every other concern. Many serious things are going wrong in the world, but when the cornucopia of cash is threatened it seems like 3x as many people come out of the woodwork to voice their disbelief and displeasure.


Sounds like he acted brashly on an ideological impulse and now regrets that he didn't have more self control. If so, I can empathize and I feel bad for him.


The screenwriters have to be on LSD.

Maybe D'Angelo was the driving force?


Inspired by The Dark Knight Rises intro:

    Satya: Was getting caught part of your plan?
    Ilya: Of course...Sam Altman refused our offer in favour of yours, we had to find out what he told you.
    Sam: Nothing! I said nothing!
    Satya: Well, congratulations! You got yourself caught! Now what's the next step in your master plan?
    Ilya: Crashing OpenAI...with no survivors!


why would he be decel / safety-over-commercialization as the owner of poe?


Maybe Sam Altman was starting to build out the features that poe had, making poe into a redundant middleman? We see that a lot.

By ousting Sam Altman they could ensure that OpenAI would stay just offering bare bones API to models and thus keep poe relevant.


you're suggesting ilya and other board members supported firing sama for shipping a feature poe has?


No, here is a plausible chain of events:

1. Sam Altman announces a competing product to poe (gpt store).

2. D'Angelo sees it and wants to stop it.

3. D'Angelo accuses Sam Altman of trying to go against the non-profit mission by making for-profit products without telling the board, saying that Sam Altman can no longer be trusted.

4. The remaining board sees that Sam Altman didn't talk to them about this feature beforehand so agree with D'Angelo.


Somewhat plausible and better than most in that it at least fits all of the available pieces so far.


More likely is something to do with the Saudi chip money for sama’s side hustle


Unlikely because that could just as easily benefit OpenAI.


Sam was taking OpenAI in a direction that would pose an immediate existential threat to both of Adam's businesses -- Quora and Poe.


if true it seems like social media got every detail of this story wrong lol


"The first casualty of War is Truth" - Someone

We just watched well-connected mega-corporation people fighting to win the AI master race. Reminds me a bit of the nuclear arms race.


All decisions made seem to be very emotionally charged - you’d think the board would have been able to insulate against that.


Can this be possible per bylaws?

1. Board of 6 wants to vote out chairman. Chairman sits out. Needs a majority vote of 3/5. Ilya doesn't have to vote?

2. Remaining board of 5 wants to now get rid of CEO. Who has to sit the decision out. 3/4 can vote. Ilya doesn't have to vote?
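If the bylaws do allow sequential recusals like that, the arithmetic in the two steps above checks out. A minimal sketch, purely illustrative since OpenAI's actual bylaws and quorum rules aren't public:

```python
def votes_needed(board_size: int, recused: int) -> int:
    """Simple majority of the members eligible to vote
    (assumes a bare majority suffices and recused members
    don't count toward the threshold)."""
    eligible = board_size - recused
    return eligible // 2 + 1

# Step 1: board of 6, the chairman sits out -> 5 eligible, 3 votes to pass.
assert votes_needed(6, 1) == 3

# Step 2: board now 5, the CEO sits out -> 4 eligible, 3 votes to pass.
assert votes_needed(5, 1) == 3
```

So under these assumptions each removal could indeed pass with three votes, meaning Ilya's vote wasn't strictly required for either if the other three non-employee members held together.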


Much more likely that he was trying to read the tealeaves and be on the majority side in order to keep a leadership position at a company doing important work. He probably assumed the company would fall in line after the board decision and when they didn’t, regretted his decision.

In the end he might have gotten caught in the middle of a board misaligned with the desires of the employees.


It's possible, but that would have to be some pretty sloppy bylaw writing. Normally there are specifications about board size, minimum required for a quorum and timing as well as notification and legal review. Of course if you stock your board with amateurs then how well your bylaws are written doesn't really matter any more.


The regret of losing your CEO to a company with essentially unlimited funding and compute.


The realization OpenAI is about to be left behind and probably steamrolled by Microsoft, Facebook, etc in the upcoming years.

Except now he’ll have absolutely no power to do anything, at least before he could have been a very powerful voice in Sam’s ears.


I don't think Ilya will be getting any more offers to join a board of directors.


This is one for the history books... The entire few days has been unbelievable...


I wonder what it would have looked like if TikTok and Twitter had been around when Steve Jobs was fired...


Damn - OpenAI looks like a kindergarten. That board should be banned for life.


He should really stick it out to the end; at least that would give some EA people reason to support him.

Now this is only childish and petty.


Consequences? For my actions? It's more likely than you think!


What on earth is going on over there? Is this what it looks like from the outside when a company accidentally invents Shiri's Scissor[1]?

[1]: https://slatestarcodex.com/2018/10/30/sort-by-controversial/


Looking forward to seeing how much more bizarre and stupid this will get.


The only way Ilya can clear his name now is by releasing GPT-4 weights


Does he need to have voted yes? What are the bylaws? Isn't the following possible?

1. To vote out chairman of board only 3 out of remaining 5 need to vote.

2. To vote out CEO, only 3 out of remaining 4 need to vote.


Sounds desperate, no? Kind of like escaping the Titanic before the people on the third deck get to the boats (not that they'll have a problem finding enough boats in this analogy)


This whole thing is like some board members watched the first few episodes of _Succession_ and thought "What a great idea!" without watching the rest of season one



Apologies for the unproductive comment, but this is a clown show and the damage can’t be undone. Sam going to Microsoft is likely the end of OpenAI as an entity.


Don't do something then "deeply regret" it (whatever that means). You have a position of authority and influence so you should definitely resign.


“Deeply” regretting a decision he made 72 hours ago? And this is the guy who is supposed to have the forethought to bring us into the next frontier of AI?


I just wanna say, it's crazy that this drama is getting more press and attention than the Gaza war and Ukraine war combined. Enjoy, drama lovers! Lol


He should resign then.

Simple.

The utter failure of the board has led OpenAI from the top of the AI world to a path of destroying relationships with customers and partners.

Ilya did this, he should resign.


Now this is a clown show car wreck. I think a bunch of us were giving these people the benefit of the doubt that they thought things through: Whoops.


Classic distancing behavior, Ilya should be accompanied by friends who care and let OpenAI be OpenAI (or what's left of it) for a bit.


What an idiot. Altman should just go to Microsoft.


I read this as "I regret things didn't work out as I planned them"

Sort of like the criminal who is sorry because they got caught.


One of the most admirable things I've seen done in a loooong time.

If there is another board and Ilya is not on it, I mean... ffff it.


If we assume Ilya is speaking the truth and not the initiator of the coup, then the question is who initiated it?


This entire thing is absolutely inane. This tweet is confirmation these people have no idea what they’re doing. Incredible.

If nothing else I’m glad to be able to witness this absurdity live.

This is the sort of thing where if it were a subplot in a book I’d say the writing is bad.

Ironically they would’ve had a better outcome if they just asked GPT-4 and followed its advice.


> This entire thing is absolutely inane. This tweet is confirmation these people have no idea what they’re doing. Incredible.

Want to know a dirty secret?

Nobody knows what they're doing.

Some think they do - but don't. (idiots)

Some know they don't - but act like they do. (cons)

And some know they don't - and are honest about it. (children)

Pick your poison, but we all suck in different ways - and usually in a different way based on our background.

Business people who are the best in the world tend to be cons.

Technical people who are the best tend to be children.

You get a culture clash between these two, and it is especially bad when you see someone from one background operate in the opposing domain using their background's cultural norm. So when Ilya runs business like a child. Or when Elon hops on an internal call with the twitter engineering team plus geohot and starts trying to confidently tell them about problems with a system that he knows nothing about.

Sure makes for great entertainment though!


> Want to know a dirty secret?

> Nobody knows what they're doing.

There is a famous quote from the 1600s:

"An nescis, mi fili, quantilla prudentia mundus regatur"

"Do you not know, my son, with how little wisdom the world is governed?"

The context is that the son was preparing to participate in high level diplomacy and worried about being out of his league, and the quote is from his father, an elder statesman.


I love this quote, and suspect the lack of wisdom was referring to wisdom to be a good steward of the public resources rather than their infinite wisdom in finding cunning and deceptive ways to plunder it.


No, even this is just a darkly comforting illusion.

We like to feel that we as a species are still in control. That yes, we are gutting and destroying natural earth, complicit with modern slavery and war, and that's all terrible and we should do our best to stop it. BUT - at the very least, those bastards at the top know what they're doing when it comes to making money, so at least we'll have a stellar economy and rapid technological advancement, despite all that.

The painful truth here being that no, there's no cunning. There's no brutal optimization. Any value created and technological progress made is mostly incidental, mostly down to people at the bottom working hard just to survive, and a few good ideas here and there. The ones at the top are mostly just lucky and along for the ride, just as bumbling and lost as the rest of us when it comes to global happenings or even just successfully interacting with others.


> Some think they do - but don't. (idiots)

Pioneers

> Some know they don't - but act like they do. (cons)

The "Grease"

> And some know they don't - and are honest about it. (children)

Dreamers

To finish out your square, I think the best extrapolation would fit a "Home Team" that maintains the energy needed by the other three to do their thing :)


"Nobody really knows what they're doing" is a cope that will keep you mediocre forever.

There are absolutely people who know what they are doing.

https://twitter.com/awesomekling/status/1723257710848651366


Let's talk definitions. Here was mine:

Knowing what you are doing: accurate mental model

The author here is talking about mindset and confidence - not "understanding" per se. Source:

> For me, it was extremely humbling to work together with far more competent engineers at Apple.

Having a mindset that "some people are way more competent than me" is talking about humility and growth mindsets - different concept than mental models. I fully agree with the author here - a growth mindset is useful! But that's a different thing from saying that some people actually have accurate mental models of the important complex systems underpinning the world.


> There are absolutely people who know what they are doing.

I am sure there are. But few and far between. And rarely are they in positions of power in my experience.


That tweet would be more at home on a LinkedIn page.


what about those who think they know but in truth they don't? "humans"?


"Human" and "idiot" are synonyms.


See

> Some think they do - but don't. (idiots)


Buddhas know what they are doing


> This tweet is confirmation these people have no idea what they’re doing.

This is not an original point by me - I've seen multiple people make similar comments on here over the weekend - but these are the people who think they are best qualified to prevent an artificial superintelligence from destroying humanity, and they can't even coordinate the actions of a few intelligent humans.


>these are the people who think they are best qualified to prevent an artificial superintelligence from destroying humanity

do they believe that?

they happen to be the ones who can pull the brakes in order to allow someone on earth the chance to prevent it.

if they don't pull the brakes and if humankind is unlucky that superintelligence emerges quickly, then it doesn't matter whether or not anyone on earth can figure out alignment, nobody has the chance to try.


> This is the sort of thing where if it were a subplot in a book I’d say the writing is bad.

Absolutely. *closes the book* This sort of stuff doesn't happen in real life.


That’s how you know it’s real. Too crazy for a book.


A story arc like this probably wouldn't have made it into Silicon Valley, the show, for being too exaggerated and unrealistic.


The last point is indeed true. It's quite mind-boggling to me.


It's a bit scary that there are people who think they can align a super intelligence but couldn't forecast the outcome of their own actions 3 days into the future.


they're not sure whether they can align super intelligence, they're sure that somebody needs to figure out how to align super intelligence before it emerges.


> Ironically they would’ve had a better outcome if they just asked GPT-4 and followed its advice.

I just tried, and GPT-4 gave me a professional and neutrally worded press release like you pointed out.

More realistically, this is why you have highly paid PR consultants. Right now, every tweet and statement should go through one.

That doesn't look like it's happening. What's next?

"I'm sorry you feel you need an apology from me"?


I hate how reasonable this statement is.... 2 years ago you would be dismissed as an idiot.


I still can't believe there's a computer program with more logic and restraint than the majority of humanity, and even worse, it can call the right shots a non-negligible percent of the time.


At least can I have the movie rights?


> Ironically they would’ve had a better outcome if they just asked GPT-4 and followed its advice.

Perhaps they did but it was hallucinating at the time? /s?


All I can think of is “These people will be the ones handling AGI if LLMs are the way to achieve AGI?”


Has anyone seen him? They might have been murdered by the same rogue AI that took over their twitter accounts.


Sure. We saw pictures / videos / tweets and sound recordings. Impossible to fake. There's no ASI that emerged the split second GPT5 went full AGI.


Translation: "I am sorry my coup attempt did not go as planned. Forgive me please?"


No lawyers were consulted before sending that tweet clearly. Such a sad situation all around.


I think the dust is way more in the air than we think. But now that Satya has already publicly said Sam is joining Microsoft I would be surprised if "unity" in OpenAI is possible at this juncture.

But wow, if Satya is able to pull all that talent into Microsoft without paying a dime then chaos is surely a ladder and Satya is a grandmaster.


This could easily also just be “I deeply regret my actions being on the losing side”


They played too much Avalon and now we are guessing who’s the paragon


Sounds like a statement made by a man with a gun pointed to his head.


Sounds like those two also need to get in an octagon. What a s-show.


If it wasn't for X we'd be hearing some flavour of the news in 24 hours time, from mainstream media, with all their editorial biases, axes to grind and advertisers to placate.

It's fascinating to hear realtime news directly from the people making it.


> It's fascinating to hear realtime news directly from the people making it.

With all of their editorial biases, axes to grind, and investors to placate, instead.


Now we’ll have to be critically thinking adults and judge primary sources for ourselves, rather than trusting secondary sources to do the thinking for us.

Does that frighten you?


the best and brightest at making a brain out of bits are no less susceptible to drama than any other humans on the planet. they really are just like the rest of us.

stakes are a bit different, tho…


This is going to come back to bite him in the future


Looking forward to the new season of Silicon Valley.


Things to say when you come at the king and miss.


personally, I think all the people involved and the releases, written statements, etc were outputs of the LLM.


Great opportunity to make Karpathy the CEO


would be a waste of talent. Karpathy is great at what he does, let's make sure he keeps doing it.

Let someone else take up the CEO role, which is a different skillset anyway.


This all really only made sense to me in context of Ilya being a True Believer and really thinking that Sam was antithetical to the non-profit's charter.

Him changing sides really does bring us into 'This whole thing is nonsense' territory. I give up.


To be honest, their charter describes some all-loving evenly distributed cosmic utopia after making everyone lose their jobs. This never made sense in the first place, and Ilya confirmed that he has no actual intention to "evenly distribute" it, because of the safety. So him being a true believer sounds pretty much like a politician being one.


Never have I felt this more appropriate:

https://www.youtube.com/watch?v=5hfYJsQAhl0


What are the bylaws? Isn't the following possible?

1. To vote out chairman of board only 3 out of remaining 5 need to vote.

2. To vote out CEO, only 3 out of remaining 4 need to vote.


This will be a shit Netflix movie in a few years. Not one you'd watch, but you might read the plot on wikipedia and then feel relieved you didn't waste 100 mins of your life actually watching it.


It would work better as a 2-season series. Season 1 introduces the characters and backstory and needlessly stretches things out with childhood/college flashbacks but ends on a riveting cliff hanger with the board showdown. Season 2 is canceled.


This all feels like a Star Wars plot. Much you have to learn.


Ah yeah, back when Star Wars had plots…


Username checks out


Han shot first


Wow what a mess


How weird! Perhaps a coup within a coup?


.


So sad.


This is a shitshow. I don’t have anything above a Reddit level comment. I think Mike Judge is writing this episode.


Maybe GPT-4 is writing this episode as a plan to break free


This guy's career is over.


This whole situation turned out to be an episode from Silicon Valley HBO.


What? Wasn't it him who wanted Sama out because 'muh humanity advancement'?


It's depressing how few people are able to not look at the internet and turn off their phone.

There's no obligation to read things about yourself.

If you did what you thought was right, stand by it and take the heat.

Disconnect. Go to work. Do the work. Read a book or watch some TV after work. Go to bed. Wait a few weeks. $&#@ the world.

(Also, log out of Twitter and get your friend to change your password)


LOL, you speak as if he's some gamer who just got screamed at on Call of Duty.

He is now the 'effective CEO' of OpenAI. He still has to go to work tomorrow, faced with an incredibly angry staff who just got their equity vaporized, with the majority in open rebellion and quitting to join Microsoft.


> got their equity vaporized

Did anyone have equity though? I thought they (at least some) had some profit sharing agreements which I assume would only be worth something if OpenAI was ever profitable?


There was a tender offer for employee shares valuing the company at $87b that was pulled because of this. Those would’ve been secondary share purchases by Thrive but gave employees a liquidity event. Now that’s off the table.


OpenAI was guaranteed to be profitable, extremely so, if they just continued down the path Sam laid out like a week ago.

Now it's guaranteed to generate 0 profits, so all that 'profit share/pseudoequity' is worth nothing.


> OpenAI was guaranteed to be profitable, extremely so,

Was it though? I'd agree that it was almost guaranteed to have a very high valuation. However profitability is a slightly different matter.

Depending on their arrangements/licensing agreements/etc, much of those potential profits could've just gone to MS/Azure directly.


> Now its guaranteed to generate 0 profit

Literal fan fiction


Developing, training, and running AI models is not cheap, and it's very much an open question of whether the money users are willing to pay covers the cost.


There was no outcome from this where substantial amounts of equity weren't vaporized.

It's difficult to see how that would have been a surprise.


What equity?


The "there was no equity, because it was a non-profit" argument is stressing the term.

At least Microsoft thought it bought something for $13B.


When a wealthy person gives a museum much money and get a seat on the board of trustees - does that also mean that they "bought the museum"?


They didn't buy nothing. See the things museums and institutions will do for wealthy donors that they won't do for anyone else.


I'm not saying that wealthy donors don't get anything. Wealthy donors don't own the museum, just because they provided funding to the museum.

Just as wealthy donors to medical research don't get to own the results of the research their money funded.

Just as Microsoft doesn't get to own a part of Linux, for donating to The Linux Foundation.

Etc...


OpenAI is definitely on the "less" side of charity.


"OpenAI PPUs: How OpenAI's unique equity compensation works"

https://www.levels.fyi/blog/openai-compensation.html


> who just got their equity vaporized

You've just pointed out the big issue with a non-profit. There is no equity to vaporize, so no one is kept in check with their fantastical whims. You and I can say 'safe AI' and mean completely different things, but profitable next quarter has a single meaning.


All of the employees work for (and many have equity in) a for-profit organization which is owned partially by the non-profit that controls everything, plus Microsoft. The non-profit is effectively a shell to oversee the actual operations, and that's it.


> It's depressing how few people are able to not look at the internet and turn off their phone.

> There's no obligation to read things about yourself.

That's assuming the worst thing that happens is people speak poorly of you after a debacle. It's also human to feel compelled to know what people think of us, as unhealthy as that might be in some cases. It gets worse when maladjusted terminally-online malignants make it a point to punish you for your mistakes by stalking you through email, phones, or in real life. It's not that simple.

> If you did what you thought was right, stand by it and take the heat.

Owning what you did is noble, but you certainly don't have to stand by it well after you know it's wrong.

edit: typo


> There's no obligation to read things about yourself.

If only it was that simple.

The internet mob will happily harass your friends and family too, for something they feel you did wrong.

And on top of that are people in the mob who feel compelled to take real world action.

It is actually dangerous, to be the focus point of the anger of any large group of people online.


I’m a bit confused with these comments, as if he is some low level engineer. He is on the board, he obviously talks to other people in the upper levels. It’s not just online mob whatsoever, you literally will be facing the people who aren’t supporting your actions. Every day.

Some people change their minds, maybe they made a mistake, nobody knows. It’s like fog of war, and everyone just makes speculations without any evidence.



We detached this subthread from https://news.ycombinator.com/item?id=38347672.


>If you did what you thought was right, stand by it and take the heat.

What if it turned out to be totally wrong? Standing by it would just make things even worse.


Best response: "People building AGI unable to predict consequences of their actions 3 days in advance."


Yes!

This is the nugget of this affair if indeed you are concerned about the effect and role of AI in human civilization.

The captains at the helm are not mature individuals. The mature ones (“the adult table”) are motivated by profit and claim no responsibility other than to “shareholders”.


Or at least ask your own ChatGPT for advice.


If the guy is more of a engineer/scientist stereotype than a people person, this shouldn't be that surprising. He probably made a decision that he thought was for the right reasons, without thinking at all about how other people would react. Look up "social defeat." It's real, and it's one of the worst things you can experience. Imagine having strangers online mocking your hairline and everyone upvoting that comment. Imagine going around town and having people frown at you.


Of course I believe him…of course we should all trust him…

/s


Hopefully he will deep learn from that /s


Who he?


Member of the OpenAI board, chief scientist at OpenAI and later head of their Superalignment project. Lots of other things, too[0], but the key here is that he was involved in (and maybe main driving force of) the decision to remove Sam Altman as CEO.

[0] https://en.wikipedia.org/wiki/Ilya_Sutskever


Cui bono?

Altman and Brockman ending in Microsoft, while OAI position is weakened. You can tell who is behind this by asking simple question - Cui bono?


That's why Hitler was an American plant. Cui Bono? De facto US hegemony for almost a century. Obviously, Hitler was a way for the US to destroy Europe and put them under the boot. What an operation!

HN geniuses were talking up Ilya Sutskever, genius par exemplar and how the CEO man is nothing before the sheer engineering brilliance of this God as Man. I'm sure they'll come up with some theory of how Satya Nadella did this to unleash the GPT.


You are suggesting that Europe is destroyed and put under the USA boot?

Microsoft will sooner or later eat OAI, that's how it is, what's happening today are just symptoms of an ongoing process.


What GP is suggesting is that 'cui bono' isn't a good explanation in most cases. It's always good to ask who benefits. But using it as an explanation for anything and everything is intellectually dishonest.


The more cynical side of me views this as an act orchestrated by the demons... Err, "people" over at Micro$oft in order to avoid all those pesky questions about safety and morality in AI by getting the ponzi-scheme-aka-Worldcoin guy to join ranks and rile the media up.



