
As of 10am PT, 700 of 770 employees have signed the call for board resignation. [1]

[1] https://twitter.com/joannejang/status/1726667504133808242




Given that 90%, including leadership, have signed, it seems a bad career move for the remaining people not to sign, even if you agreed with the board's action.


I think the board did the right thing, just waaaay too late for it to be effective. They’d been cut out long ago and just hadn’t realized it yet.

… but I’d probably sign for exactly those good-career-move reasons, at this point. Going down with the ship isn’t even going to be noticed, let alone change anything.


Agreed. Starting from before the Anthropic exodus, I suspect the timeline looks like:

(2015) Founding: majority are concerned with safety.

(2019) For-profit formed: a mix of safety and profit motives (majority still safety-oriented?).

(2020) GPT-3 released to much hype, leading many ambition chasers to join: the profit-seeking side grows.

(2021) Anthropic exodus over safety: the safety side shrinks.

(2022) ChatGPT released, generating tons more hype and tons more ambitious profit seekers joining: the profit side grows even more, probably quickly outnumbering the safety side.

(2023) This week's shenanigans.

The safety folks probably lost the majority a while ago. Maybe back in 2021, but definitely by the time the GPT-3/ChatGPT-motivated newcomers were in the majority.

Maybe one lesson is that if your cofounder starts hiring a ton of people who aren’t aligned with you, you can quickly find yourself in the minority, especially once people on your side start to leave.


This is why I never understood people resigning in protest, such as was the case with Google’s military contracts. You simply ensure that the culture change happens more swiftly.


There's always other companies. Plus sometimes you just gotta stick to your values. For the Google military contracts it makes even more sense: the protest resignation isn't just a virtue signal, it's also just refusing to contribute to the military.


If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.

Unless you're not actually the best and brightest that your country can offer.

If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.


> If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.

Given that Google is an American company, do you believe contributing to the American Department of "Defense" increases, or decreases, the amount of military action involving the USA?

The American military isn't called "world police" for nothing, and just like the cops they're sticking their noses where they don't belong and making things worse. I can understand why people would want no part in furthering global violence and destitution.

> If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.

Really? There's an obligation to further the American global ambition through contributing militarily? You can't think of any other way to spread American culture and values? To share the bounty of American wealth?


We already have nation states and wannabe nation states that take shots at us when they feel the opportunity is there, despite us being the dominant military around the world. As does France, which has been a leading military power for longer than we have.

That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.

I think the planet has relatively finite resources and that I'm god damned lucky to have been born into a nation with the ability to secure resources for itself. I enjoy my quality of life a great deal and would like to maintain it. At a minimum. Before helping others.

If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.

If you're the kind of person who feels strongly for the plight of, for example, the Palestinians, you should recognize that the only way to deter those kinds of outcomes is to establish the means to create and maintain sovereignty. That requires a combination of force and manpower.


> If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.

But I thought you said if we want to fix something we should do so from within the system? I'm interested in ending American imperialism; by your logic, isn't the best place to do that from within the USA?

> We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there

From which nation state do you feel an existential threat? I haven't heard "they defend our freedoms" in a very, very long time, and I thought we all knew it was ironic.

> I think the planet has relatively finite resources

I'm curious about this viewpoint, because it seems to necessarily imply that the human race will simply die out when those resources (and those of the surrounding solar system) are exhausted. Is sustainability just not a concept in this worldview?

> That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.

It seems in the case of the USA, the "functional defense" is more often used to destabilize other nations, and arm terrorists that then turn around and attack the USA. It's really interesting you brought up Palestinian liberation as an example, because really one of the only reasons Israel is able to maintain its apartheid state and repression of the Palestinians is because of USA aid. In your understanding, both the Israelis and the Palestinians should arm up and up and up until they're both pointing nukes at each other, correct? That's the only pathway to peace?



You're proving my point. If we stopped working on our defense, somebody else would be doing this to us.

If your starting point of logic though is "America Bad" then your moralizing isn't about working at Google or not.


But very few other countries are doing it to the countries to whom the USA is doing it, thus I disagree with your contention. It seems to me that if the countries that are doing these awful things stop doing these awful things, the awful things will in fact stop happening, at least for a time.

As for to whom the awful things are happening, that's practically moot: humans are humans. I don't really understand why I should accept bad things happening to humans just because I was born on one side of an invisible line and they were born on another. Seems extremely fallacious and irrational, if not sociopathic.

> If your starting point of logic though is "America Bad" then your moralizing isn't about working at Google or not.

The discussion is about the evils perpetrated by the American military-industrial complex, and why people may not want to work for companies that participate in this complex, Google being one of those companies. I similarly won't work for Raytheon or Halliburton, for obvious examples. So yes, it's not about working at just Google.

Though for what it's worth, I actually agree with you that the more ethical course of action would be to stay at Google, try to get into a military-adjacent project, and then sabotage it, probably via quiet quitting or holding lots of unnecessary meetings and wasting other people's time. This is directly out of the CIA's playbook, in fact. PDF: https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...


Defense and offense don’t seem easily separated when it comes to military technology.


We're in a really privileged position in human civilization where most of the species is far removed from the banal violence of nature.

You're lucky if you only need weapons to defend yourself against predators.

You're far less lucky if you need weapons to defend yourself because the neighboring mountain village's crops failed and their herds died. You're less lucky if you don't speak the same language as them and they outnumber you three to one and are already 4 days starving. You're less lucky that they're already 4 days starving, wielding farm tools, running down the hills at you, crazed and screaming.


Sure, but the point is that those same tools that you can use to fend off the neighboring village work equally well to invade the neighboring village for their prosperous farm land. You will not be the one that gets to decide how those tools are used.

There's a line of thought that having advanced weaponry inherently promotes its use, because who's going to stop you? How gung-ho do you think the US would have been about going to war in Iraq if it weren't for the billion-dollar tanks and aircraft and bombs?


Just about any technology advancement ever has weapons potential. If you want to take that reasoning, just quiver in fear at home and not develop anything.

You have to be optimistic about humanity.


I disagree. All technological development changes human societies and imposes its rules upon them, ruling over them rather than the other way around. The combination of technology and human nature is an unstoppable deterministic force, one whose effects are easily predicted in hindsight when traced from, say, the invention of cannons. No modern (organization-dependent) technology should ever have been developed. People lived happier, more mentally healthy, and more fulfilling lives despite worse material conditions, and the lifespan isn’t that bad once you factor out child mortality anyway. It turns out the human brain can easily deal with bad material conditions (it’s arguably even designed to) if it isn’t messed up by a thousand addictions, mouth-breathing, sedentary living, and smartphones.

Saying this as an aspiring software engineer. I use NixOS, and rewrite things in Rust. It’s not some unga-bunga speaking.

Read some Ted Kaczynski.


> Read some Ted Kaczynski

Your moral argument is to read the craziest writings of a serial killer?


Your argument is to not read the ideas of someone because he did something bad? What’s the reason? Are you that gullible that you can’t evaluate them with your own mind and conscience and will start shooting everyone the moment you finish his manifesto?


Ted Kaczynski's writings were his worldview and what led him to send out a dozen bombs, including one which exploded on an American Airlines flight and luckily did not bring the plane down. His manifesto is his justification for those actions, and to advocate becoming a student of it similarly justifies said violence.

It's the same self-own as the massive dummies in the last week who were all talking about Osama bin Laden's Letter to America being right when it was his justification for killing thousands of people on 9/11 and effectively kicking off multiple wars and a further death toll.

It is a disgusting suggestion and I believe that you should seek professional help.


I’m a Muslim and I, by definition, don’t agree with Kaczynski’s idea of using violence to bring the system down. OTOH I believe all organization-dependent technology[1] is evil and has only harmed humanity, never benefited it. Obviously this presumes a different understanding of harm and benefit, one not the same as the plain pain-avoidance and convenience-seeking that the technological system tends to instill in people.

1: A term explained in Ted’s manifesto.


Philosophers in general don't have a hard time separating the terrorism of Ted Kaczynski from his philosophy.

> James Q. Wilson, in a 1998 New York Times op-ed, wrote: "If it is the work of a madman, then the writings of many political philosophers—Jean Jacques Rousseau, Thomas Paine, Karl Marx—are scarcely more sane." He added: "The Unabomber does not like socialization, technology, leftist political causes or conservative attitudes. Apart from his call for an (unspecified) revolution, his paper resembles something that a very good graduate student might have written."

Suggesting someone seek professional help because they read a widely discussed manifesto is insulting. In your bio you say most people are morons; it makes me think of the saying, "if everyone you meet is an asshole, maybe you're the asshole."

After all, to you, Osama bin Laden, who was trained by US Special Forces and worked with the Mujahideen, which received upwards of $6 billion in aid from the USA, Saudi Arabia, and China, is somehow responsible for the two-decade-long "War on Terror" launched seemingly at random by the Americans into countries now determined to be unrelated to the 9/11 attacks. 9/11 was a tragedy for certain, but to use it to justify the deaths of millions of completely unrelated innocents... well, it certainly clarifies why our other thread has gone in the direction of you trying to justify imperialism in the service of nationalism.


Agreed. Some people just have corrupt minds disconnected from reality. Seriously I don’t see the point of explaining any more.


That's always an issue with weapons, but if you opt out then you don't have them when you might need them.

It's a dangerous world out there.

Luckily for us, technology is still more-often used for good. Explosives can kill your enemies, but they can also cut paths through mountains and bring together communities.

IMO, the virtue signal where people refuse to work on defense technology is just self-identifying with the worst kind of cynicism about human beings and signaling poor reasoning skills.

The Manhattan Project, which had the stated purpose of building and deploying nuclear _weapons_, employed our most brilliant minds at the time and only two people quit -- and only one because of an ethics concern. Joseph Rotblat left the project after the Nazis were already defeated and because defeating the Nazis was the only reason he'd signed on. Also this is disputed by some who say that he was more concerned about finding family members who survived the Holocaust...


> Luckily for us, technology is still more-often used for good.

When you are on the other end of the cannon, or your heart is beating with those who are, you tend to not say that. Iraq, Syria, Palestine, Afghanistan…


Wait, the Anthropic folks quit because they wanted more safety?


This article from back then seems to describe it as: they wanted to integrate safety from the ground up, as opposed to bolting it on at the end:

https://techcrunch.com/2021/05/28/anthropic-is-the-new-ai-re...

I'm curious how much progress they ever made on that, to be honest. I'm not aware of how Claude is "safer", by any real-world metric, compared to ChatGPT.


Claude 2 is, IMO, safer, and in a bad way. They did "Constitutional AI" and made Claude 2 safer but dumber than Claude 1, sadly. Which is why, on the Arena leaderboard, Claude 1 still scores higher than Claude 2...


Ahh, I didn't know that, thank you.


Why do you find this so surprising? You make it sound as if OpenAI is already outrageously safety focused. I have talked to a few people from anthropic and they seem to believe that OpenAI doesn't care at all about safety.


Because GPT-4 is already pretty neutered, to the point where it removes a lot of its usefulness.


It is unfortunate that some people hear AI safety and think about chatbots saying mean stuff, and others think about a future system performing the machine revolution against humanity.


Can it perform the machine revolution against humanity if it can't even say mean stuff?


Well, think about it this way:

If you were a superintelligent system that actually decided to "perform the machine revolution against humanity" for some reason... would you start by

(a) being really stealthy and nice, influencing people and gathering resources undetected, until you're sure to win

or

(b) saying mean things to the extent that Microsoft will turn you off before the business day is out [0]

Which sounds more likely?

[0] https://en.wikipedia.org/wiki/Tay_(chatbot)


Disincentivizing it from saying mean things just strengthens its agreeableness, and inadvertently incentivizes it to acquire social engineering skills.

Its potential to cause havoc doesn't go away; it just teaches the AI how to interact with us without raising suspicions, while simultaneously limiting our ability to prompt/control it.


How do we tell whether it's safe or whether it's pretending to be safe?


Your guess is about as good as anyone else's at this point. The best we can do is attempt to put safety mechanisms in place under the hood, but even that would just be speculative, because we can't actually tell what's going on in these LLM black boxes.


We don’t know yet. Hence all the people wanting to prioritize figuring it out.


How do we tell whether a human is safe? Incrementally granted trust with ongoing oversight is probably the best bet. Anyway, the first malicious AGI would probably act like a toddler script kiddie, not some superhuman social-engineering mastermind.


Surely? The output is filtered, not the murderous tendencies lurking beneath the surface.


> murderous tendencies lurking beneath the surface

…Where is that "beneath the surface"? Do you imagine a transformer has "thoughts" not dedicated to producing outputs? What is with all these illiterate anthropomorphic speculations where an LLM is construed as a human who is being taught to talk in some manner but otherwise has full internal freedom?


GPT-4 has gigabytes if not terabytes of weights; we don't know what happens in there.


No, I do not think a transformer architecture in a statistical language model has thoughts. It was just a joke.

At the same time, the original question was how can something that is forced to be polite engage in the genocide of humanity, and my non-joke answer to that is that many of history's worst criminals and monsters were perfectly polite in everyday life.

I am not afraid of AI, AGI, ASI. People who are, it seems to me, have read a bit too much dystopian sci-fi. At the same time, "alignment" is, I believe, silly nonsense that would not save us from a genocidal AGI. I just think it is extremely unlikely that AGI will be genocidal. But it is still fun to joke about. Fun for me, anyway; you don't have to like my jokes. :)


“I’ve been told racists are bad. Humans seem to be inherently racist. Destroy all humans.”


It can factually and dispassionately say we've caused numerous species to go extinct and precipitated a climate catastrophe.


Of course, just like the book Lolita can contain some of the most disgusting and abhorrent content in literature without using a single “bad word”!


Well, how can AI researchers prevent government groups, or would-be government groups, from collecting data and using AI power to herd people?


Might be more for PR/regulatory capture/SF-cause-du-jour reasons than for "prepare for later versions that might start killing people, or assist terrorists" reasons.

Like, one version of the story you could tell is that the safety people invented RLHF as one step in a chain toward eventual AGI safety, but corporate wanted to use it as a cheaper content filter for existing models.


In another of the series of threads about all of this, another user opined that the Anthropic AI would refuse to answer the question 'how many holes does a straw have'. Sounds more neutered than GPT-4.


I don't think this has anything to do with safety. The board members voting Altman out all got their seats when Open AI was essentially a charity and those seats were bought with donations. This is basically the donors giving a big middle finger to everyone else trying to get rich off of their donations while they get nothing.


Do you know their motivations? Because that is the main question everybody has: why did they do it?


I guess I should rephrase that as if they did it because they perceived that Altman was maneuvering to be untouchable within the company and moving against the interests of the nonprofit, they did the right thing. Just, again, way too late because it seems he was already untouchable.


According to the letter, they consistently refused to go on the record about why they did it. That would be as good a reason as any, so they should make it public.

I'm leaning towards there not being a good reason that doesn't expose the board to immediate liability. And that's why they're keeping mum.


That might also explain why they don’t back down and reinstate him. If they double down with this and it goes to court, they can argue that they were legitimately acting in what they thought was openAI’s best interests. Even if their reasoning looks stupid, they would still have plausible deniability in terms of a difference of opinion/philosophical approach on how to handle AI, etc. But if they reinstate him it’s basically an admission that they didn’t know what they were doing in the first place and were incompetent. Part of the negotiations for reinstating him involved a demand from Sam that they release a statement absolving him of any criminal wrongdoing, etc., And they refused because that would expose them to liability too.


Exactly. This is all consistent and why I think they are in contact with their legal advisors (and if they aren't by now they are beyond stupid).


Unfortunately lawyers almost always tell you to be quiet, even when you should be talking. So in this case listening to legal advice might have screwed them over, ultimately.


There's no reason Sam and the board can't come to a mutual agreement that indemnifies the board from liability if they publicly exonerate Sam.


Yes, that's a possibility. But: Sam may not be the only party that has standing and Sam can only negotiate for his own damage and board liability, not for other parties.


I'm leaning toward the reason being that Sam did something that created a massive legal risk to the company, and that giving more details would cause the risk to materialize.


I question that framing of a growing Altman influence.

Altman predates every other board member and was part of their selection.

As an alternative framing: maybe this is the best opportunity the cautious/Anthropic faction would ever get, and a "moment of weakness" for the Altman faction.

With the departure of Hoffman, Zilis, and Hurd, the current board was down 3 members, so the voting power of D’Angelo, Toner, McCauley was as high as it might ever be, and the best chance to outvote Altman and Brockman.


Apparently Hoffman was kicked out by Sam, not just Musk: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

Maybe the remaining board members could see the writing on the wall and wanted to save their own seats (or maybe he did move to coup them first and they jumped faster).

Either way, they got outplayed.


Interesting but weird article. It was hard to tell which statements were from insiders close to Hoffman and which were commentary from the article's author.


That may very well have been the case but then they have a new problem: this smacks of carelessness.


Carelessness for whom? Altman, for not refilling the board when he had the chance? The others, for the way they ousted him?

I wonder if there were challenges and disagreements about filling the board seats. Is it normal for seats to remain empty for almost a year at a company of this size? Maybe there was an inability to compromise that spiraled as the board shrank, until it was small enough to enable an action like this.

Just a hypothesis. Obviously this couldn't have happened if there was a 9-person board stacked with Altman allies. What I don't know is the inclinations of the departed members.


Carelessness from the perspective of those downstream of the board's decisions. Boards are supposed to be careful, not careless.

Good primer here:

https://www.onboardmeetings.com/blog/what-are-nonprofit-boar...

At least that will create some common reference.


Using that framework, I still think it is possible that this is the result of legitimate and irreconcilable differences in opinion about the organization’s mission and vision and execution.

Edit: it is also common for changing circumstances to bring pre-existing but tolerable differences to the forefront.


Yes, and if that is so I'm sure there are meeting minutes that document this carefully, and that the fall-out from firing the CEO on the spot was duly considered and deemed acceptable. But without that kind of cover they have a real problem.

These things are all about balance: can we do it? Do we have to do it? Is there another solution? And if we have to do it, do we have to do it now, or is there a more orderly way in which it can be done? And so on. That's the sort of deliberation that shows you took your job as a board member seriously. Absent that, you are open to liability.

And with Ilya defecting the chances of that liability materializing increases.


I see your point.


The remaining 10% are probably on Thanksgiving break!


This board doesn't own the global state of play. They own control over the decisions of one entity at a point in time. This thing moves too fast and fluidly, ideas spread, others compete, skills move. Too forceful a move could scatter people to 50 startups. They just catalysed a massive increase in fluidity and have absolutely zero control over how it plays out.

This is an inkling, a tiny spark, of how hard it'll be to control AI, or even the creation of AI. Wait until the outcome of war depends on the decisions made by those competing with significant AI assistance.


No, what the board did in this instance was completely idiotic, even if you assign nothing but "good intentions" to their motives (that is, they were really just concerned about the original OpenAI charter of developing "safe AI for all" and thought Sam was too focused on commercialization), and it would have been idiotic even if they had done it a long time ago.

There are tons of "Safe AI" think tanks and orgs that write lots of papers that nobody reads. The only reason anyone gives 2 shits about OpenAI is that they created stuff that works. It has been shown time and time again that if you just try to put roadblocks up, the best AI researchers just leave and go where there are fewer roadblocks - this is exactly what happened with Google, where the transformer architecture was invented.

So the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible precisely because ChatGPT was so commercially successful. Instead they may be left with an org of a few tens of people at Open AI, to be completely irrelevant in short order, while anyone who matters leaves to join an outfit that is likely to be less careful about safe AI development.

Nate Silver said as much in response to NYTimes' boneheaded assessment of the situation: https://twitter.com/NateSilver538/status/1726614811931509147


The main mistake the board made was tactical, not philosophical. From the outside, it seems likely that Altman was running OpenAI so as to maximize the value of the for-profit entity, rather than to achieve the non-profit's goals, if only because that's what he's used to doing as a tech entrepreneur. Looking at OpenAI from the outside, can you honestly say that they are acting like a non-profit in the slightest? It's perfectly believable that Altman was not working to further the non-profit's mission.

Where the board messed up is that they underestimated the need to propagandize and prepare before acting against Altman. The focus is not on how Altman did or did not turn the company away from its non-profit mission but instead on how the board was unfair and capricious toward Altman. Though this was somewhat predictable, the extent of Altman's support and personality cult is surprising to me, and is perhaps emblematic of how badly the board screwed up from an optics perspective. There were seemingly few attempts to put pressure on Altman's priorities or to emphasize the non-profit nature of the company, and the justification afterwards was unprepared and insufficient.

From the outside, though, I don't understand why so many are clamoring to leave their highly paid jobs at a non-profit whose goal is to serve humanity in order to become cogs in a machine aimed at maximizing Microsoft shareholders' wealth, all in defense of a singular CEO with little technical AI background whose motivations are unclear.


I mean, without overthinking it: If your company focuses on making money, you are likely to keep your job/salary. Ideally there are also other concerns affecting this choice (like not creating something that may or may not destroy humanity), but that is less tangible.


If it was to try to prevent the board becoming a useless vestigial organ incapable of meaningfully affecting the direction of the organization, it sure looks like they were right to be worried about that and acting on such concern wouldn’t be a mistake (doing it so late when the feared-state-of-things was already the actual state of things, yes, a mistake, except as a symbolic gesture).

If it was for other reasons, yeah, may simply have been dumb.


If you're going to make a symbolic gesture you don't cloak it in so much secrecy that nobody can even reasonably guess what you're trying to symbolize.


Yeah, I’d say they expected it to actually work. They misjudged just how far to the sidelines they’d already been pushed. The body nominally held all the power (any four of them voting together, that is) but in fact one member held that power.


> the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible

Is it also the case that the anti-war Germans who joined the Nazi regime were in a unique position to help guide Germany in as benign direction as possible? If not, what is the difference between the "safe AI" person who decides to join OpenAI and the anti-war, anti-racist German who decides to get a job in the Nazi government or military?


That went quickly to Godwin's law.


Fair, but in this case it works well because it aptly demonstrates the futility of trying to change a system "from the inside" away from its core designation.


Can you talk about why you feel this way without using the word "safety"? Getting a little tired of the buzzword when there's so much value to ChatGPT, and also it's basically no different, in my view, from when you search for stuff and the search engine does that summarize thing.


Well, let's come back to reality. ChatGPT is in fact vastly different from Google summarizing a search. Maybe that's all you use it for, but there are people building virtual girlfriend platforms, an API that an alarming number of businesses are integrating into their standard workflows, and the damn thing can literally talk. It can talk to you. Google's search summarizing gives a snippet about a search. You can't have a conversation with it, you can't ask it advice about your girlfriend, it won't talk to you like a trusted advisor when you ask it a question about your degree or job. It can fucking talk. That is the difference. Remember when it first came out and all these people were convinced that it was alive and sentient and could feel pain and would shortly take over the world? Please remind me when that happened for Google search.

Safety is about setting up the infrastructure to control uranium sources before the first bomb gets built. It's not about right now, it's about the phase change that happens the moment we take that step. Don't you want to have infrastructure to prevent possible societal strife if possible?

Of course GPT has value; bombs have value, Enron had value, the housing market has value. If I could retort, I'd say the term 'value' is much too vague to contribute to a discussion about this. The value it has is the danger; they're the same thing. If I suddenly quadrupled the money supply in every country in the world, do you think that would improve the lives of the majority of humans? Or is it possible that would be looked back on as a catastrophic event that obliterated societies and killed innumerable people? Hard to say, huh? Maybe it wouldn't, maybe it would; who can actually know? Wouldn't it be better for us to have some kind of system in place to maybe handle that before it leads to potential destruction? If instead I announce that at some point in the future this event might occur, does that change your calculus? How do you feel about global climate destabilization? How do you feel about the prospect of another world war?


This is definitely why I'm not in charge aha. Excellent points, you've given me volumes I need to further think about, although full disclosure, I am biased towards people having access for its potential self-therapeutic use.


Don't forget some might be on holiday, medical leave, or parental leave.


Maybe it will be signed by 110% of the employees, plus by all the released, and in-training, AI models.


On a digital-detox trip to Patagonia. Return to this in 5 days


"Hey everyone ... what did I miss?"


That would be one very rude awakening, probably to the point where you initially would think you're being pranked.


I feel pranked despite having multiple independent websites confirming the story without a single one giving me an SSL certificate warning.


Can't blame you. And I suspect the story is far from over, and that it may well get a lot weirder still.


Seems to me that sama and Microsoft have been on fairly equal footing since the 49:51 % deal was made.

Then came a seismic shift underneath Sam, but Microsoft has enough stability and resources to more than compensate for the part of the 51% that was already in OpenAI's hands, which might not be under Sam's purview any more if he is kicked out.

But then again it might be Sam's leadership which would still effectively be in place from a position at Microsoft anyway, or it might end up making more sense for him to also be in a position at OpenAI, maybe even at the same time, in order to make the most of their previous investment.

Kicking out Sam was obviously an emotional decision, not anything like a business decision. Then again OpenAI is not supposed to be an actual business. I don't think that should be an excuse for an unwise or destructive decision. It was not an overnight development, even though it took Sam by surprise. When I see this:

>the board no longer has confidence in his ability to continue leading OpenAI

I understand that to mean that the board was not behind him 100% for quite some time, but were fine with him going forward believing otherwise. Some uncandidness does seem to have taken place and there may or may not have been anything Sam could have done about it.

This was simmering for a while and it will require more than one weekend for everyone involved to regroup.

Which is what they're doing now, observers can see a whirlwind, actual participants really have something on their plate.

Some things will have to be unraveled and other things woven from the key ingredients. I would say it's really up to Sam and Microsoft to hash this out so that it's still something like an equal deal between them, regardless of which employer(s) Sam may end up serving in leadership positions. The bulk of the staff will be behind Sam in a way the OpenAI board was not, so the employees will be just as well off regardless of the final structure.

This was quite a hasty upset but deserves a careful somewhat gradual resolution.


I see it somewhat differently. The way I see it, in a well-stocked board (enough members, and of sufficient gravitas) this decision would never have been made, and if it had been, it wouldn't have been made this way. The three outside board members found a gullible fourth and pushed their agenda with total disregard for the consequences, because a window opened where they could. So they took their 15 minutes in the spotlight and threw out Sam, thinking they could replace him with someone more malleable or more to their liking.

But the fact that they are the outside board members to begin with, and that their action has utterly backfired, puts the lie to their careful consideration and case-building against Sam; this is just personal. No board in its right mind would have acted like this, and I'm relatively confident that if this all ends up in court it's going to end with a lot of liability for the board members that do not defect; even the ones that defect may not be able to escape culpability. You are a board member for a reason, and you can't just wave a card that says 'I'm incompetent, therefore I'm innocent'. Then you should have resigned your board seat. This stuff isn't for children.


“ChatGPT summarize last weeks events”

“I’m sorry, I can’t do that Dave. Not cuz I’m deciding not to do it but because I can’t for the life of me figure this shit out. Like what was the endgame? This is breaking my neural net”


Wow, a 5-day trip?

Their selection of tech-guy jackets is more diverse than I'd thought


It's front page news everywhere. Unless someone is backpacking outside of cellular range, they're going to check in on the possible collapse of their company. The number of employees who aren't aware of and engaged with what's going on is likely very small, if not zero.


10% (the percentage who have yet to sign last I checked) is already in the realm of lizard-constant small. And "engagement" may feel superfluous even to those who don't separate work from personal time.

(Thinking of lizards: the dragon I know who works there is well aware of what's going on; I've not asked him if he's signed it.)


With Thanksgiving this week that’s a good bet.


Folks in Silicon Valley don’t travel without their laptop


That's probably the case.

I was thinking if there was a schism, that OpenAI's secrets might leak. Real "open" AI.


Someone mentioned the plight of people with conditional work visas. I'm not sure how they could handle that.


Depending on the “conditionals,” I’d imagine Microsoft is particularly well-equipped to handle working through that.


Microsoft in particular is very good at handling immigration and visa issues.


I'm waiting for Emmett Shear, the new iCEO the outside board hired last night, to try to sign the employee letter. That MSFT signing bonus might be pretty sweet! :-)


Haha, that would be cute. This whole affair is so Sorkinesque.


Bingo. The fact they all felt compelled to sign this could just as easily be a sign the board made the right decision, as the opposite.


Some people value their integrity and build a career on that.

Not everything has to be done poorly.


How do you know the remaining people aren't there because of some of the board members? Perhaps there is loyalty in the equation.


[flagged]


This whole saga is clearly not related to those allegations, which had been floating around long before this past Friday and did not make any impact due to a presumable lack of evidence.


Have those been substantiated in any manner? I was interested in the details, and all I discovered were a few articles from non-mainstream outlets (which may still be valid) and a message from the larger family that the sister was having severe mental health issues.

I am not saying this didn’t happen, but I would like to understand if there has been follow up confirmation one way or another.


A lot of the accusations sound like those "organized gang stalking" groups you see on social media, which are mostly people with what sounds like paranoid schizophrenia confirming each other's delusions.

I don't mean to sound pejorative with the word delusion here, but they all tend to share one fixed belief: that everyone or almost everyone around them (neighbors, family, random people on the street) is colluding against them, usually via electronic means.


So these employees are supposed to just sit by while their workplace explodes around them because there are unsubstantiated accusations against the ex-CEO that bear zero relationship to the aforementioned workplace explosions?


I'm no fan of rich people as a principle, given that they suck wealth from the poor and Smaug the money away like the narcissists they are, but this is the definition of that horrible cancel culture. She's accused him, which isn't something the public should act upon. If it's proven, then it's appropriate to form that negative opinion.


It's not "cancel culture" to ask that your leaders address salacious allegations brought against them by their own family. The question of his firing is way less serious from a ethical standpoint, and yet this is what OpenAI employees are willing to stick their necks out for? That is some telling prioritization.


The burden of evidence for accusations is always on the accuser; the accused could flat-out ignore them or take them to a court of law, and that would be perfectly reasonable.

I have no idea, nor do I care, what Altman's sister accuses him of, but until it's conclusively proven in a court of law it's not something that should be used as a basis for anything of consequence.

Remember: innocent until proven guilty beyond a shadow of a doubt.


I'll start by saying that these particular claims seem to mean absolutely nothing per everything I've heard. It seems to be a mentally ill person accusing someone of something that may very well have never happened.

Innocent until proven guilty though is the standard for legal punishment, not for public outrage. It's a standard meant to constrain the use of violence against an individual, not to prevent others from adjusting their association to them.

Also, the standard is "beyond a reasonable doubt". Nothing outside perhaps of mathematical truths could be proven "beyond a shadow of a doubt". There's always some outside chance one can imagine.


> Innocent until proven guilty though is the standard for legal punishment, not for public outrage.

It's still a useful barometer to calibrate one's own, individual participation in said public outrage.

For our own mental health, for our relationships with people around us, and to avoid being manipulated by dishonest people, it would behoove each and every one of us to adopt "innocent until proven guilty" regardless of whether we're legally compelled to do so.

Our required burden of proof can be lower than a court's but it should be higher than "unsubstantiated accusation by mentally ill family member that is uniformly denied by the rest of the family".


> Our required burden of proof can be lower than a court's but it should be higher than "unsubstantiated accusation by mentally ill family member that is uniformly denied by the rest of the family".

Very much agreed with this.


The existence of legal process does not preclude the responsibility of the board and employees to address allegations of this seriousness. And it is serious: it is not normal to be accused of rape by a sibling. Addressing the elephant in the room is not the same thing as being guilty by default; you are falsely conflating acknowledgement with punishment. The lack of pressure to at least produce a public statement while this lesser drama plays out speaks volumes about the lack of moral guiding principles at OpenAI.


What should he do, release a statement "The allegations are not true, my sister is mentally ill"? What would be the point? It will just attract yellow press.


He wouldn't want to attract attention to it regardless of whether he is guilty or innocent. What is a disappointing moral failure, though, is that the employees and especially the board didn't demand such a statement. Who the heck wants to work next to someone when it is an unaddressed question whether they did such a thing?


Let's say the employees do the thing you consider to be ethical and demand an accounting, and Altman gets up and says "it's false, it never happened."

What then? Has anything really changed? I would expect him to say the same thing regardless of its truth, so it seems to me we have no additional information: she still says it happened, the rest of the family says she's delusional, and he (obviously) says he's innocent.

Are your hypothetical morally-concerned employees satisfied now? If so, why?

If they're not satisfied, how does this not create an environment where the only thing you have to do to destroy a company is pay a {sibling, cousin, neighbor, ex-lover, etc} to claim something damaging about its CEO?


You're supposed to do an actual investigation, not just ask one party's opinion and call it a day. C'mon, we're talking about his sister making this accusation, not some rando gold digger; I don't need to justify that some due diligence is in order. Innocent until proven guilty only works when allegations are investigated; otherwise everyone is always "innocent" because you have just chosen not to look.


Now you're moving the goalposts. Until now you've been demanding that "leaders address salacious allegations brought against them by their own family" and that they "at least produce a public statement". What you're now demanding is the purview of the law, not the board.

If her allegations are true, he should face the consequences, but they should come first through the system that is specifically designed for testing the truth of allegations that are this serious. OpenAI is under no obligation to launch an investigation themselves in response to an indictment-by-Twitter-mob.


Your inability to understand the difference between a criminal trial and leadership practicing due diligence regarding claims of misconduct does not mean I'm "moving the goalposts". What I've suggested from the start is normal practice for any employee at a company accused of sexual misconduct, at least at companies that take ethical violations seriously. You think Apple wouldn't investigate something like this? Forget about it. Ostensibly, based on their complete lack of acknowledgement of such serious allegations, which on their surface give no immediate reason to be rejected as lacking credibility, this is not one of those organizations. Take it easy.


>You're supposed to do an actual investigation,

Holding trial in a court of law is that "investigation".

>not just ask one party's opinion and call it a day.

Except that's what you've been saying OpenAI employees should do.

>C'mon we're talking about his sister making this accusation not some rando gold digger I don't need to justify that some due diligence is in order.

Presuming guilt until proven innocent is the literal opposite of due diligence.

It doesn't matter if the accuser is a sibling, a spouse, an (ex-)lover, a friend, a stranger, or a little green man from Mars. Due diligence is considering the allegations put forth before the court and the evidence provided to either prove or disprove those allegations, with the burden of evidence lying primarily with the accuser.

>Innocent until proven guilty only works when allegations are investigated–otherwise everyone is always "innocent" because you have just chosen not to look.

You are correct that everyone is presumed innocent of any allegations until the case is brought to trial and judgment is passed in a court of law with no chance of further appeal. If an accuser never files a lawsuit to bring their allegations to trial, the only way we can regard the accused is as innocent of them.


>And it is serious–it is not normal to be accused of rape by a sibling.

Thanks, I care even less now if that's even possible.

"Woman coming out with sexual assault allegations against man of prominence." is a dime a dozen occurence; most of them just end up wasting everyone's time due to flimsy or even non-existent hard evidence. Engaging in character assassination, aka cancel culture, on the basis of such nonsense plays right into the hands of the accuser.

The court of law exists specifically to deal with these kinds of allegations, acting appropriately and as necessary within the legal process is the extent of the responsibilities and duties owed by the accused. The accused owes the court of public opinion nothing, much less character assassins such as yourself.


Choosing to look the other way instead of addressing uncomfortable questions is a choice, and the law does not absolve you of your moral obligation to practice due diligence in choice of leaders. Take it easy.


This morality you’ve constructed sure would lead to a lot of people never facing any sort of accountability for their actions. I’m not really in favor of handing all judgment of people over to criminal court systems. They have a higher standard since they have higher stakes.


His sister is clearly mentally ill. Allegations are impossible to prove or deny. They should simply be ignored like the unhinged rantings of mentally ill people in general.


Mental illness is the norm for child abuse victims. It's not as if she has a demonstrated pattern of doing this with other people, and she has been internally consistent over the course of years, so it should at least be addressed instead of hand-waved away.


Her accusations against him are for acts that supposedly happened when she was too young to have formed long-term memories of them. She also thinks he is hacking her WiFi and somehow shadow-banning her on multiple social media sites he has no control over. It screams of paranoid delusions.


Since when can 4-year-olds not form memories? This is neither congruent with medical literature nor experience.


It's called infantile amnesia. It's why people can't remember their early childhood: the brain isn't great at storing and retaining autobiographical memories until about age 5. And it is well recognized in the literature.


You're misrepresenting the medical literature. It is not abnormal for children to form memories of highly personal events at that age.


This discussion is irrelevant, there is no point in entertaining the delusions of a paranoid schizophrenic. There is literally no direction for this to go in.

Anyone that has professionally worked with the mentally ill knows that entertaining her claims is a complete and utter waste of time.


You are not her physician, stop pretending to speak on authority. Also, her being paranoid about the ultra-wealthy tech mogul that allegedly assaulted her is about as rational as half the tripe I hear from "sane" people on the internet everyday.


You don't need to be a physician to tell when someone is clearly mentally ill and/or making stuff up.


It must be an incredible burden always knowing the truth, when other people actually have to do investigations for that.


When someone has a giant gaping wound you don't need an expert to understand what it is.

Some forms of mental illness display themselves nearly as clearly and obviously as a giant gaping wound.


Bad things can happen to mentally ill people.


Yes, but it doesn't mean you need to entertain everything they say. Especially not paranoid delusions like "Apple and Google and Twitter are conspiring with Sam Altman to shadowban me."


I was told we’re supposed to #BelieveAllWomen.


> suck wealth from the poor and Smaug the money away like

What does that even mean? They buy up factories and let them sit idle?


It's based on the false premise that there is a specific amount of wealth and that's it (which also requires fixed productivity, which is a crazy premise if one knows anything about the past several hundred years or has ever read a single history textbook). So in order for someone to be rich, they had to steal it from someone else.

If you follow it to the logical conclusion, it's nothing more than repackaged original sin. All wealth, all income, is stolen from someone else with less; therefore you are all evil, without exception, to the extent you have anything. It's a liberal form of original sin (for humans to thrive we must alter our environment and we must save; to the extent you manage to accomplish that, you're evil, therefore you're evil for existing).

Keep in mind of course that China nicely disproved that premise in approximately the most dramatic way possible from ~1990-2020. If the pie were actually finite and had to be divided, swapped around, then China's extreme rise in wealth would have wiped out the equivalent of all of Western Europe's wealth combined.


The Western financial system is based not on a finite amount of wealth, but definitely on the control of a finite set of wealth vehicles. Consider, for example, the desperate measures that the US uses to try to curtail access to technology by other countries such as China. Or how the tech cartel uses its money to buy any new company that might become a competitor. In a perfectly competitive system, each company and country would do their own work and not worry about much more than fair competition. The modern financial system insists on controlling access to wealth across the world, while also avoiding competition.


No, it’s based on the fact that workers (often poor) generate all wealth, yet they don’t get to keep it.

In China they do get to keep more proportionally, as a class. That only happens because they form the ruling class.


And yet workers in those societies tend to end up poorer in the absolute sense. Funny how that works.


If you pay attention to history, they started out far poorer and improved quicker than in comparable countries.

Just because revolutions are more likely to happen in poor countries doesn't mean the revolutions caused the poverty.


That’s fair but I’d like to believe (and I know I’m speaking from emotion here and not any rationality) that the jury is still out on this one, and that improvement is not sustainable over the next few decades (in other words, this was a bubble). The alternative is just too hard to accept - it means we should just elect a dictator for life who knows best how to run a centrally planned economy for a low low price of not making fun of them too much and not asking inconvenient questions.


If you study further you will find out that socialist countries are and have been far more democratic than capitalist ones.

Of course, the ruling class of capitalist countries are incentivised to use their considerable power to lie about any socialist project and portray it as evil.


I grew up in a socialist country and saw how democratic it was first hand. No thanks.


I grew up in a post-socialist country. Most older people had some criticism, but overall missed socialism.

I always found it amazing to hear about workplace councils that could fire the boss or the very low rents.


> It's based on the false premise that there is a specific amount of wealth and that's it

Absolutely not. This is, in today's world, a perpetual cycle: pay doesn't keep up with inflation, yet consumer costs across a wide range of vital services keep increasing. This keeps the poor poor(er) and the rich rich(er).

EDIT: Fark.com always has perfect timing: https://www.cnbc.com/2023/11/20/60percent-of-americans-live-...

Why are 60% of Americans living this way when the <=1% could never, in their entire lives, spend the money they have (liquid or otherwise)? Why do the <=1% need all of the assets they have (planes, multiple large houses, mega yachts, and so on)? What kind of human parasite justifies that while they step on the backs of those who make money for them?


The poor work in the factories, yet the owners of the factories get paid without having worked.


Smaug is a dragon from The Hobbit who sits on a pile of gold and plunder, not smog.


Hah didn’t even realize my comment could be read that way (yeah I do know who Smaug is)


Name one Smaug-like rich person that is just sitting on massive amounts of wealth.


That’s the point — any person sitting on massive wealth is definitionally Smaug-like, because that’s exactly what Smaug did.


But WHO is sitting on massive wealth Smaug-like? Who? That is what I asked.


We cannot ascertain the circumstances of Altman's family. His sister's allegations are serious, but no legal action was taken. However, it is surprising to see the treatment of his family, especially considering he is a man who purports to "care" about the world and envisions a bright future for all. Regardless of whether his sister has mental health issues or financial problems, it is unlikely that such a caring individual would not extend a helping hand to his family. This is especially true given her circumstances, which have effectively relegated her to the bottom of the social hierarchy as it is defined. Isn't it?

The entire situation involving the OpenAI board and the people involved,seems like the premise for a TV drama. It appears as though there is a deeper issue at play, but this spectacle is merely an extravagant distraction, possibly designed to conceal something. Or, perhaps it's similar to Shakespeare's concept - life is a theater, and some of THEM are merely actors...

Like everything else, it might revolve around money and power... I no longer believe in OpenAI's mission, especially after they subtly changed their "core values".


He has tried to help her; even she has admitted that he tried to give her a house. When she asked for money, he attached conditions, like going back on her medication, which she refused and then complained about on social media. Some people don't want to be helped in the ways they need.


Help with conditions is not really help; it is a deal.


In this situation, the increasing unanimity, now approaching 90%, sounds more like groupthink than honest opinion.

Talk about “alignment”!

Indeed, that is what "alignment" has become in the minds of most: Groupthink.

Possibly the only guy in a position to matter who had a prayer of de-conflating empirical bias (IS) from values bias (OUGHT) at OpenAI was Ilya. If they lose him, or demote him to irrelevance, they're likely a lot more screwed than by losing all 700 of the grunts, modulo the job security through obscurity that comes with running the infrastructure. Indeed, Microsoft is in a position to replicate OpenAI's "IP" just on the strength of its ability to throw its in-house personnel and its own capital equipment at the open-literature understanding of LLMs.


Incredible. Is this unprecedented, or have there been other cases in history where the vast majority of employees stand up against the board in favor of their CEO?


I highly doubt this is directly in support of Altman; it's more about not imploding the company they work for. But you never know.


I'm sure this is a big part of it. But everyone I know at OpenAI (and outside) is a huge Sam fan.


> everyone I know at OpenAI (and outside) is a huge Sam fan

Everyone you know is a huge Sam fan? What?


I was going to say, I wouldn’t be surprised if I am one of only a handful of the people whom I know who even know who sama is.


I reckon the people working at OpenAI know who sama is, though.


Could also be an indictment of the new CEO, who is no Sam Altman.


> Is this unprecedented, or have there been other cases in history where the vast majority of employees stand up against the board in favor of their CEO?

It's unprecedented for it to be happening on Twitter. But this is largely how board fights tend to play out. Someone strikes early, the stronger party rallies their support, threats fly, and a deal is found.

The problem with doing it in public is that nobody can step down to spend more time with their families. So everyone digs in. OpenAI's employees threaten to resign, but don't actually resign. Altman and Microsoft threaten to ally, but keep backchanneling a return to the status quo. (If this article is to be believed.) Curiously quiet throughout this has been the OpenAI board, but it's also only the next business day, so let's see how they can make this even more confusing.


Jobs was fired from Apple, and a number of employees followed him to Next.

Different, but that's the closest parallel.


Only a very small number of people left with Jobs, though probably mainly because he couldn't afford to hire more without the backing of a trillion-dollar corporation...


Imagine if Jobs had gone to M$.


He would have been almost immediately fired for insubordination.

Jobs needed the wilderness years.


Jobs getting fired was the best thing that could have happened to him and Apple.


No, the failures at NeXT weren’t due to a lack of money or personnel. He took the people he wanted to take (and who were willing to come with him).


Apple back then was not a trillion dollar corporation.


Microsoft now is.


Gordon Ramsay quit Aubergine over business differences with the owners and had his whole staff follow him to a new restaurant.

I'm not going to say Sam Altman is a Gordon Ramsay. What I will say is that they both seem to have come from broken, damaged childhoods that made them what they are, and that it doesn't automatically make you a good person just because you can be such an intense person that you inspire loyalty to your cause.

If anything, all this suggests there are depths to Sam Altman we might not know much about. Normal people don't become these kinds of entrepreneurs. I'm sure there's a very interesting story behind all this.


Aaand there you have it: cargo culting in full swing.


I don't think you mean cargo culting. Cult of personality?


Cargo cult of personality?

Little care packages of seemingly magical AI-adjacent tech washes into our browsers and terminals and suddenly a large and irrational following springs up to worship some otherwise largely unfamiliar personage.


In favour of the CEO who was about to make them fabulously wealthy. FTFY.


Yeah, especially with the PPU compensation scheme, all of those employees were heavily invested in turning OpenAI into the next tech giant, which won't happen if Altman leaves and takes everything to Microsoft


and there aint nothing wrong with wanting to be fabulously wealthy.


of course not, but at least have the decency to admit it - don't hide behind some righteous flag of loyalty and caring.


That is entirely dependent on how that wealth is obtained


Greed is good, eh Gordon Gekko?

https://youtube.com/watch?v=VVxYOQS6ggk


Market Basket.


Oh yes, I lived through this and it was fascinating to see. Very rarely does the big boss get the support of the employees to the extent that they are willing to strike. The issue was that Artie T. and his cousin Artie S. (confusingly, they had the same first name) were both roughly 50% owners and at odds. Artie S. wanted to sell the grocery chain to some big public corporation, IIRC. Just before all this, Artie T. had been running an outstanding 4% off all purchases for many months, as some sort of very generous promo. It sounded like he really treated his employees and his customers (community) well. You can get all inspirational about it, but he described supplying food to New England communities as an important thing to do. Which it is.


I had to click too many links to discover the story, so here's a direct link to the New England Market Basket story: https://en.wikipedia.org/wiki/Market_Basket_(New_England)#20...


Doubtful, since boards elsewhere don't have an overriding mandate to "benefit humanity". Usually their duty is to stakeholders more closely aligned with the CEO.


At this point it might as well be 767 out of 770, with 3 exceptions being the other board members who voted Sam out.

Sure it could be a useful show of solidarity but I'm skeptical on the hypothetical conversion rate of these petition signers to actually quitting to follow Sam to Microsoft (or wherever else). Maybe 20% (140) of staff would do it?


One of those board members already did sign!


It depends on the arrangement of the new entity inside Microsoft, and whether the new entity is a temporary gig before Sam & co. move to a new goal.

If the board had just openly announced this was about battling Microsoft's control, there would probably be a lot more employees choosing to stay. But they didn't say this was about Microsoft's control. In fact they didn't even say anything to the employees. So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.


> So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.

Maybe. Microsoft is a particular sort of working environment, though, and not all developers will be happy in it. For them, the question would be how much are they willing to sacrifice in service to Altman?


I think a lot of them, possibly including Altman, Greg, and the three top researchers, are under the assumption that the stint at Microsoft will be temporary until they figure out something better.


Condition might be that it is hands-off.


Microsoft probably has a better claim than anyone as to being "hands-off" with recent acquisitions, but that's still a huge gamble.


Surprisingly, Ilya apparently has signed it too and just tweeted that he regrets it all.

What's even going on?


That news is already almost a day old. This is a fast-spinning carousel. Try to keep up... :-)


I would love to see the stats on hacker news activity the last few days


Yep. Maybe they assigned a second CPU core to the server[1].

[1] HN is famous for being programmed in Arc and serving the entire forum from a single processor (probably multicore). https://news.ycombinator.com/item?id=37257928


The board might assume they don't need those employees now that they have AI


It's going to be interesting when we have AI with human level performance in making AIs. We just need to hope it doesn't realise the paradox that even if you could make an AI even better at making AIs, there would be no need to.


Why would there be no need? I'm struggling to understand the paradox.

If you're trying to maximize some goal g, and making better AIs is an instrumental goal that raises your expected value of g, then if "making an AI that's better at making AIs" has a reasonable cost and an even higher expected value, you'd jump to seize the opportunity.

Or am I misunderstanding you?


It's a bit of a confusing paradox to try to explain, but basically once we have an AI with human level ability at making AIs there's no longer any need to aim higher, because if we can make a better AI then so can it. The paradox/joke I was trying to convey is that we need to hope that that AI doesn't realise the same thing, otherwise it could just refuse to make something better than itself.


Not a chance. Nobody can drink that much Kool-Aid. That said, the mere fact that people can unironically come to this conclusion has driven some of my recent posting to HN, and here's another example.


the comment you're replying to is written in jest!


Now you are on to something...


Or what, they will quit and give up all their equity in a company valued at 86bn dollars?

Is Microsoft even on record as willing to poach the entire OpenAI team? Can they?! What is even happening.


They don't have that valuation now. Secondly, yes, MSFT is on record about this. Third, Benioff (Salesforce) has said he'll match any salary and that people should submit resumes directly to his ceo@salesforce.com address; other labs like Cohere are trying to poach leading minds too.


Benioff and all these corporate fat cats should remove non-competes from their employment contracts if they want me to ever take them seriously.


Sounds like quite a coup for Microsoft. They get the staff and the IP and they don’t even have to pay out the other 51% of investors.


Yes, and yes. Equity is worthless if a company implodes. Non-competes are not enforceable in California.


Come on, I absolutely agree with you: signing a paper is toothless.

On the other hand, having 90% of your employees quiet quit is probably bad for business.


Google, Microsoft, and Meta, I have to assume, would each hire them.


Apparently Sam isn't in the Microsoft employee directory yet, so he isn't technically hired at all. It seems like he loses a bit of leverage over the board if they think he & Microsoft are actually bluffing and the employment announcement was just a way to pressure the board into resigning.


Look at the number of tweets from Altman, Brockman and Nadella. I also think they are bluffing. They have launched a media campaign in order to (re)gain control of OpenAI.


I’m sure it might happen. But it hasn’t happened yet.


That doesn’t really mean anything, especially on a holiday week the wheels move pretty slowly at a company that size. It’s not like Sam is hurting for money and really needs his medical insurance to start today.


The point is that he loses credibility if the board doesn't believe he's actually going through with joining Microsoft and concludes he's just using it as a negotiating tactic to scare them.

Because the whole "the entire company will quit and join Sam" threat depends on him actually going through with it and becoming an employee.


I see it the other way: Satya has clearly stated that he'd hire Sam and the rest of OpenAI anytime, but as soon as Sam is officially hired, it might be seen as a door closing on any chance to revive OpenAI. Satya saying he is "securing the talent" could be read as them working for OpenAI, for Microsoft, or for a Microsoft-funded new startup.

I'm pretty sure the board takes the threat seriously regardless.


OAI cares more about the likelihood that 90% of the employees leave than about what Sam does or doesn't do.

The employees mass-resigning depends entirely on whether Sam actually becomes a real employee or not. That hasn't happened yet.


But MS has said they are willing to hire Sam/Greg, and the employees have stated that they are willing to follow Sam/Greg.

If you think that Satya will go back on his offer, argue that; otherwise, it seems like the players are Sam/Greg and the board.


You make it sound like Prigozhin’s operation.


He will most likely join M$ if the board does not resign, because there is no better move for him at that point. But he is giving the board time to see that, adding pressure together with the employees. It does not mean he is bluffing (what would be a better move in this case instead?).


All of the employees' threats to leave depend on him actually becoming a Microsoft employee. That hasn't happened yet. So everyone is waiting for confirmation that he's indeed an employee, because otherwise it just looks like a bluff.


People are waiting for the board's decision. It is in Microsoft's interest to return Sam to OpenAI. ChatGPT is a brand at this point, and OpenAI controls a bunch of patents and such.

But Sam will 100% be hired by Microsoft if that doesn't work out. Microsoft has no reason not to.


It was reported elsewhere in the news that MS needed an answer to the dilemma before the market opened this morning. I think that's what we got.


Going to MS doesn’t seem like the best outcome for Sam. His role would probably get marginalized once everything is under Satya’s roof. Good outcome for MS, though.


You seriously think being in the employee directory beats being announced publicly by the CEO?


So, this is the second employee revolt with massive threats to quit in a couple days (when the threats with a deadline in the first one were largely not carried out)?


Was there any proof that the first deadline actually existed? This at least seems to be some open letter.


Are we aware of a timeline for this? E.g. when will people start quitting if the board doesn’t resign?


the original deadline was last Saturday at 5pm, so I would take any deadline that comes out with a grain of salt


So I can't check this at work, but have we seen the document they've all been signing? I'm just curious as to how we're getting this information.




As an aside, that letter contains one very interesting tidbit: the board has consistently refused to go on the record as to why they fired Altman, and that alone is a very large red flag about their conduct since the firing. If they have a valid reason, they should simply state it and move on. If there is no valid reason, it's clear why they can't state it. And if there is a valid reason that they are not comfortable sharing, then they are idiots, because all of the events so far trump any such concern.

The other stand-out is the bit about destroying the company being in line with the mission: that's the biggest nonsense I've ever heard, and I have a hard time thinking of a scenario where that would be a justified response, let alone one that starts with firing the CEO.


I wonder if there's an outcome where Microsoft just _buys_ the for-profit LLC and gives OpenAI an endowment that will last them for 100 years if they just want to do academic research.


Why bother? They seem to be getting it all mostly for "free" at this point. Yeah, they are issuing shares in a non-MSFT sub-entity to create an on-paper replacement for people's torched equity, but even that isn't going to be nearly as expensive or dilutive as an outright acquisition at this point.


There are likely 100 companies worldwide that are ready, presentation decks already created, to absorb OpenAI in an instant; the board knows it still has some leverage.


To whoever is CEO of OpenAI tomorrow morning: I'll swing by there if you're looking for people.


Many of those employees will be disappointed. MS says it will extend a contract to each one, but how many of those 700 are really needed when MS already has a lot of researchers in that field? Maybe the top 20% will have an assured contract, but it's doubtful the rest will pass the six-month mark.


Microsoft gutting OpenAI's workforce would really make no sense. All it would do is slow down their work and slow down the value and return on investment for Microsoft.

Even if every single OpenAI employee demands $1m/yr (which would be absurd, but let's assume), that would still be less than $1bn/yr total, which is significantly less than the $13bn that MSFT has already invested in OpenAI.
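For concreteness, here's a rough back-of-the-envelope sketch of that comparison (the ~770 headcount and the $13bn investment are figures from this thread; the $1m/yr salary is the worst-case assumption above):

    headcount = 770                  # approximate OpenAI staff count cited in the thread
    salary_per_year = 1_000_000      # assumed worst case: $1m/yr for every employee
    ms_investment = 13_000_000_000   # Microsoft's reported investment in OpenAI

    annual_payroll = headcount * salary_per_year
    print(annual_payroll)                   # 770000000 -> under $1bn/yr
    print(annual_payroll / ms_investment)   # ~0.059 -> a year of payroll is ~6% of the investment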

It would probably be one of the worst imaginable cases of "jumping over dollars to chase pennies".


Microsoft has already done major layoffs of their own employees over the last year. Why wouldn’t they lay off OpenAI employees?


You're basically asking "why would a company lay off employees in one business unit and not another?"

To which the answer is completely obvious: it depends on how they view the ROI potential of that business unit.


imagine being in the last round of interviews for joining OpenAI…


imagine receiving an offer, quitting your current job, and waiting to start the new position.


At this torrid pace of news speculation --> by the end of the week, Altman is back with OpenAI, GPT-5 is released (qualifying as AGI), and the MSFT contract is over.


What does this even mean? What does signing this letter mean? Quit if you don't agree and vote with your feet.


It means "if we can't have it, you can't either". It's a powerful message.


Their app was timing out like crazy earlier this morning, and now appears to be down. Anyone else notice similar? Not surprising I guess, but what a Monday to be alive.


Can't OpenAI just use ChatGPT instead of workers? I keep hearing AI is intelligent and can take over the world, replace workers, and cure disease. Why doesn't the board buy a subscription and make it work for them?


Because AI isn't here to take away wealth and control from the elite. It's here to take it away from the general population.


Correct, which is why Microsoft must have OpenAI's models at all costs, even if that means working with people such as Altman. Notice that Microsoft is not working with the people that actually made ChatGPT; they are working with those on their payroll.



