
I'm guessing this is the end of OpenAI. People aren't going to want to work at OpenAI anymore due to the value destruction that just occurred. It's going to be hard for them to raise money now because of the bad rep they have. It's going to be hard for them to hire top talent. You have two leaders, top engineers, and researchers leaving the company. Google and Facebook come in and grab up any top talent that's still there, because they can offer them money and equity.

The company will probably still exist, but it isn't going to be worth what it is today.



There are engineers who care about the kinds of values OpenAI was founded on, values which have arguably just been reaffirmed by this latest drama. OpenAI's commercialization was only ever a means to have sufficient compute to chase AGI… If you watch interviews with Ilya you'll see how reluctant he is on principle to yield to profit incentives, but he understands it is a necessary evil to get all the GPUs. There are engineers, and increasingly non-VC money with larger stakes in good outcomes for humanity, who I feel will back a 'purer' OpenAI.


Do they really believe the path to AGI is through LLMs though? In that case they might be in for a very rude awakening.


Imo Sam Altman and team believed more in the LLM because it took the world by storm and they just couldn't wait to milk it. MSFT has also licensed these types of services from OpenAI on Azure. The folks really motivated by values at OpenAI probably want to move on from the LLM hype and continue their research, pushing the boundaries of AI further.


They don't; they know it very well. But people have been buying into this AGI bullshit (pardon the language) for a while, and they wanted a piece of the cake.


I'm sure they care. The question is how they will stay liquid if there is a similar or better offer from another party. The kind of interface they use makes it trivial to move from one supplier to another if the engine is better.
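To illustrate the parent's point, here is a minimal sketch assuming the openai Python client and a competing provider that speaks the same wire format; the base URL, API keys, and model name below are placeholders, not real endpoints:

    # Minimal sketch: swapping suppliers behind an OpenAI-style API.
    from openai import OpenAI

    # Supplier A (OpenAI itself).
    client = OpenAI(api_key="sk-...")

    # Supplier B: the same client pointed at a different host
    # (hypothetical provider URL for illustration).
    client = OpenAI(
        api_key="other-key",
        base_url="https://api.other-provider.example/v1",
    )

    # The calling code is identical either way.
    resp = client.chat.completions.create(
        model="some-model",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)

Since only the base URL and credentials change, there is essentially no switching cost for downstream applications.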


OpenAI existed for years before ChatGPT. Granted, at a much smaller size and with hundreds fewer employees.

I imagine that the board wants to go back to that or something like it.


The past is not on the menu for any of us, OpenAI included. They can't undo what has been done without wiping out the company in its entirety. Unless they aim to become the Mozilla of AI. Which is a real possibility at this point.


Doesn't seem so from Emmett's tweet which suggests they will continue to pursue commercial interests.


By "for profit" you mean "available to use by people right now"? Well then I hope the "pure" OpenAI is over. I want to be able to use the AI for money, not for these models to be hoarded..


It could be entirely open source and still available hosted for use in exchange for money today though?


OAI is dead.

In the name of safety, the board has gifted OAI to MS. Even Ilya wants to jump ship now that it's sinking (it'll be really interesting to see if Sama even lets him on board the MS money train).

Calling this a win for AI safety is ludicrous. OAI is dead in all but name; MS basically now owns 100% of OAI (the models, the source, and now the team) for pennies on the dollar.


and those values will make them go bankrupt before creating AGI


If I were betting, I would bet on Altman and Microsoft as well, because in the real world evil usually wins, but I'm just really astonished by all this rhetoric here on HN. Like, firing Altman is a horrible treason, and people wouldn't want to work with those traitors anymore. Altman is the guy responsible for making OpenAI "closed", which has been a constant source of complaints ever since it happened. When it all started, the whole vibe sure wasn't "the outsourced Microsoft ML-research subsidiary that somehow maintains non-profit status", which is basically what happened. I'm not going to argue whether it's good or bad; it is entirely possible that this is the only realistic way to do business, and that Sutskever, Murati et al are just delusional in trying to approach this as a scientific research project. Honestly, I sort of do believe that myself. But since when is Altman the good guy in this story?


Murati was interim CEO for 2 days.

She's going with Altman in all likelihood.

Ilya is the one changing tack.


Another way of framing this would be that Altman was one of the only people there with their head far enough out of the clouds to realize they had to adapt if they were going to have the resources needed to survive. In the real world you need more than a few Tony Starks in a cave to maintain a long-term lead, even if the initial output is exceptional with nothing but what's in the cave.


I, for one, never gave a flying shit about OpenAI’s “openness”, which always felt like a gimmick anyway. They gave me a tool that has cut my work down 20-40% across the board while making me able to push out more results. I care about that.

Also, AGI will never happen IMO. I'm not credentialed, have no real proof to back it up, and won't argue one way or the other with anyone, but deep down I just don't believe AGI is even physically possible. I'll be shocked if it is, but until then I'm going to view any company with that set as its goal as a joke.

I don’t see a single thing wrong with Altman either, primarily because I never bought into the whole “open” story anyway.

And no, this isn't sarcasm. I just think a lot of HN folks view "open" companies and "AGI that benefits humanity" through rose-tinted glasses. It's all an illusion, and if we ever somehow manage to create AGI it WILL be the end of us as a species. There's no doubt.


On the contrary - I will now be actively looking for opportunities to join OpenAI, while I wasn't particularly interested beforehand.


What makes you think you’re more competent than the type of people who were interested in joining OpenAI before?

What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?


A bit surprised by this pseudo ad hominem, but just for one data point I have (now ex-)coworkers in the same role as me who've recently moved to OpenAI. I'm not suggesting I'm more competent than them, but I don't think my hiring was based on luck while they got it on merit either.

> What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?

What if it's the opposite? What if sama was basically a Bezos who was in the right place/time but could've realistically been replaced by someone else? What if Ilya is irreplaceable? Not entirely sure what the point of this is - if you want to convey that your conjecture is far more likely than the opposite, then make a convincing argument for why that's the case.


The Microsoft team is going to churn out ChatGPT versions, which are the current valuation-makers. OpenAI is going to chase what comes after ChatGPT; pushing yet another ChatGPT iteration is probably one of the reasons the researchers got fed up.

In my opinion, the best outcome for everyone involved.


I think the reality is the opposite. Sam has said that he doesn't think the Transformer/GPT architecture will be enough for AGI, whereas Ilya claims it might be.


It seems reasonable to me that people who are motivated by the mission and working with or learning from the existing team will still want to work there.


I didn't believe that OpenAI was being honest in their mission statement before - I thought it was just the typical bay area "we want to make the world a better place" bs.

This entire situation changed my mind radically and now I put the non-profit part in my personal top 3 dream jobs :)


Please disregard my last comment, it was a premature opinion on a situation that is still developing and very unclear from the outside


I wouldn't be so sure. There are a whole lot of people that want absolutely nothing to do with Microsoft.


The flip side perspective is people will love focusing on doing it right, without being rushed to market for moat building and max profit.


Does that not only work long-term with investment?

Unless they get philanthropic backers (maybe?), who else is going to give them the investment needed for resources and employees without wanting a return on that investment within a few years?


They will be OK. Research does not take that many GPUs compared to training huge commercial LLMs and hiring thousands of people to manually train them to be "safe". You'd prefer smaller models but faster iterations.


They're going to have to give up control of the board to get more investment. No investor wants these loose cannons in charge of their investments.


> No investor wants these loose cannons in charge of their investments.

The board just proved it will stand by the company's core values.


If Ilya is there, many will. If Karpathy stays, many more. If Alec Radford stays, then ...


I agree, any potential hire who has the choice between OpenAI and the new team at MSFT will now choose the latter. And a lot of the current team will follow as well. This is probably the end of OpenAI. Can't say I'm too sad; finally a chance to erase that misleading name from history.


Do leading AI researchers at Google/Meta/OpenAI/Anthropic/HuggingFace want to work at Microsoft?


Yes, for most AI researchers the umbrella organization (or university) doesn't matter nearly as much as the specific lab. These people are not going to work at Microsoft, they are going to work at whatever that new org is going to be called, and that org is going to have a pretty high status.



