
Seems like I'm in the minority here, but for me this looks like a win-win-win situation for now.

1. OpenAI just got bumped up to the top of my list of places to apply to (if I had the skills of a scientist, that is; I'm only at engineer level). I want AGI to happen and can totally understand that the actual scientists don't really care about money or becoming a big company at all; that is more of a burden than anything else for research speed. It doesn't matter that the "company OpenAI" implodes here as long as they can pay their scientists and have access to compute, which they still do.

2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like there's no tomorrow and without restraint. And while there are lots of bad things to say about Microsoft, reliable operations and support are something I trust them with more than most others, so if the OAI API is simply moved as-is to some MSFT infrastructure, that's a _good_ thing in my book.

3. Sam and his buddies are taken care of, because ultimately they are in it for the money, whereas the true researchers can stay at OpenAI. Working for Sam is now straightforward commercialization without the "open" shenanigans, and working for OpenAI can become the idealistic thing again that also attracts people.

4. Satya Nadella is being celebrated, and MSFT shareholder value will eventually rise even further. They don't actually have any interest in "smashing OAI"; the new setup streamlines everything once the initial operational hurdles (including staffing) are solved.

5. We outsiders end up with an OpenAI research effort focused purely on AGI (<3) and a product team selling all the steps along the way to us, but with more professionalism in operations (<3).

6. I am really waiting for Tim Cook to announce anything on this topic at all. Never underestimate Apple, especially when there is radio silence and the first movers in a field have already fired their shots.



That is just a matter of perspective. It's clearly a win-win if you're on team Sam. But if you're on team Ilya, this is the doomsday scenario: with commercialisation and capital gains for a publicly traded company being the main driving force behind the latest state of the art in AI, this is exactly what OpenAI was founded to prevent in the first place. Yes, we may see newer, better things faster and with better support if the core team moves to Microsoft. But it will not benefit humanity as a whole. Even with their large investment, Microsoft's contract with OpenAI specifically excluded anything resembling true AGI, with OpenAI determining when that point is reached. Now, whatever breakthrough from the last few weeks Sam was referring to, I doubt it's going to get us to AGI immediately. But whenever it happens, Microsoft now has a real chance to grab it for themselves and no one else.


Thinking this is clearly a big win for MSFT is like thinking it's easy to catch lightning in a bottle twice.

There's been a lot of uncertainty created.

It's interesting that others see so much "win" certainty.


From Microsoft's perspective, they have actually lowered uncertainty, especially if that letter from 500 OpenAI employees is to be believed: they'll all end up at Microsoft anyway. If that really happens, OpenAI will be a shell of itself while Microsoft drives everything.


OpenAI already has the best models and traction.

So MSFT still needs to compete with OpenAI - which will likely have an extremely adversarial relationship with MSFT if MSFT poaches nearly everyone.

What if OpenAI decides to partner with Anthropic and Google?

Doesn't seem like a win for MSFT at all.


> What if OpenAI decides to partner with Anthropic and Google?

Then they would be on roughly equal footing with Microsoft, since they'd have an abundance of engineers and a cloud partner. More or less what they just threw away, on a smaller scale and with less certain investors.

This is quite literally the best attainable outcome, at least from Microsoft's point of view. The uncertainty came from the board's boneheaded (and unrepresentative) choice to kick Sam out. Now the majority of engineers on both sides are calling foul on OpenAI and asking for their entire board to resign. Relative to the administrative hellfire that OpenAI now has to weather, Microsoft just pulled off the fastest merger of their career.


OAI will still modulate the pace of actual model development, though.


Little pet peeve of mine.

Engineers aren’t a lower level than scientists, it’s just a different career path.

Scientists generate lots of ideas in controlled environments and engineers work to make those ideas work in the wild real world.

Both are difficult and important in their own right.


> Engineers aren’t a lower level than scientists, it’s just a different career path.

I assume GP is talking in the context of OpenAI/general AI research, where you need a PhD to apply for the research scientist positions and an MS/Bachelor's to apply for the research engineer positions, afaik.


They’re still different careers, not “levels” or whatever.

A PhD scientist may not be a good fit for an engineering job. Their degree doesn't matter.

A PhD-having engineer might not be a good fit for a research job either… because it's a different job.


Well, I am an engineer, but I have no problem buying that in the case of frontier tech like AI, where things are largely algorithmically exploratory, researchers with PhDs will be considered 'higher' than regular software devs. I have seen similar things happen in chip startups in the olden days, where the relative importance of a professional is decided by the nature of the problem being solved. But sure, to acknowledge your point, it's just a different job, though the PhD may be needed more at this stage of the business. One way to gauge relative importance: if the budget were to go down 20% temporarily for a few quarters, which jobs would suffer the most cuts with the least impact to the business plan?


Researchers are paid 2x what engineers are paid at OAI. Even if it's not the same job, there's still one that is "higher level" than the other.


In terms of pay at OAI, sure.

But being an engineer isn’t just a lesser form of being a researcher.

It’s not a “level” in that sense. Like OAI isn’t going to fire an engineer and replace them with a researcher.


Engineers tend to earn a lot more.


> 3. Sam and his buddies are taken care of because they are in for the money ultimately, whereas the true researchers can stay at OpenAI.

This one's not right - Altman famously had no equity in OpenAI. When asked by Congress he said he makes enough to pay for health insurance. It's pretty clear Sam wants to advance the state of AI quickly and is using commercialization as a tool to do that.

Otherwise I generally agree with you (except for maybe #2 - they had the right to commercialize GPTs anyway as part of the prior funding).


Someone suggested earlier that he probably had some form of profit sharing pass-through, as has become popular in some circles.


I think it makes more sense to trust the spirit of what he said under oath to Congress (think of how bad it would look for him/OpenAI if he said he had no equity and only made enough for health insurance but was actually getting profit sharing) over some guy suggesting something on the internet with no evidence.


Sam Altman is a businessman through and through, based on his entire history. Chances are he has found an alternative means to profit from OpenAI; he wouldn't do this as charity. It's like how many CEOs will say "I will cut my salary," but never "I will cut my stock or bonuses," which can be worth a lot more than their salary.

Either way, based on many CEOs' track records, healthy skepticism is warranted; the majority of them find a way to profit at some point or another.


I dunno, the guy has basically infinite money (and the ability to fundraise even more). I don't find it tough to imagine that he gets far more than monetary value from being the CEO of OpenAI.

He talked recently about how he's been able to watch these huge leaps in human progress and what a privilege that is. I believe that - don't you think it would be insane and amazing to get to see everything OpenAI is doing from the inside? If you already have so much money that the incremental value of the next dollar you earn is effectively zero, is it unreasonable to think that a seat at the table in one of the most important endeavors in the history of our species is worth more than any amount of money you could earn?

And then on top of that, even if you take a cynical view of things, he's put himself in a position where he can see at least months ahead of where basically all of technology is going to go. You don't actually have to be a shareholder to derive an enormous amount of value from that. Less cynically, it puts you in a position to steer the world toward what you feel is best.


I think that would be consistent with his testimony. Profit sharing is not a salary and it is not equity. I don’t believe he ever claimed to have zero stake in future compensation.


> reliable operations and support is something I trust them more than most others

With a poor security track record [0], miserable support for Office 365 products, and a lack of transparency on issues in general, I doubt this is something to look forward to with Microsoft.

[0] https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_c...


> 2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like no tomorrow and without restraint. And while there are lots of bad things to say about microsoft, reliable operations and support is something I trust them more than most others, so if the OAI API simply is moved as-is to some MSFT infrastructure thats a _good_ thing in my book.

OpenAI already runs all its infrastructure on Azure.


I don't think one of the biggest tech giants being in control of the "best" AI company out there is beneficial to customers...


How does this separation help the scientists at OpenAI if there is no money to fund the research? At the end of the day, you need funding to conduct research, and I do not see any investors willing to put up large sums of money just to make researchers happy.


I'm with you on this. Also, this hopefully brings the "Open"AI puns to an end. And now there are several fun ways to read "Microsoft owns OpenAI". :)

If OpenAI gets back to actually publishing papers to everyone's benefit, that will be a huge win for humanity!


>whereas the true researchers can stay at OpenAI

The true researchers will go to whoever pays them the most. If OpenAI loses funding, they will go to Microsoft with Altman or back to Google.



