Hacker News | new | past | comments | ask | show | jobs | submit | login

This actually seems like a decent compromise. Sam and Greg can retain velocity on the product side without having to spin up a whole new operation in direct competition with their old levers of power, and Ilya + co can remain in possession of the keys to the kingdom.


Maybe I'm reading too much into it, but to me it is framed as if they won't be working on GPT-based products, but on research.

The whole thing reads like this to me: "In hindsight, we should've done more due diligence before developing a hard dependency on an organization and its product. We are aware that this was a mistake. To combat this, we will do damage control and continue to work with OpenAI, while developing our in-house solution and ditching this hard dependency. Sam & Co. will reproduce this and it will be fully under our control. So rest assured dear investors."


How do you conduct research with sales people? Even if they manage to bring in researchers from OpenAI, the only gain here is Microsoft getting some of the researchers behind the products and/or product developers.


Ah yes, Greg Brockman, former CTO of Stripe (amongst other things)... sales person.


Well, the same way a man with drive, discipline and money but very little in the way of technical expertise can build a company.

Sometimes you need someone who can drive a project and recruit the right people for the project. That person does not always need to be a subject matter expert.


Who are these "sales people" you're referring to? Surely not Greg Brockman, one of the most talented engineers in the world.


> Greg Brockman, one of the most talented engineers in the world.

Can you help me understand how you came to the conclusion?


People who worked with him at OpenAI and Stripe.


He has technical skill, you don't need to oversell him. He's not Ilya.


Except they only had AI model velocity, not product velocity. The user-side implementation of ChatGPT is actually well below what you would expect given their AI superiority. So the parts that Sam & Greg should be responsible for are actually not great.


Sam and Greg were responsible for everything including building the company, deciding on strategy, raising funding, hiring most of the team, coordinating the research, building the partnership with Microsoft and acquiring the huge array of enterprise customers.

To act like they were just responsible for the "UI parts" is ridiculous.


I'm the first to defend CEOs, and it's not usually a popular position to be in, believe me. But in this case, they ran an experiment and it blew up based on their model's superiority alone.

Product-wise, however, it's looking like good enough AI is being commoditized at the pace of weeks and days. They will be forced to compete on user experience and distribution vs the likes of Meta. So far OpenAI only managed to deliver additions that sound good on the surface but prove not to be sticky when the dust settles.

They have also been very dishonest. I remember Sam Altman saying he was surprised no one built something like ChatGPT before them. Well... people tried, but 3rd parties were always playing catch-up because the APIs were waitlisted, censored, and nerfed.


a) Meta is not competing with OpenAI, nor does it have any plans to.

b) AI is only being commoditised at the low end, for models that can be trained by ordinary people. At the high end, only companies like Microsoft, Google, etc. can compete. And Sam was brilliant enough to lock in Microsoft early.

c) What was stopping 3rd parties from building a ChatGPT was the out-of-reach training costs, not access to APIs, which didn't even exist at the time.


You're wrong about A & C but B is more nuanced.

a) Meta is training and releasing cutting-edge LLMs. When they manage to get the costs down, everyone and their grandma is going to have Meta's AI on their phone, whether through Facebook, Instagram, or WhatsApp.

b) Commoditization is actually happening mostly because companies (not individuals) are training the models. But that's still enough for commoditization to occur over time, even for higher-end models. And if we get into superintelligence territory, none of this matters anyway; the world will be a very different place.

c) APIs for GPT were first teased as early as 2020, with broader access in 2021. They got implemented into 3rd-party products, but the developer experience of getting access was quite hostile early on. Chat-style APIs only became available after they were featured in ChatGPT. So Sam feigning surprise that others didn't create something like it sooner with their APIs is not honest.


It's typical HN/engineer brain to discount the CEO and other "non-technical" staff as leeches.


If I recall correctly, Mira Murati was actually the person responsible for productizing GPT into a Chatbot. Prior to that, OpenAI's plan was just to build models and sell API access until they reach AGI.

I know there's a lot of talk about Ilya, but if Sam poaches Mira (which seems likely at this point), I think OpenAI will struggle to build things people actually want, and will go back to being an R&D lab.


This is kind of true. I think for programming even CodeLlama or GPT-3.5 is more than enough, and GPT-4 is very nice, but what's missing is a good developer experience, and copy-pasting into the chat window is not that.


Just curious, what do you think is bad about the user-side experience of ChatGPT? It seems pretty slick to me and I use it most days.


Not being able to define instructions per “chat” window (or having some sort of a profile) is something I find extremely annoying.


That's exactly what the recently released GPT Builder does for you!
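For API users, something equivalent already exists: each conversation can carry its own system message. A minimal sketch of the idea (the `PROFILES` dict and `build_chat` helper are made up for illustration; only the `role`/`content` message shape reflects the actual chat completions format):

```python
# Sketch: per-conversation "custom instructions" via the chat messages format.
# PROFILES and build_chat are hypothetical names; the role/content dicts
# follow the shape the OpenAI chat completions API expects for `messages`.

PROFILES = {
    "reviewer": "You are a terse code reviewer. Point out bugs only.",
    "explainer": "Explain answers step by step for a beginner.",
}

def build_chat(profile: str, user_msg: str) -> list:
    """Prepend the per-chat instructions as a system message."""
    return [
        {"role": "system", "content": PROFILES[profile]},
        {"role": "user", "content": user_msg},
    ]

msgs = build_chat("reviewer", "Review this diff: ...")
# msgs would then be passed as the `messages` argument of a chat completion call.
```

The ChatGPT UI historically only offered one global "custom instructions" setting, which is presumably what the parent is complaining about.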


I wonder if they'll get bored working on Copilot in PowerPoint


Ilya and co are going to get orphaned; there's no point to the talent they have if they intend to slow things down, so it's not like they'll remain competitive. The capacity that MSFT was going to sell to OpenAI will go to the internal team.


Maybe they want it that way and want to move on from all the LLM hype that was distracting them from their main charter of pushing the boundaries of AI research? If so, then they succeeded handsomely.


"Don't get distracted by the research which actually produces useful things"



