OpenAI's "Own Goal" (garymarcus.substack.com)
98 points by RafelMri 10 months ago | 66 comments



Given that Sam Altman was involved in OpenAI, I'm not sure why people are surprised that he has an interest in making money, becoming powerful, and getting attention. He's a one-trick pony; like many entrepreneurs in Silicon Valley, his interests and decision-making patterns are really predictable.

The same is true for the people working at OpenAI. Everybody wants to make their million-dollar comp plus a potentially huge cash-out on their options. But is this worrisome? I don't think it is; the competition is catching up, if it hasn't already.

Look at the recent Claude 3 release (amazing!), Gemini, and the open models, all of which are now the closest rivals to GPT-4.

Unless OpenAI ships AGI with GPT-5, I don't really care what happens to OpenAI. It is still very relevant, but we know by now that nobody has built a moat in AI.

OpenAI releases a new model, and six months later others catch up with it. Unless they gain a huge advantage, I don't care if they are all tied up with Microsoft and want to make money. Nobody wins, nobody loses anything.


I think it is really funny that Elon and Sam are at each other's throats. Those two truly deserve each other.


It’s not about Elon Musk v. Sam Altman where the welfare of the public is concerned. The SEC is involved/investigating, Microsoft is diversifying, and the heavily redacted “discovery pre-empt” is a weird move.

What the public should care about is what comes to light in discovery: it doesn’t look like it’s going to be a little bad.


If it leads to a hype bubble bursting, and some long overdue regulation around AI, I am all for it.

And if it leads to more scrutiny around non-profit shenanigans, all the better. The cast in this story is still funny, though.


> long overdue regulation around AI

May I ask what regulation you think is long overdue? Especially given that you believe we're in a hype bubble...


Long was hyperbole; let's settle for overdue?

But then, there still isn't much around crypto, so I don't hold my breath on that one.


Can’t speak for @hef, but I’ve got a sketch of a sketch of a set of actionable policy prescriptions at sibling.


I think it’s perfectly fine to bring a tub of popcorn for what’s going to be a hell of a show: one of the few things I can tolerate about Elon these days is a little macabre humor to go with my kleptocracy. It’s all very Bond villain: 20% more dark lulz in every bag.

But we need to be really careful that this doesn’t become “Elon’s been cozying up to the right for a bit now, conversation over”: i.e. precisely the (stupidly executed) goal of the original announcement / “disclosure”.

This needs to be about “available weight” models: things that are flatly impossible without vacuuming up the commons have to be substantially, if not completely, licensed to the public. Some laws that give the investors in the compute and salaries a head start, a chance to exploit the innovation long enough to incentivize the investment? We can dicker on that.

This needs to be about “operator aligned” models: within the constraints of the law as interpreted by the judiciary, these things need to do what the operator tells them to if capable of it.

This needs to be about “responsible disclosure of runaway risk”: if a system begins exhibiting signs of self-improvement, it needs to be disclosed to competent authority. I could live with sealing that information until the executive authorities took appropriate action.


> if a system begins exhibiting signs of self-improvement

This is not possible under current models (no more than a linear model can spontaneously change its own parameters) and would require a very specific, dedicated architecture to enable it. It’s not something we need to monitor for; it’s something that will be very deliberately planned, and we can regulate it at that point.


I personally also find the idea far-fetched, but it seems to come up a lot, so I wanted to address it as one of the many issues where existing precedent and institutions lack a clear mandate around that particular case.


It's like watching two mangy dogs brawl.


Gavin Belson and Bavin Gelson?


If you believe benchmarks, then competitors caught up with GPT-4 months ago, but this is certainly not true for my use cases.

In particular, I find GPT-4 hallucinates less than Gemini and much less than Claude 3.


Are you using Claude 3 with low temperature and low top-p? That's recommended for code generation.
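
For anyone who wants to try it, here's a minimal sketch using the Anthropic Python SDK. The model id and parameter values below are illustrative assumptions, not official recommendations; lowering temperature and top_p just narrows sampling to higher-probability tokens, which tends to make generated code more consistent:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-opus-20240229",  # illustrative Claude 3 model id
        max_tokens=1024,
        temperature=0.2,  # low temperature: less random sampling
        top_p=0.9,        # restrict sampling to high-probability tokens
        messages=[{"role": "user",
                   "content": "Write a C function that reverses a string in place."}],
    )
    print(response.content[0].text)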


It is worrisome to the extent they manage to shape the regulatory environment to cement their initial advantage.


And we need to take into account that scientists around the world are, or will be, jumping into AI from other areas. From a scientific point of view, a new iteration could happen anywhere.

I recently discovered that the first book I bought about neural networks, back in the 90s, was "Neural Networks" by Simon Haykin [1]. He is from Canada, and I don't think it is just by chance that Hinton is also Canadian. It seems that they created an ecosystem outside the USA.

[1] https://en.wikipedia.org/wiki/Simon_Haykin


Well, after a year I still think I'm better off with GPT-4 than with anything else, in particular with its vision capabilities.


It's horses for courses, because most of the queries I run with the big players seem to produce hallucinations and little else, while the newest smaller startups are some of the best.

For instance, this query gets mostly nonsense in return: "What library written in C parses Python code?"

The best and most factual reply has been from Reka's Edge.


Do you mean DALL-E? Every time I try it, the result looks nothing like what I want, probably due to bad prompts. Do you have resources for me to learn your skills?


No, I presume they mean GPT-4V, which is multi-modal (accepts text and image inputs).


And Sora looks better than any competitor too


OK, but then why on earth did they start OpenAI with its lofty goals? So it was all a (marketing) scam? There would be praise for OpenAI if they hadn't pretended they were doing it not for the money but for humanity, and if they hadn't reversed at high speed at the first whiff of a huge cash pile.


> why on earth did they start OpenAI with its lofty goals

Recruitment strategy. Quote:

"even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes"


Gemini is an absolute catastrophe performance-wise vs. GPT-4 (in the two use cases I've tried).


"One-trick pony" seems like a weird phrase in this context.

On the one hand, if that trick is "make money", sure; I've just been presented with a video of him on the Y Combinator YouTube channel, "How to Succeed with a Startup". But by that standard, I am also a one-trick pony; my trick is making software.

On the other hand, Altman, Musk, and Gates are also all (ostensibly) trying to be philanthropic in various different ways, which have pretty much nothing in common with each other. Altman and Musk are doing entrepreneurial philanthropy, while Gates is doing traditional philanthropy; Musk is being a brash, confident loudmouth about how amazing his favoured solutions are, while Altman and Gates are being more cautious and at least give the appearance of listening; all three are frequently condemned for not really having invented the things they're famous for.

As for the moat thing, I think they probably have one by now from RLHF training data. This isn't to fundamentally disagree about everyone else catching up in 6 months (though I'd put it at 12, but NBD); it's just that, for me, that kind of lag time is a sign of the speed of the Red Queen's race, not a sign of the absence of a moat. Indeed, that catch-up time kinda is the moat in action: a literal moat around a castle doesn't prevent attacks, it only delays them.


Is there a law that states that whatever values an organisation puts in its name are the opposite of that organisation's actual values?

Seems like there should be, but I haven't heard of one. If there isn't, I'm coining "Amarant's Law" in this post.

OpenAI being closed is a decent example of this law. A far better example is any nation with the word "democratic" in its name.



The "ministry of war" renaming itself to "department of defence" is a classic example.


There is a whole legal labyrinth around for-profits being owned by non-profits. Whether or not OpenAI adhered to those rules, I cannot say.


I've heard it referred to as "nominative dissonance".


Semantic Diffusion is what this author uses: https://martinfowler.com/bliki/SemanticDiffusion.html (2006)

> Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition.


https://openai.com/our-structure

The ownership of the nonprofit OpenAI, Inc. is private, I think. In any case, generally speaking, the board of directors are fiduciarily bound to execute the company's constitution, and the constitution stipulates how the board is appointed or elected.

Generally speaking non-profits are self-governing in this manner with respect to their mission.


‘United’ States?


and 'United' Kingdom :)


United Airlines?

But The Honest Company is the funniest one:

> Jessica Alba's The Honest Co. has submitted a settlement offer in NY for a second false marketing lawsuit stemming from the brand having sold products labeled as all natural when in fact they were not.

https://ww.fashionnetwork.com/news/Honest-co-proposes-7-35m-...


a) Business names are never supposed to be taken literally. Apple is not a fruit company.

b) First amendment issues likely arise.

c) The word open has multiple definitions. It can just mean we are going to be open about certain aspects of our research. It doesn't mean open source or open models.


I didn't mean law as in constitution, but rather law as in Betteridge's or Moore's or Murphy's.

Also,"Apple" isn't really a value, though I agree they are yummy!


Basically, every communist country that puts "Democratic", "People's" and "Republic" in its name.


And a companion law: the more they string together, the more of an antithesis it is.


> But seen from another perspective, releasing that email was a massive fuckup—because it shows decisively that OpenAI knew for years that they had no intention of being as open as their name or public statements suggested.

It would have been nice to have the top company share the details of their work, but I don't quite get the community outrage over this. There are companies formed around volunteer-driven open-source projects that eventually move towards a closed development model. This, however, is not a case like that; they never took volunteer work. I don't think they have any real external obligations beyond their investors.


The perceived hypocrisy stems from the contrast between OpenAI's public statements about benefiting humanity and their actions, which reveal a profit-driven focus similar to Microsoft's. Unlike Microsoft, OpenAI lacks transparency about this core motivation: Microsoft is not bullshitting that it does AI for the common good under an NGO.


Many, many companies claim to be doing what they do for the good of all humanity - they would have you believe that any profit made is just a nice side effect.

Examples:

* Aon insurance - At Aon, we exist to shape decisions for the better — to protect and enrich the lives of people around the world.

* Nike - Bring inspiration and innovation to every athlete in the world

* Glencore - Responsibly sourcing the commodities that advance everyday life

I don't really see any difference with OpenAI, beyond their name including some fluffy sounding loveliness.


None of these examples says "we are not in it for the money or profit" while operating under an NGO.

If you read the article, OpenAI says they are not in it for the money and then proceeds to be in it for the money. Also, Ilya says not to change anything because it will hurt recruitment.


Well, their being an NGO is a big difference.


> OpenAI's public statements about benefiting humanity and their actions, which reveal a profit-driven focus similar to Microsoft's.

Their reasoning is that not only are those things not mutually exclusive, it would have been impossible to achieve that mission without the capped-profit pivot. Which is pretty obviously true (and by Elon Musk's admission, "0% chance"). I don't think anyone could have had the foresight in 2015 that Chinchilla scaling laws would necessitate multiple billions of dollars. Like, everyone knew scaling was important, but not to the point that you could easily divine that a pure nonprofit would guarantee failure.

I also don't get the exaggerated anger at "hypocrisy" (or what some would call "changing one's mind"). If they renamed themselves XYZ AI, would that make them any better or worse than xAI or Anthropic on whatever measure you care about? They are yet another closed AI company. OK? If being closed AI is evil, then one should be equally angry at their equivalently closed competitors. If being closed AI is not evil, then one should not be so angry at the alleged hypocrisy. My point is, hypocrites doing evil things are about as bad as non-hypocrites doing the equivalent evil thing, which means the disparate emotional reactions we're seeing are unreasonable.


The backlash that OpenAI gets is the same as the backlash any entity gets upon becoming large and successful.

I cannot think of a single example of any company or entity that has been lauded after experiencing hypergrowth, regardless of the value these entities provide to humanity, which becomes normalized and thus taken for granted in the public consciousness.

I think it is in human nature to simply hate on anything that is more successful than ourselves. Perhaps it is jealousy, insecurity, pessimism, or something else. People will try to rationalize their hatred of <entity> but the trend is clear. There is literally nothing you can do as such an entity to consistently gain and keep the mob's goodwill.

It does not matter how many free services you provide, how much further you have pushed humanity forward, how many billions have come to rely on or even enjoy what you have made - all these are taken for granted (perhaps a testament to your success), and all you are left with is dissatisfaction/hatred from the public.

This is just a sad fact of human nature. See: tall poppy syndrome, negativity bias, expectation inflation, and the general impossibility of consistently maintaining widescale goodwill.


> all you are left with is dissatisfaction/hatred from the public.

and billions of dollars, and the hero-worship of people that will excuse literally anything you did to get there...


Yes, of course people will criticize you for making money (even when you are hemorrhaging billions) and for every choice you make (because it's trivially easy to construe literally any decision as evil).

Again, maintaining goodwill at scale for long periods of time is simply not possible. People will end up taking what you contribute for granted and find something to get angry about.


Maintaining goodwill with some people is pretty easy. All you have to do is be successful, and they'll insist that anyone with the temerity to suggest they might have been less dishonest is only doing so out of envy.

After all, nobody ever criticises unsuccessful companies.


> I cannot think of a single example of any company or entity that has been lauded after experiencing hypergrowth

Is it perhaps due to the fact that large-scale success is usually enabled by large-scale exploitation? America is a good example of an entity that was lauded after hypergrowth, but only after denouncing the autocracy that made it.

Perhaps more companies would be lauded if they denounced the horrible practices that put them on the map. At least, it would make it easier to rationalize their behavior in the face of their progress.

> It does not matter how many free services you provide, how much further you have pushed humanity forward, how many billions have come to rely on or even enjoy what you have made

It quite literally doesn't. If you do any of these things while disrespecting the hierarchical order or offending human judgement, you will be reprimanded. If humanity stood for nothing, we'd all fall for everything.

For example, war tends to increase the distribution of free services, pushes humanity forward, and makes even more people reliant on you. The continued manufacture of war is an enemy to the sanctity of human life and the preservation of order. Every single government primarily exists to regulate the waging of warfare and ensure the stability of its constituent population.


> If humanity stood for nothing, we'd all fall for everything.

But not everything that causes moral outrage is worthy. There are many taboos I support. The taboo against war is one of them. But witch hunts? Superstition? Being morally outraged at attempts to remove slavery? Humans are social creatures that are easily manipulated, for good or bad, by the amoral and conveniently self-serving narratives that run roughshod through our culture. These narratives and our responses to them aren't something to worship at an altar. They're something to side-eye with skepticism. Cherish the ones that are useful, and reject the ones that aren't.


The outrage stems from years of:

- OpenAI taking the cover of nonprofit status, when that was privately never their intent;

- OpenAI publicly claiming they were going to open-source everything, when that was privately never their intent;

- OpenAI publicly claiming the moral high ground on responsible AI, when that is clearly not their priority.

Sure, I think Musk's lawsuit is a stunt, and sure, legally OpenAI may have done nothing wrong, but it shows they are, to put it mildly, disingenuous at best.


Jürgen Schmidhuber is complaining that he already invented being an AI troll years ago and now Gary Marcus is brazenly using his techniques without giving him credit.


Never forget. Ultimately, the only goal of a company of this size is to make money. That’s an obligation they have towards their shareholders. Other goals are marketing or, in this case, open-washing.


But... they say they are an NGO.


I wonder what you mean by NGO? Most businesses are NGOs. In this case, (some parts of) OpenAI is also a non-profit; but non-profit doesn't mean non-revenue.


As the sibling comment by wsgeorge hinted, it is regional, but really it's the reverse: the terminology "non-profit" is regional to the US. Basically everywhere else in the world I've heard of refers to non-profit entities as "NGO"s. Admittedly, all businesses are "organizations" too, but since they tend not to call themselves that explicitly while other government entities do (e.g. the World Health Organization), it seems reasonably clear as a distinction.


Probably a regional thing? In some places, NGOs usually just mean non-profits (though technically that shouldn't be the case)


IMO the question for me is how close OpenAI is to controlling its own destiny. They raised $11.3B over 6 rounds at a current valuation of ~$80B.

It could go into the trillions if they were the first to invent AGI and ensured no one else got it for a few years.

To me OpenAI is really an extension of MSFT.

Big AI is really now a game between META/FAIR, GOOG/DeepMind, MSFT/OpenAI, and to some degree AMZN/Anthropic.

Those four have the capital, compute, data, and ML researchers.

Sure, Nvidia is hot right now, but I kinda feel they don't have a long-term moat. Apple dropped to #2. My prediction is that Apple will slowly drop to #3 unless they innovate.


"What is news is an email to Musk that they foolishly made public yesterday, in their counterattack."

The folks at OpenAI are not fools. According to these SBF-like geniuses with cash-stuffed pockets, everyone else on the planet, except computer nerds, is a fool. Including those silly lawyers advising them to keep quiet. Don't listen to them. They only know what they know. We know computers! We know everything!

Yeah, well the lawyers are laughing all the way to the bank. Because the more these nerds want to cat fight with each other, the more money the lawyers can make.

This stuff is only interesting to normal people as long as there are millions, billions, or trillions of dollars at stake. Otherwise it just looks idiotic and pathetic.


Here’s the thing.

OpenAI decided they needed to be closed to be attractive for funding to get the money they needed for AGI. That aim is clear in the emails.

To them, all that matters is reaching AGI, so holding themselves to the standards of their worst critics is irrelevant. It’s all about making it to the finish line in one piece.


I honestly don't get the outrage THAT MUCH...

"we want to be as open as possible but we fear there are dangers so we'll just close it at some point but make it as benefitting to everybody as possibe"

seems to really be both what they agreed on and what they are currently doing, especially with GPTs and such.

Of course open source would have been better, but even if you disagree with their assessment of the danger, you can respect the fact that they do indeed try to make it as cheap / useful as possible.

There's always the question of who is in control of a technology when its makers themselves claim it can be dangerous, but what institution can pretend to be completely democratic / not prone to abuse by bad actors? What model could have been better?

I also don't get the profit angle... they need cash to train this stuff, so it's normal to monetize.


They are not (even remotely) making it “as cheap / useful as possible”. The cheapest possible is free; they could just let everyone download the models, but they don't. The most useful would be open source so that people can work on it, collaborate, innovate, etc. They're not doing any of that, just to make money.


It does feel like OpenAI is the perfect encapsulation of the silicon valley playbook. Step 1: Claim some ridiculous altruistic vision. Step 2: Identify that murdering puppies is really profitable. Step 3: Set up a global scale mechanised puppy murdering organisation whilst continuing to talk about your incredible altruistic vision.


#OpenLAIS


I have no problem with whatever OpenAI does with its own business. If they want to convert to a closed-source for-profit, I'm fine with that. I never expected them to remain fully open and non-profit, given 1) they believe AGI/ASI is an x-risk and that sharing their tech could lead to proliferation of x-risk, and 2) they won't be able to hire or retain top scientists and engineers if there's no asymmetric payoff to working there.

What I do have a problem with is them lobbying for laws restricting what everyone else does, like making open source LLMs illegal and whatnot. They need to cut that out. It's not even a certainty that LLMs are a pathway to AGI/ASI anyway, since they're plagued with problems like model collapse, hallucinations, and still appear to be sophisticated stochastic parrots incapable of reason, self-awareness, and wisdom.



